00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 83 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3261 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.077 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.078 The recommended git tool is: git 00:00:00.078 using credential 00000000-0000-0000-0000-000000000002 00:00:00.084 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.111 Fetching changes from the remote Git repository 00:00:00.113 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.153 Using shallow fetch with depth 1 00:00:00.153 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.153 > git --version # timeout=10 00:00:00.183 > git --version # 'git version 2.39.2' 00:00:00.183 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.206 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.206 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.702 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.712 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.724 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD) 00:00:05.724 > git config core.sparsecheckout # timeout=10 00:00:05.733 > git read-tree -mu HEAD # timeout=10 00:00:05.749 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5 00:00:05.770 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing" 00:00:05.770 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10 00:00:05.861 [Pipeline] Start of Pipeline 00:00:05.872 [Pipeline] library 00:00:05.874 Loading library shm_lib@master 00:00:05.874 Library shm_lib@master is cached. Copying from home. 00:00:05.887 [Pipeline] node 00:00:05.896 Running on VM-host-SM4 in /var/jenkins/workspace/ubuntu22-vg-autotest 00:00:05.898 [Pipeline] { 00:00:05.907 [Pipeline] catchError 00:00:05.908 [Pipeline] { 00:00:05.917 [Pipeline] wrap 00:00:05.925 [Pipeline] { 00:00:05.932 [Pipeline] stage 00:00:05.933 [Pipeline] { (Prologue) 00:00:05.948 [Pipeline] echo 00:00:05.949 Node: VM-host-SM4 00:00:05.953 [Pipeline] cleanWs 00:00:05.960 [WS-CLEANUP] Deleting project workspace... 00:00:05.960 [WS-CLEANUP] Deferred wipeout is used... 
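For reference, the shallow checkout at the top of this log can be reproduced by hand. A minimal sketch of the same fetch-and-detach sequence, with the repository URL and commit taken from the log and the proxy/credential setup omitted:

    # Shallow-fetch only the tip of master, then check out the fetched commit detached.
    git init jbp && cd jbp
    git fetch --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f FETCH_HEAD   # resolved to 4b79378c7834917407ff4d2cff4edf1dcbb13c5f in this run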
00:00:05.965 [WS-CLEANUP] done 00:00:06.116 [Pipeline] setCustomBuildProperty 00:00:06.180 [Pipeline] httpRequest 00:00:06.210 [Pipeline] echo 00:00:06.211 Sorcerer 10.211.164.101 is alive 00:00:06.219 [Pipeline] httpRequest 00:00:06.223 HttpMethod: GET 00:00:06.223 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:06.224 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:06.241 Response Code: HTTP/1.1 200 OK 00:00:06.242 Success: Status code 200 is in the accepted range: 200,404 00:00:06.242 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:11.179 [Pipeline] sh 00:00:11.459 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:11.476 [Pipeline] httpRequest 00:00:11.506 [Pipeline] echo 00:00:11.508 Sorcerer 10.211.164.101 is alive 00:00:11.517 [Pipeline] httpRequest 00:00:11.522 HttpMethod: GET 00:00:11.522 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:11.522 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:11.547 Response Code: HTTP/1.1 200 OK 00:00:11.547 Success: Status code 200 is in the accepted range: 200,404 00:00:11.548 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:20.211 [Pipeline] sh 00:01:20.489 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:01:23.036 [Pipeline] sh 00:01:23.322 + git -C spdk log --oneline -n5 00:01:23.322 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:01:23.322 330a4f94d nvme: check pthread_mutex_destroy() return value 00:01:23.322 7b72c3ced nvme: add nvme_ctrlr_lock 00:01:23.322 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:01:23.322 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:01:23.348 [Pipeline] withCredentials 00:01:23.358 > git --version # timeout=10 00:01:23.372 > git --version # 'git version 2.39.2' 00:01:23.388 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:23.391 [Pipeline] { 00:01:23.401 [Pipeline] retry 00:01:23.404 [Pipeline] { 00:01:23.422 [Pipeline] sh 00:01:23.703 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:23.974 [Pipeline] } 00:01:23.999 [Pipeline] // retry 00:01:24.005 [Pipeline] } 00:01:24.026 [Pipeline] // withCredentials 00:01:24.036 [Pipeline] httpRequest 00:01:24.055 [Pipeline] echo 00:01:24.057 Sorcerer 10.211.164.101 is alive 00:01:24.066 [Pipeline] httpRequest 00:01:24.070 HttpMethod: GET 00:01:24.071 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:24.072 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:24.072 Response Code: HTTP/1.1 200 OK 00:01:24.073 Success: Status code 200 is in the accepted range: 200,404 00:01:24.073 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:29.373 [Pipeline] sh 00:01:29.649 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:31.049 [Pipeline] sh 00:01:31.326 + git -C dpdk log --oneline -n5 00:01:31.326 caf0f5d395 version: 22.11.4 00:01:31.326 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:31.326 dc9c799c7d vhost: fix missing spinlock 
unlock 00:01:31.326 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:31.327 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:31.346 [Pipeline] writeFile 00:01:31.365 [Pipeline] sh 00:01:31.644 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:31.656 [Pipeline] sh 00:01:31.936 + cat autorun-spdk.conf 00:01:31.936 SPDK_TEST_UNITTEST=1 00:01:31.936 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.936 SPDK_TEST_NVME=1 00:01:31.936 SPDK_TEST_BLOCKDEV=1 00:01:31.936 SPDK_RUN_ASAN=1 00:01:31.936 SPDK_RUN_UBSAN=1 00:01:31.936 SPDK_TEST_RAID5=1 00:01:31.936 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:31.936 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:31.936 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.942 RUN_NIGHTLY=1 00:01:31.944 [Pipeline] } 00:01:31.961 [Pipeline] // stage 00:01:31.976 [Pipeline] stage 00:01:31.978 [Pipeline] { (Run VM) 00:01:31.993 [Pipeline] sh 00:01:32.271 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:32.271 + echo 'Start stage prepare_nvme.sh' 00:01:32.271 Start stage prepare_nvme.sh 00:01:32.271 + [[ -n 0 ]] 00:01:32.271 + disk_prefix=ex0 00:01:32.271 + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest ]] 00:01:32.271 + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf ]] 00:01:32.271 + source /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf 00:01:32.271 ++ SPDK_TEST_UNITTEST=1 00:01:32.271 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.271 ++ SPDK_TEST_NVME=1 00:01:32.271 ++ SPDK_TEST_BLOCKDEV=1 00:01:32.271 ++ SPDK_RUN_ASAN=1 00:01:32.271 ++ SPDK_RUN_UBSAN=1 00:01:32.271 ++ SPDK_TEST_RAID5=1 00:01:32.271 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:32.271 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:32.271 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:32.271 ++ RUN_NIGHTLY=1 00:01:32.271 + cd /var/jenkins/workspace/ubuntu22-vg-autotest 00:01:32.271 + nvme_files=() 00:01:32.271 + declare -A nvme_files 00:01:32.271 + backend_dir=/var/lib/libvirt/images/backends 00:01:32.271 + nvme_files['nvme.img']=5G 00:01:32.271 + nvme_files['nvme-cmb.img']=5G 00:01:32.271 + nvme_files['nvme-multi0.img']=4G 00:01:32.271 + nvme_files['nvme-multi1.img']=4G 00:01:32.271 + nvme_files['nvme-multi2.img']=4G 00:01:32.271 + nvme_files['nvme-openstack.img']=8G 00:01:32.271 + nvme_files['nvme-zns.img']=5G 00:01:32.271 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:32.271 + (( SPDK_TEST_FTL == 1 )) 00:01:32.271 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:32.271 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:32.271 + for nvme in "${!nvme_files[@]}" 00:01:32.271 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:32.271 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.271 + for nvme in "${!nvme_files[@]}" 00:01:32.271 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:32.271 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.271 + for nvme in "${!nvme_files[@]}" 00:01:32.271 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:32.528 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:32.528 + for nvme in "${!nvme_files[@]}" 00:01:32.528 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:32.528 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.528 + for nvme in "${!nvme_files[@]}" 00:01:32.528 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:32.528 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.787 + for nvme in "${!nvme_files[@]}" 00:01:32.787 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:32.787 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.787 + for nvme in "${!nvme_files[@]}" 00:01:32.787 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:32.787 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.787 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:33.045 + echo 'End stage prepare_nvme.sh' 00:01:33.045 End stage prepare_nvme.sh 00:01:33.057 [Pipeline] sh 00:01:33.332 + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:33.332 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -H -a -v -f ubuntu2204 00:01:33.332 00:01:33.332 DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant 00:01:33.332 SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk 00:01:33.332 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest 00:01:33.332 HELP=0 00:01:33.333 DRY_RUN=0 00:01:33.333 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img, 00:01:33.333 NVME_DISKS_TYPE=nvme, 00:01:33.333 NVME_AUTO_CREATE=0 00:01:33.333 NVME_DISKS_NAMESPACES=, 00:01:33.333 NVME_CMB=, 00:01:33.333 NVME_PMR=, 00:01:33.333 NVME_ZNS=, 00:01:33.333 NVME_MS=, 00:01:33.333 NVME_FDP=, 00:01:33.333 SPDK_VAGRANT_DISTRO=ubuntu2204 00:01:33.333 SPDK_VAGRANT_VMCPU=10 00:01:33.333 SPDK_VAGRANT_VMRAM=12288 00:01:33.333 SPDK_VAGRANT_PROVIDER=libvirt 00:01:33.333 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:33.333 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:33.333 SPDK_OPENSTACK_NETWORK=0 
00:01:33.333 VAGRANT_PACKAGE_BOX=0 00:01:33.333 VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:33.333 FORCE_DISTRO=true 00:01:33.333 VAGRANT_BOX_VERSION= 00:01:33.333 EXTRA_VAGRANTFILES= 00:01:33.333 NIC_MODEL=e1000 00:01:33.333 00:01:33.333 mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt' 00:01:33.333 /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest 00:01:36.619 Bringing machine 'default' up with 'libvirt' provider... 00:01:36.619 ==> default: Creating image (snapshot of base box volume). 00:01:36.619 ==> default: Creating domain with the following settings... 00:01:36.619 ==> default: -- Name: ubuntu2204-22.04-1711172311-2200_default_1720768330_82ce8c7ebda358444f82 00:01:36.619 ==> default: -- Domain type: kvm 00:01:36.619 ==> default: -- Cpus: 10 00:01:36.619 ==> default: -- Feature: acpi 00:01:36.619 ==> default: -- Feature: apic 00:01:36.619 ==> default: -- Feature: pae 00:01:36.619 ==> default: -- Memory: 12288M 00:01:36.619 ==> default: -- Memory Backing: hugepages: 00:01:36.619 ==> default: -- Management MAC: 00:01:36.619 ==> default: -- Loader: 00:01:36.619 ==> default: -- Nvram: 00:01:36.619 ==> default: -- Base box: spdk/ubuntu2204 00:01:36.619 ==> default: -- Storage pool: default 00:01:36.619 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1720768330_82ce8c7ebda358444f82.img (20G) 00:01:36.619 ==> default: -- Volume Cache: default 00:01:36.619 ==> default: -- Kernel: 00:01:36.619 ==> default: -- Initrd: 00:01:36.619 ==> default: -- Graphics Type: vnc 00:01:36.619 ==> default: -- Graphics Port: -1 00:01:36.619 ==> default: -- Graphics IP: 127.0.0.1 00:01:36.619 ==> default: -- Graphics Password: Not defined 00:01:36.619 ==> default: -- Video Type: cirrus 00:01:36.619 ==> default: -- Video VRAM: 9216 00:01:36.619 ==> default: -- Sound Type: 00:01:36.619 ==> default: -- Keymap: en-us 00:01:36.619 ==> default: -- TPM Path: 00:01:36.619 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:36.619 ==> default: -- Command line args: 00:01:36.619 ==> default: -> value=-device, 00:01:36.619 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:36.619 ==> default: -> value=-drive, 00:01:36.619 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:01:36.619 ==> default: -> value=-device, 00:01:36.619 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.878 ==> default: Creating shared folders metadata... 00:01:36.878 ==> default: Starting domain. 00:01:38.783 ==> default: Waiting for domain to get an IP address... 00:01:56.865 ==> default: Waiting for SSH to become available... 00:01:56.865 ==> default: Configuring and enabling network interfaces... 00:02:02.148 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:07.420 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:11.608 ==> default: Mounting SSHFS shared folder... 00:02:12.545 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output 00:02:12.545 ==> default: Checking Mount.. 00:02:13.480 ==> default: Folder Successfully Mounted! 
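The "Formatting ... fmt=raw size=... preallocation=falloc" lines in the prepare_nvme.sh stage above are qemu-img output. Assuming create_nvme_img.sh wraps qemu-img (the script body is not shown in this log), an equivalent direct invocation for the 5G boot disk would be:

    # Create a raw, fallocate-preallocated 5 GiB backing file
    # (matches "fmt=raw size=5368709120 preallocation=falloc" above).
    qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex0-nvme.img 5G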
00:02:13.480 ==> default: Running provisioner: file... 00:02:13.738 default: ~/.gitconfig => .gitconfig 00:02:13.997 00:02:13.997 SUCCESS! 00:02:13.997 00:02:13.997 cd to /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt and type "vagrant ssh" to use. 00:02:13.997 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:13.997 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt" to destroy all trace of vm. 00:02:13.997 00:02:14.005 [Pipeline] } 00:02:14.024 [Pipeline] // stage 00:02:14.033 [Pipeline] dir 00:02:14.034 Running in /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt 00:02:14.035 [Pipeline] { 00:02:14.046 [Pipeline] catchError 00:02:14.047 [Pipeline] { 00:02:14.057 [Pipeline] sh 00:02:14.337 + vagrant ssh-config --host vagrant 00:02:14.337 + sed -ne /^Host/,$p 00:02:14.337 + tee ssh_conf 00:02:17.623 Host vagrant 00:02:17.623 HostName 192.168.121.243 00:02:17.623 User vagrant 00:02:17.623 Port 22 00:02:17.623 UserKnownHostsFile /dev/null 00:02:17.623 StrictHostKeyChecking no 00:02:17.623 PasswordAuthentication no 00:02:17.623 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204 00:02:17.623 IdentitiesOnly yes 00:02:17.623 LogLevel FATAL 00:02:17.623 ForwardAgent yes 00:02:17.623 ForwardX11 yes 00:02:17.623 00:02:17.635 [Pipeline] withEnv 00:02:17.637 [Pipeline] { 00:02:17.652 [Pipeline] sh 00:02:17.931 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:17.931 source /etc/os-release 00:02:17.931 [[ -e /image.version ]] && img=$(< /image.version) 00:02:17.931 # Minimal, systemd-like check. 00:02:17.931 if [[ -e /.dockerenv ]]; then 00:02:17.931 # Clear garbage from the node's name: 00:02:17.931 # agt-er_autotest_547-896 -> autotest_547-896 00:02:17.931 # $HOSTNAME is the actual container id 00:02:17.931 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:17.931 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:17.931 # We can assume this is a mount from a host where container is running, 00:02:17.931 # so fetch its hostname to easily identify the target swarm worker. 
00:02:17.931 container="$(< /etc/hostname) ($agent)" 00:02:17.931 else 00:02:17.931 # Fallback 00:02:17.931 container=$agent 00:02:17.931 fi 00:02:17.931 fi 00:02:17.931 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:17.931 00:02:18.201 [Pipeline] } 00:02:18.220 [Pipeline] // withEnv 00:02:18.228 [Pipeline] setCustomBuildProperty 00:02:18.243 [Pipeline] stage 00:02:18.245 [Pipeline] { (Tests) 00:02:18.263 [Pipeline] sh 00:02:18.542 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:18.816 [Pipeline] sh 00:02:19.095 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:19.370 [Pipeline] timeout 00:02:19.370 Timeout set to expire in 1 hr 30 min 00:02:19.372 [Pipeline] { 00:02:19.388 [Pipeline] sh 00:02:19.668 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:20.236 HEAD is now at 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:02:20.249 [Pipeline] sh 00:02:20.527 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:20.799 [Pipeline] sh 00:02:21.078 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:21.355 [Pipeline] sh 00:02:21.680 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu22-vg-autotest ./autoruner.sh spdk_repo 00:02:21.952 ++ readlink -f spdk_repo 00:02:21.952 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:21.952 + [[ -n /home/vagrant/spdk_repo ]] 00:02:21.952 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:21.952 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:21.952 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:21.952 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:21.952 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:21.952 + [[ ubuntu22-vg-autotest == pkgdep-* ]] 00:02:21.952 + cd /home/vagrant/spdk_repo 00:02:21.952 + source /etc/os-release 00:02:21.952 ++ PRETTY_NAME='Ubuntu 22.04.4 LTS' 00:02:21.952 ++ NAME=Ubuntu 00:02:21.952 ++ VERSION_ID=22.04 00:02:21.952 ++ VERSION='22.04.4 LTS (Jammy Jellyfish)' 00:02:21.952 ++ VERSION_CODENAME=jammy 00:02:21.952 ++ ID=ubuntu 00:02:21.952 ++ ID_LIKE=debian 00:02:21.952 ++ HOME_URL=https://www.ubuntu.com/ 00:02:21.952 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:02:21.952 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:02:21.952 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:02:21.952 ++ UBUNTU_CODENAME=jammy 00:02:21.952 + uname -a 00:02:21.952 Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:02:21.952 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:22.518 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:02:22.518 Hugepages 00:02:22.518 node hugesize free / total 00:02:22.518 node0 1048576kB 0 / 0 00:02:22.518 node0 2048kB 0 / 0 00:02:22.518 00:02:22.518 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:22.518 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:22.518 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:22.518 + rm -f /tmp/spdk-ld-path 00:02:22.518 + source autorun-spdk.conf 00:02:22.518 ++ SPDK_TEST_UNITTEST=1 00:02:22.518 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:22.518 ++ SPDK_TEST_NVME=1 00:02:22.518 ++ SPDK_TEST_BLOCKDEV=1 00:02:22.518 ++ SPDK_RUN_ASAN=1 00:02:22.518 ++ SPDK_RUN_UBSAN=1 00:02:22.518 ++ SPDK_TEST_RAID5=1 00:02:22.518 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:22.518 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:22.518 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:22.518 ++ RUN_NIGHTLY=1 00:02:22.518 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:22.518 + [[ -n '' ]] 00:02:22.518 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:22.518 + for M in /var/spdk/build-*-manifest.txt 00:02:22.518 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:22.518 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:22.518 + for M in /var/spdk/build-*-manifest.txt 00:02:22.518 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:22.518 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:22.518 ++ uname 00:02:22.518 + [[ Linux == \L\i\n\u\x ]] 00:02:22.518 + sudo dmesg -T 00:02:22.518 + sudo dmesg --clear 00:02:22.518 + sudo dmesg -Tw 00:02:22.518 + dmesg_pid=2839 00:02:22.518 + [[ Ubuntu == FreeBSD ]] 00:02:22.518 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:22.518 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:22.518 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:22.518 + [[ -x /usr/src/fio-static/fio ]] 00:02:22.518 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:22.518 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:22.518 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:22.518 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:02:22.518 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:22.518 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:22.518 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:22.518 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:22.518 Test configuration: 00:02:22.518 SPDK_TEST_UNITTEST=1 00:02:22.518 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:22.518 SPDK_TEST_NVME=1 00:02:22.518 SPDK_TEST_BLOCKDEV=1 00:02:22.518 SPDK_RUN_ASAN=1 00:02:22.518 SPDK_RUN_UBSAN=1 00:02:22.518 SPDK_TEST_RAID5=1 00:02:22.518 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:22.518 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:22.518 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:22.776 RUN_NIGHTLY=1 07:12:56 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:22.776 07:12:56 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:22.776 07:12:56 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:22.776 07:12:56 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:22.776 07:12:56 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:22.776 07:12:56 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:22.776 07:12:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:22.776 07:12:56 -- paths/export.sh@5 -- $ export PATH 00:02:22.776 07:12:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:22.776 07:12:56 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:22.776 07:12:56 -- common/autobuild_common.sh@437 -- $ date +%s 00:02:22.776 07:12:56 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720768376.XXXXXX 00:02:22.776 07:12:56 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720768376.tBghVy 00:02:22.776 07:12:56 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:02:22.776 07:12:56 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:02:22.776 07:12:56 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:22.776 07:12:56 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:22.776 
07:12:56 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:22.776 07:12:56 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:22.776 07:12:56 -- common/autobuild_common.sh@453 -- $ get_config_params 00:02:22.776 07:12:56 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:02:22.776 07:12:56 -- common/autotest_common.sh@10 -- $ set +x 00:02:22.776 07:12:56 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:22.776 07:12:56 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:02:22.776 07:12:56 -- pm/common@17 -- $ local monitor 00:02:22.776 07:12:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.776 07:12:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.776 07:12:56 -- pm/common@25 -- $ sleep 1 00:02:22.776 07:12:56 -- pm/common@21 -- $ date +%s 00:02:22.776 07:12:56 -- pm/common@21 -- $ date +%s 00:02:22.776 07:12:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720768376 00:02:22.776 07:12:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720768376 00:02:22.776 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720768376_collect-vmstat.pm.log 00:02:22.776 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720768376_collect-cpu-load.pm.log 00:02:23.711 07:12:57 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:02:23.711 07:12:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:23.711 07:12:57 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:23.711 07:12:57 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:23.711 07:12:57 -- spdk/autobuild.sh@16 -- $ date -u 00:02:23.711 Fri Jul 12 07:12:57 UTC 2024 00:02:23.711 07:12:57 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:23.711 v24.05-13-g5fa2f5086 00:02:23.711 07:12:57 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:23.711 07:12:57 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:23.711 07:12:57 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:23.711 07:12:57 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:23.711 07:12:57 -- common/autotest_common.sh@10 -- $ set +x 00:02:23.711 ************************************ 00:02:23.711 START TEST asan 00:02:23.711 ************************************ 00:02:23.711 using asan 00:02:23.711 07:12:57 asan -- common/autotest_common.sh@1121 -- $ echo 'using asan' 00:02:23.711 00:02:23.711 real 0m0.000s 00:02:23.711 user 0m0.000s 00:02:23.711 sys 0m0.000s 00:02:23.711 07:12:57 asan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:23.711 07:12:57 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:23.711 ************************************ 00:02:23.711 END TEST asan 00:02:23.711 ************************************ 00:02:23.970 07:12:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 
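The resource monitors above follow a common background-sampler pattern: start the collectors tagged with one shared epoch timestamp so their logs correlate, then rely on an EXIT trap to stop them when the build finishes. A generic sketch of that pattern (the sampler commands here are stand-ins, not the SPDK collect-* scripts):

    pids=()
    ts=$(date +%s)                              # one shared suffix, as in monitor.autobuild.sh.1720768376
    vmstat 1 > "vmstat.$ts.log" & pids+=($!)
    mpstat 1 > "cpu-load.$ts.log" & pids+=($!)
    trap 'kill "${pids[@]}" 2>/dev/null' EXIT   # same idea as the stop_monitor_resources EXIT trap above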
00:02:23.970 07:12:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:23.970 07:12:57 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:23.970 07:12:57 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:23.970 07:12:57 -- common/autotest_common.sh@10 -- $ set +x 00:02:23.970 ************************************ 00:02:23.970 START TEST ubsan 00:02:23.970 ************************************ 00:02:23.970 using ubsan 00:02:23.970 07:12:57 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:02:23.970 00:02:23.970 real 0m0.000s 00:02:23.970 user 0m0.000s 00:02:23.970 sys 0m0.000s 00:02:23.970 07:12:57 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:23.970 ************************************ 00:02:23.970 END TEST ubsan 00:02:23.970 07:12:57 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:23.970 ************************************ 00:02:23.970 07:12:57 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:23.970 07:12:57 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:23.970 07:12:57 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:23.970 07:12:57 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:02:23.970 07:12:57 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:23.970 07:12:57 -- common/autotest_common.sh@10 -- $ set +x 00:02:23.970 ************************************ 00:02:23.970 START TEST build_native_dpdk 00:02:23.970 ************************************ 00:02:23.970 07:12:57 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=11 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=11 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:23.970 caf0f5d395 version: 22.11.4 00:02:23.970 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:23.970 dc9c799c7d vhost: fix missing spinlock unlock 00:02:23.970 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:23.970 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 11 -ge 5 ]] 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 11 -ge 10 ]] 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:23.970 07:12:57 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:23.971 07:12:57 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:23.971 07:12:57 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:23.971 07:12:57 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:23.971 07:12:57 
build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:23.971 07:12:57 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:23.971 07:12:57 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:23.971 patching file config/rte_config.h 00:02:23.971 Hunk #1 succeeded at 60 (offset 1 line). 00:02:23.971 07:12:57 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:23.971 07:12:57 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:02:23.971 07:12:57 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:23.971 07:12:57 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:23.971 07:12:57 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:29.295 The Meson build system 00:02:29.295 Version: 1.4.0 00:02:29.295 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:29.295 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:29.295 Build type: native build 00:02:29.295 Program cat found: YES (/usr/bin/cat) 00:02:29.295 Project name: DPDK 00:02:29.295 Project version: 22.11.4 00:02:29.295 C compiler for the host machine: gcc (gcc 11.4.0 "gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:02:29.295 C linker for the host machine: gcc ld.bfd 2.38 00:02:29.295 Host machine cpu family: x86_64 00:02:29.295 Host machine cpu: x86_64 00:02:29.295 Message: ## Building in Developer Mode ## 00:02:29.295 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:29.295 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:29.295 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:29.295 Program objdump found: YES (/usr/bin/objdump) 00:02:29.295 Program python3 found: YES (/usr/bin/python3) 00:02:29.296 Program cat found: YES (/usr/bin/cat) 00:02:29.296 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
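The warning just above notes that the -Dmachine=native flag passed on the configure line is deprecated; on DPDK 22.11's build system the non-deprecated spelling of the same request would be:

    -Dcpu_instruction_set=native    # replaces the deprecated -Dmachine=native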
00:02:29.296 Checking for size of "void *" : 8 00:02:29.296 Checking for size of "void *" : 8 (cached) 00:02:29.296 Library m found: YES 00:02:29.296 Library numa found: YES 00:02:29.296 Has header "numaif.h" : YES 00:02:29.296 Library fdt found: NO 00:02:29.296 Library execinfo found: NO 00:02:29.296 Has header "execinfo.h" : YES 00:02:29.296 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:02:29.296 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:29.296 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:29.296 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:29.296 Run-time dependency openssl found: YES 3.0.2 00:02:29.296 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:29.296 Library pcap found: NO 00:02:29.296 Compiler for C supports arguments -Wcast-qual: YES 00:02:29.296 Compiler for C supports arguments -Wdeprecated: YES 00:02:29.296 Compiler for C supports arguments -Wformat: YES 00:02:29.296 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:29.296 Compiler for C supports arguments -Wformat-security: YES 00:02:29.296 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:29.296 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:29.296 Compiler for C supports arguments -Wnested-externs: YES 00:02:29.296 Compiler for C supports arguments -Wold-style-definition: YES 00:02:29.296 Compiler for C supports arguments -Wpointer-arith: YES 00:02:29.296 Compiler for C supports arguments -Wsign-compare: YES 00:02:29.296 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:29.296 Compiler for C supports arguments -Wundef: YES 00:02:29.296 Compiler for C supports arguments -Wwrite-strings: YES 00:02:29.296 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:29.296 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:29.296 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:29.296 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:29.296 Compiler for C supports arguments -mavx512f: YES 00:02:29.296 Checking if "AVX512 checking" compiles: YES 00:02:29.296 Fetching value of define "__SSE4_2__" : 1 00:02:29.296 Fetching value of define "__AES__" : 1 00:02:29.297 Fetching value of define "__AVX__" : 1 00:02:29.297 Fetching value of define "__AVX2__" : 1 00:02:29.297 Fetching value of define "__AVX512BW__" : 1 00:02:29.297 Fetching value of define "__AVX512CD__" : 1 00:02:29.297 Fetching value of define "__AVX512DQ__" : 1 00:02:29.297 Fetching value of define "__AVX512F__" : 1 00:02:29.297 Fetching value of define "__AVX512VL__" : 1 00:02:29.297 Fetching value of define "__PCLMUL__" : 1 00:02:29.297 Fetching value of define "__RDRND__" : 1 00:02:29.297 Fetching value of define "__RDSEED__" : 1 00:02:29.297 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:29.297 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:29.297 Message: lib/kvargs: Defining dependency "kvargs" 00:02:29.297 Message: lib/telemetry: Defining dependency "telemetry" 00:02:29.297 Checking for function "getentropy" : YES 00:02:29.297 Message: lib/eal: Defining dependency "eal" 00:02:29.297 Message: lib/ring: Defining dependency "ring" 00:02:29.297 Message: lib/rcu: Defining dependency "rcu" 00:02:29.297 Message: lib/mempool: Defining dependency "mempool" 00:02:29.297 Message: lib/mbuf: Defining dependency "mbuf" 00:02:29.297 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:29.297 Fetching value of define 
"__AVX512F__" : 1 (cached) 00:02:29.297 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:29.297 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:29.297 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:29.298 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:29.298 Compiler for C supports arguments -mpclmul: YES 00:02:29.298 Compiler for C supports arguments -maes: YES 00:02:29.298 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:29.298 Compiler for C supports arguments -mavx512bw: YES 00:02:29.298 Compiler for C supports arguments -mavx512dq: YES 00:02:29.298 Compiler for C supports arguments -mavx512vl: YES 00:02:29.298 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:29.298 Compiler for C supports arguments -mavx2: YES 00:02:29.298 Compiler for C supports arguments -mavx: YES 00:02:29.298 Message: lib/net: Defining dependency "net" 00:02:29.298 Message: lib/meter: Defining dependency "meter" 00:02:29.298 Message: lib/ethdev: Defining dependency "ethdev" 00:02:29.298 Message: lib/pci: Defining dependency "pci" 00:02:29.298 Message: lib/cmdline: Defining dependency "cmdline" 00:02:29.298 Message: lib/metrics: Defining dependency "metrics" 00:02:29.298 Message: lib/hash: Defining dependency "hash" 00:02:29.298 Message: lib/timer: Defining dependency "timer" 00:02:29.298 Fetching value of define "__AVX2__" : 1 (cached) 00:02:29.298 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:29.298 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:29.298 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:29.298 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:29.298 Message: lib/acl: Defining dependency "acl" 00:02:29.298 Message: lib/bbdev: Defining dependency "bbdev" 00:02:29.298 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:29.298 Run-time dependency libelf found: YES 0.186 00:02:29.298 lib/bpf/meson.build:43: WARNING: libpcap is missing, rte_bpf_convert API will be disabled 00:02:29.298 Message: lib/bpf: Defining dependency "bpf" 00:02:29.298 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:29.298 Message: lib/compressdev: Defining dependency "compressdev" 00:02:29.298 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:29.298 Message: lib/distributor: Defining dependency "distributor" 00:02:29.298 Message: lib/efd: Defining dependency "efd" 00:02:29.298 Message: lib/eventdev: Defining dependency "eventdev" 00:02:29.298 Message: lib/gpudev: Defining dependency "gpudev" 00:02:29.298 Message: lib/gro: Defining dependency "gro" 00:02:29.298 Message: lib/gso: Defining dependency "gso" 00:02:29.298 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:29.298 Message: lib/jobstats: Defining dependency "jobstats" 00:02:29.298 Message: lib/latencystats: Defining dependency "latencystats" 00:02:29.298 Message: lib/lpm: Defining dependency "lpm" 00:02:29.298 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:29.298 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:29.298 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:29.298 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:29.298 Message: lib/member: Defining dependency "member" 00:02:29.298 Message: lib/pcapng: Defining dependency "pcapng" 00:02:29.298 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:29.298 Message: lib/power: Defining dependency "power" 00:02:29.298 Message: lib/rawdev: Defining dependency "rawdev" 00:02:29.298 
Message: lib/regexdev: Defining dependency "regexdev" 00:02:29.298 Message: lib/dmadev: Defining dependency "dmadev" 00:02:29.298 Message: lib/rib: Defining dependency "rib" 00:02:29.298 Message: lib/reorder: Defining dependency "reorder" 00:02:29.299 Message: lib/sched: Defining dependency "sched" 00:02:29.299 Message: lib/security: Defining dependency "security" 00:02:29.299 Message: lib/stack: Defining dependency "stack" 00:02:29.299 Has header "linux/userfaultfd.h" : YES 00:02:29.299 Message: lib/vhost: Defining dependency "vhost" 00:02:29.299 Message: lib/ipsec: Defining dependency "ipsec" 00:02:29.299 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:29.299 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:29.299 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:29.299 Message: lib/fib: Defining dependency "fib" 00:02:29.299 Message: lib/port: Defining dependency "port" 00:02:29.299 Message: lib/pdump: Defining dependency "pdump" 00:02:29.299 Message: lib/table: Defining dependency "table" 00:02:29.299 Message: lib/pipeline: Defining dependency "pipeline" 00:02:29.299 Message: lib/graph: Defining dependency "graph" 00:02:29.299 Message: lib/node: Defining dependency "node" 00:02:29.299 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:29.299 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:29.299 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:29.299 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:29.299 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:29.299 Compiler for C supports arguments -Wno-unused-value: YES 00:02:29.299 Compiler for C supports arguments -Wno-format: YES 00:02:29.299 Compiler for C supports arguments -Wno-format-security: YES 00:02:29.299 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:30.237 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:30.237 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:30.237 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:30.237 Fetching value of define "__AVX2__" : 1 (cached) 00:02:30.237 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:30.237 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:30.238 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:30.238 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:30.238 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:30.238 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:30.238 Program doxygen found: YES (/usr/bin/doxygen) 00:02:30.238 Configuring doxy-api.conf using configuration 00:02:30.238 Program sphinx-build found: NO 00:02:30.238 Configuring rte_build_config.h using configuration 00:02:30.238 Message: 00:02:30.238 ================= 00:02:30.238 Applications Enabled 00:02:30.238 ================= 00:02:30.238 00:02:30.238 apps: 00:02:30.238 pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, test-eventdev, 00:02:30.238 test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, test-security-perf, 00:02:30.238 00:02:30.238 00:02:30.238 Message: 00:02:30.238 ================= 00:02:30.238 Libraries Enabled 00:02:30.238 ================= 00:02:30.238 00:02:30.238 libs: 00:02:30.238 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:30.238 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:30.238 bbdev, bitratestats, bpf, 
cfgfile, compressdev, cryptodev, distributor, efd,
00:02:30.238 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:02:30.238 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:02:30.238 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:02:30.238 table, pipeline, graph, node,
00:02:30.238 
00:02:30.238 Message:
00:02:30.238 ===============
00:02:30.238 Drivers Enabled
00:02:30.238 ===============
00:02:30.238 
00:02:30.238 common:
00:02:30.238 
00:02:30.238 bus:
00:02:30.238 pci, vdev,
00:02:30.238 mempool:
00:02:30.238 ring,
00:02:30.238 dma:
00:02:30.238 
00:02:30.238 net:
00:02:30.238 i40e,
00:02:30.238 raw:
00:02:30.238 
00:02:30.238 crypto:
00:02:30.238 
00:02:30.238 compress:
00:02:30.238 
00:02:30.238 regex:
00:02:30.238 
00:02:30.238 vdpa:
00:02:30.238 
00:02:30.238 event:
00:02:30.238 
00:02:30.238 baseband:
00:02:30.238 
00:02:30.238 gpu:
00:02:30.238 
00:02:30.238 
00:02:30.238 Message:
00:02:30.238 =================
00:02:30.238 Content Skipped
00:02:30.238 =================
00:02:30.238 
00:02:30.238 apps:
00:02:30.238 dumpcap: missing dependency, "libpcap"
00:02:30.238 
00:02:30.238 libs:
00:02:30.238 kni: explicitly disabled via build config (deprecated lib)
00:02:30.238 flow_classify: explicitly disabled via build config (deprecated lib)
00:02:30.238 
00:02:30.238 drivers:
00:02:30.238 common/cpt: not in enabled drivers build config
00:02:30.238 common/dpaax: not in enabled drivers build config
00:02:30.238 common/iavf: not in enabled drivers build config
00:02:30.238 common/idpf: not in enabled drivers build config
00:02:30.238 common/mvep: not in enabled drivers build config
00:02:30.238 common/octeontx: not in enabled drivers build config
00:02:30.238 bus/auxiliary: not in enabled drivers build config
00:02:30.238 bus/dpaa: not in enabled drivers build config
00:02:30.238 bus/fslmc: not in enabled drivers build config
00:02:30.238 bus/ifpga: not in enabled drivers build config
00:02:30.238 bus/vmbus: not in enabled drivers build config
00:02:30.238 common/cnxk: not in enabled drivers build config
00:02:30.238 common/mlx5: not in enabled drivers build config
00:02:30.238 common/qat: not in enabled drivers build config
00:02:30.238 common/sfc_efx: not in enabled drivers build config
00:02:30.238 mempool/bucket: not in enabled drivers build config
00:02:30.238 mempool/cnxk: not in enabled drivers build config
00:02:30.238 mempool/dpaa: not in enabled drivers build config
00:02:30.238 mempool/dpaa2: not in enabled drivers build config
00:02:30.238 mempool/octeontx: not in enabled drivers build config
00:02:30.238 mempool/stack: not in enabled drivers build config
00:02:30.238 dma/cnxk: not in enabled drivers build config
00:02:30.238 dma/dpaa: not in enabled drivers build config
00:02:30.238 dma/dpaa2: not in enabled drivers build config
00:02:30.238 dma/hisilicon: not in enabled drivers build config
00:02:30.238 dma/idxd: not in enabled drivers build config
00:02:30.238 dma/ioat: not in enabled drivers build config
00:02:30.238 dma/skeleton: not in enabled drivers build config
00:02:30.238 net/af_packet: not in enabled drivers build config
00:02:30.238 net/af_xdp: not in enabled drivers build config
00:02:30.238 net/ark: not in enabled drivers build config
00:02:30.238 net/atlantic: not in enabled drivers build config
00:02:30.238 net/avp: not in enabled drivers build config
00:02:30.238 net/axgbe: not in enabled drivers build config
00:02:30.238 net/bnx2x: not in enabled drivers build config
00:02:30.238 net/bnxt: not in enabled drivers build config
00:02:30.238 net/bonding: not in enabled drivers build config
00:02:30.238 net/cnxk: not in enabled drivers build config
00:02:30.238 net/cxgbe: not in enabled drivers build config
00:02:30.238 net/dpaa: not in enabled drivers build config
00:02:30.238 net/dpaa2: not in enabled drivers build config
00:02:30.238 net/e1000: not in enabled drivers build config
00:02:30.238 net/ena: not in enabled drivers build config
00:02:30.238 net/enetc: not in enabled drivers build config
00:02:30.238 net/enetfec: not in enabled drivers build config
00:02:30.238 net/enic: not in enabled drivers build config
00:02:30.238 net/failsafe: not in enabled drivers build config
00:02:30.238 net/fm10k: not in enabled drivers build config
00:02:30.238 net/gve: not in enabled drivers build config
00:02:30.238 net/hinic: not in enabled drivers build config
00:02:30.238 net/hns3: not in enabled drivers build config
00:02:30.238 net/iavf: not in enabled drivers build config
00:02:30.238 net/ice: not in enabled drivers build config
00:02:30.238 net/idpf: not in enabled drivers build config
00:02:30.238 net/igc: not in enabled drivers build config
00:02:30.238 net/ionic: not in enabled drivers build config
00:02:30.238 net/ipn3ke: not in enabled drivers build config
00:02:30.238 net/ixgbe: not in enabled drivers build config
00:02:30.238 net/kni: not in enabled drivers build config
00:02:30.238 net/liquidio: not in enabled drivers build config
00:02:30.238 net/mana: not in enabled drivers build config
00:02:30.238 net/memif: not in enabled drivers build config
00:02:30.238 net/mlx4: not in enabled drivers build config
00:02:30.238 net/mlx5: not in enabled drivers build config
00:02:30.238 net/mvneta: not in enabled drivers build config
00:02:30.238 net/mvpp2: not in enabled drivers build config
00:02:30.238 net/netvsc: not in enabled drivers build config
00:02:30.238 net/nfb: not in enabled drivers build config
00:02:30.238 net/nfp: not in enabled drivers build config
00:02:30.238 net/ngbe: not in enabled drivers build config
00:02:30.238 net/null: not in enabled drivers build config
00:02:30.238 net/octeontx: not in enabled drivers build config
00:02:30.238 net/octeon_ep: not in enabled drivers build config
00:02:30.238 net/pcap: not in enabled drivers build config
00:02:30.238 net/pfe: not in enabled drivers build config
00:02:30.238 net/qede: not in enabled drivers build config
00:02:30.238 net/ring: not in enabled drivers build config
00:02:30.238 net/sfc: not in enabled drivers build config
00:02:30.238 net/softnic: not in enabled drivers build config
00:02:30.238 net/tap: not in enabled drivers build config
00:02:30.238 net/thunderx: not in enabled drivers build config
00:02:30.238 net/txgbe: not in enabled drivers build config
00:02:30.238 net/vdev_netvsc: not in enabled drivers build config
00:02:30.238 net/vhost: not in enabled drivers build config
00:02:30.238 net/virtio: not in enabled drivers build config
00:02:30.238 net/vmxnet3: not in enabled drivers build config
00:02:30.238 raw/cnxk_bphy: not in enabled drivers build config
00:02:30.238 raw/cnxk_gpio: not in enabled drivers build config
00:02:30.238 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:30.238 raw/ifpga: not in enabled drivers build config
00:02:30.238 raw/ntb: not in enabled drivers build config
00:02:30.238 raw/skeleton: not in enabled drivers build config
00:02:30.238 crypto/armv8: not in enabled drivers build config
00:02:30.238 crypto/bcmfs: not in enabled drivers build config
00:02:30.238 crypto/caam_jr: not in enabled drivers build config
00:02:30.238 crypto/ccp: not in enabled drivers build config
00:02:30.238 crypto/cnxk: not in enabled drivers build config
00:02:30.238 crypto/dpaa_sec: not in enabled drivers build config
00:02:30.238 crypto/dpaa2_sec: not in enabled drivers build config
00:02:30.238 crypto/ipsec_mb: not in enabled drivers build config
00:02:30.238 crypto/mlx5: not in enabled drivers build config
00:02:30.238 crypto/mvsam: not in enabled drivers build config
00:02:30.238 crypto/nitrox: not in enabled drivers build config
00:02:30.238 crypto/null: not in enabled drivers build config
00:02:30.238 crypto/octeontx: not in enabled drivers build config
00:02:30.238 crypto/openssl: not in enabled drivers build config
00:02:30.238 crypto/scheduler: not in enabled drivers build config
00:02:30.238 crypto/uadk: not in enabled drivers build config
00:02:30.238 crypto/virtio: not in enabled drivers build config
00:02:30.238 compress/isal: not in enabled drivers build config
00:02:30.238 compress/mlx5: not in enabled drivers build config
00:02:30.238 compress/octeontx: not in enabled drivers build config
00:02:30.238 compress/zlib: not in enabled drivers build config
00:02:30.238 regex/mlx5: not in enabled drivers build config
00:02:30.238 regex/cn9k: not in enabled drivers build config
00:02:30.238 vdpa/ifc: not in enabled drivers build config
00:02:30.238 vdpa/mlx5: not in enabled drivers build config
00:02:30.238 vdpa/sfc: not in enabled drivers build config
00:02:30.238 event/cnxk: not in enabled drivers build config
00:02:30.238 event/dlb2: not in enabled drivers build config
00:02:30.238 event/dpaa: not in enabled drivers build config
00:02:30.238 event/dpaa2: not in enabled drivers build config
00:02:30.238 event/dsw: not in enabled drivers build config
00:02:30.238 event/opdl: not in enabled drivers build config
00:02:30.238 event/skeleton: not in enabled drivers build config
00:02:30.238 event/sw: not in enabled drivers build config
00:02:30.238 event/octeontx: not in enabled drivers build config
00:02:30.238 baseband/acc: not in enabled drivers build config
00:02:30.238 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:30.238 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:30.238 baseband/la12xx: not in enabled drivers build config
00:02:30.238 baseband/null: not in enabled drivers build config
00:02:30.238 baseband/turbo_sw: not in enabled drivers build config
00:02:30.238 gpu/cuda: not in enabled drivers build config
00:02:30.238 
00:02:30.238 
00:02:30.239 Build targets in project: 310
00:02:30.239 
00:02:30.239 DPDK 22.11.4
00:02:30.239 
00:02:30.239 User defined options
00:02:30.239 libdir : lib
00:02:30.239 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:30.239 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:30.239 c_link_args :
00:02:30.239 enable_docs : false
00:02:30.239 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:30.239 enable_kmods : false
00:02:30.239 machine : native
00:02:30.239 tests : false
00:02:30.239 
00:02:30.239 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:30.239 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
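The WARNING above is emitted because the configure step ran `meson [options]` rather than the now-preferred `meson setup [options]`; newer Meson releases make the subcommand mandatory. For reference, the "User defined options" summary corresponds roughly to the following explicit invocation, run from the DPDK source tree. This is a sketch reconstructed from the summary, not the autobuild wrapper's literal command line, which is not captured in this log:

    # Sketch only: option values taken from the "User defined options" summary above.
    meson setup /home/vagrant/spdk_repo/dpdk/build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base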
00:02:30.239 07:13:03 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:30.239 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:30.239 [1/737] Generating lib/rte_kvargs_def with a custom command 00:02:30.239 [2/737] Generating lib/rte_telemetry_mingw with a custom command 00:02:30.239 [3/737] Generating lib/rte_kvargs_mingw with a custom command 00:02:30.239 [4/737] Generating lib/rte_telemetry_def with a custom command 00:02:30.498 [5/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:30.498 [6/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:30.498 [7/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:30.498 [8/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:30.498 [9/737] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:30.498 [10/737] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:30.498 [11/737] Linking static target lib/librte_kvargs.a 00:02:30.498 [12/737] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:30.498 [13/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:30.498 [14/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:30.498 [15/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:30.498 [16/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:30.498 [17/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:30.757 [18/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:30.757 [19/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:30.757 [20/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:30.757 [21/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:30.757 [22/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:30.757 [23/737] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:30.757 [24/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:30.757 [25/737] Linking static target lib/librte_telemetry.a 00:02:30.757 [26/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:30.757 [27/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:31.049 [28/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:31.049 [29/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:31.049 [30/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:31.049 [31/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:31.049 [32/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:31.049 [33/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:31.049 [34/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:31.049 [35/737] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.049 [36/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:31.049 [37/737] Linking target lib/librte_kvargs.so.23.0 00:02:31.049 [38/737] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:31.334 [39/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:31.334 [40/737] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:31.334 [41/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:31.334 [42/737] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:31.334 [43/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:31.334 [44/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:31.334 [45/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:31.334 [46/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:31.334 [47/737] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:31.335 [48/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:31.593 [49/737] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:31.593 [50/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:31.593 [51/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:31.593 [52/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:31.593 [53/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:31.593 [54/737] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:31.593 [55/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:31.593 [56/737] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:31.593 [57/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:31.593 [58/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:31.593 [59/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:31.593 [60/737] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:31.593 [61/737] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:31.593 [62/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:31.593 [63/737] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:31.593 [64/737] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.593 [65/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:31.593 [66/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:31.851 [67/737] Linking target lib/librte_telemetry.so.23.0 00:02:31.851 [68/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:31.851 [69/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:31.851 [70/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:31.851 [71/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:31.851 [72/737] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:31.851 [73/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:31.851 [74/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:31.851 [75/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:31.852 [76/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:31.852 [77/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:31.852 [78/737] Generating 
lib/rte_eal_def with a custom command 00:02:31.852 [79/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:31.852 [80/737] Generating lib/rte_eal_mingw with a custom command 00:02:31.852 [81/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:31.852 [82/737] Generating lib/rte_ring_mingw with a custom command 00:02:31.852 [83/737] Generating lib/rte_ring_def with a custom command 00:02:31.852 [84/737] Generating lib/rte_rcu_def with a custom command 00:02:31.852 [85/737] Generating lib/rte_rcu_mingw with a custom command 00:02:31.852 [86/737] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:32.110 [87/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:32.110 [88/737] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:32.110 [89/737] Generating lib/rte_mempool_def with a custom command 00:02:32.110 [90/737] Linking static target lib/librte_ring.a 00:02:32.110 [91/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:32.110 [92/737] Generating lib/rte_mempool_mingw with a custom command 00:02:32.110 [93/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:32.369 [94/737] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:32.369 [95/737] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:32.369 [96/737] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:32.369 [97/737] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:32.369 [98/737] Generating lib/rte_mbuf_def with a custom command 00:02:32.369 [99/737] Generating lib/rte_mbuf_mingw with a custom command 00:02:32.369 [100/737] Linking static target lib/librte_eal.a 00:02:32.369 [101/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:32.369 [102/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:32.628 [103/737] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:32.628 [104/737] Linking static target lib/librte_rcu.a 00:02:32.628 [105/737] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.628 [106/737] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:32.628 [107/737] Linking static target lib/librte_mempool.a 00:02:32.628 [108/737] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:32.628 [109/737] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:32.628 [110/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:32.628 [111/737] Generating lib/rte_net_def with a custom command 00:02:32.628 [112/737] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:32.628 [113/737] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:32.628 [114/737] Generating lib/rte_net_mingw with a custom command 00:02:32.628 [115/737] Generating lib/rte_meter_def with a custom command 00:02:32.888 [116/737] Generating lib/rte_meter_mingw with a custom command 00:02:32.888 [117/737] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:32.888 [118/737] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.888 [119/737] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:32.888 [120/737] Linking static target lib/librte_meter.a 00:02:32.888 [121/737] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:32.888 [122/737] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:32.888 [123/737] Linking static target lib/librte_net.a 00:02:33.147 [124/737] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:33.147 [125/737] Linking static target lib/librte_mbuf.a 00:02:33.147 [126/737] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.147 [127/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:33.147 [128/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:33.147 [129/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:33.147 [130/737] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:33.147 [131/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:33.147 [132/737] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.406 [133/737] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.406 [134/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:33.666 [135/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:33.666 [136/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:33.666 [137/737] Generating lib/rte_ethdev_def with a custom command 00:02:33.666 [138/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:33.666 [139/737] Generating lib/rte_ethdev_mingw with a custom command 00:02:33.666 [140/737] Generating lib/rte_pci_def with a custom command 00:02:33.666 [141/737] Generating lib/rte_pci_mingw with a custom command 00:02:33.666 [142/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:33.666 [143/737] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.666 [144/737] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:33.666 [145/737] Linking static target lib/librte_pci.a 00:02:33.925 [146/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:33.925 [147/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:33.925 [148/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:33.925 [149/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:33.925 [150/737] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.925 [151/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:33.925 [152/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:33.925 [153/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:33.925 [154/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:33.925 [155/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:33.925 [156/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:33.925 [157/737] Generating lib/rte_cmdline_def with a custom command 00:02:33.925 [158/737] Generating lib/rte_cmdline_mingw with a custom command 00:02:33.925 [159/737] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:34.185 [160/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:34.185 [161/737] Generating lib/rte_metrics_def with a custom command 00:02:34.185 [162/737] Generating lib/rte_metrics_mingw with a custom command 00:02:34.185 [163/737] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:34.185 [164/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:34.185 [165/737] Generating lib/rte_hash_def with a custom command 00:02:34.185 [166/737] Generating lib/rte_hash_mingw with a custom command 00:02:34.185 [167/737] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:34.185 [168/737] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:34.185 [169/737] Linking static target lib/librte_cmdline.a 00:02:34.185 [170/737] Generating lib/rte_timer_def with a custom command 00:02:34.185 [171/737] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:34.185 [172/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:34.185 [173/737] Generating lib/rte_timer_mingw with a custom command 00:02:34.444 [174/737] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:34.444 [175/737] Linking static target lib/librte_metrics.a 00:02:34.444 [176/737] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:34.444 [177/737] Linking static target lib/librte_timer.a 00:02:34.704 [178/737] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:34.704 [179/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:34.704 [180/737] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:34.704 [181/737] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.963 [182/737] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.963 [183/737] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:34.963 [184/737] Generating lib/rte_acl_def with a custom command 00:02:34.963 [185/737] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:34.963 [186/737] Generating lib/rte_acl_mingw with a custom command 00:02:34.963 [187/737] Generating lib/rte_bbdev_def with a custom command 00:02:35.230 [188/737] Generating lib/rte_bbdev_mingw with a custom command 00:02:35.230 [189/737] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:35.230 [190/737] Generating lib/rte_bitratestats_def with a custom command 00:02:35.230 [191/737] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:35.230 [192/737] Generating lib/rte_bitratestats_mingw with a custom command 00:02:35.230 [193/737] Linking static target lib/librte_ethdev.a 00:02:35.230 [194/737] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.487 [195/737] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:35.487 [196/737] Linking static target lib/librte_bitratestats.a 00:02:35.487 [197/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:35.487 [198/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:35.745 [199/737] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.745 [200/737] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:35.745 [201/737] Linking static target lib/librte_bbdev.a 00:02:36.003 [202/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:36.003 [203/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:36.261 [204/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:36.261 [205/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:36.261 [206/737] Compiling C object 
lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:36.261 [207/737] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:36.261 [208/737] Linking static target lib/librte_hash.a 00:02:36.261 [209/737] Generating lib/rte_bpf_def with a custom command 00:02:36.261 [210/737] Generating lib/rte_bpf_mingw with a custom command 00:02:36.261 [211/737] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.518 [212/737] Generating lib/rte_cfgfile_def with a custom command 00:02:36.518 [213/737] Generating lib/rte_cfgfile_mingw with a custom command 00:02:36.518 [214/737] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:36.518 [215/737] Linking static target lib/librte_cfgfile.a 00:02:36.776 [216/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:36.776 [217/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:36.776 [218/737] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:37.033 [219/737] Generating lib/rte_compressdev_def with a custom command 00:02:37.033 [220/737] Generating lib/rte_compressdev_mingw with a custom command 00:02:37.033 [221/737] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:37.033 [222/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:37.033 [223/737] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.033 [224/737] Generating lib/rte_cryptodev_def with a custom command 00:02:37.033 [225/737] Generating lib/rte_cryptodev_mingw with a custom command 00:02:37.033 [226/737] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.033 [227/737] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:37.033 [228/737] Linking static target lib/librte_bpf.a 00:02:37.291 [229/737] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:37.291 [230/737] Linking static target lib/librte_compressdev.a 00:02:37.291 [231/737] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:37.549 [232/737] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:37.549 [233/737] Generating lib/rte_distributor_def with a custom command 00:02:37.549 [234/737] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:37.549 [235/737] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.549 [236/737] Generating lib/rte_distributor_mingw with a custom command 00:02:37.549 [237/737] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:37.549 [238/737] Generating lib/rte_efd_def with a custom command 00:02:37.549 [239/737] Linking static target lib/librte_acl.a 00:02:37.549 [240/737] Generating lib/rte_efd_mingw with a custom command 00:02:37.549 [241/737] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:37.807 [242/737] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:37.807 [243/737] Linking static target lib/librte_distributor.a 00:02:37.807 [244/737] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:37.807 [245/737] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.065 [246/737] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:38.065 [247/737] Generating 
lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.323 [248/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:38.323 [249/737] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.323 [250/737] Generating lib/rte_eventdev_def with a custom command 00:02:38.323 [251/737] Generating lib/rte_eventdev_mingw with a custom command 00:02:38.582 [252/737] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:38.582 [253/737] Linking static target lib/librte_efd.a 00:02:38.582 [254/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:38.582 [255/737] Generating lib/rte_gpudev_def with a custom command 00:02:38.582 [256/737] Generating lib/rte_gpudev_mingw with a custom command 00:02:38.842 [257/737] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.842 [258/737] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:38.842 [259/737] Linking static target lib/librte_cryptodev.a 00:02:38.842 [260/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:38.842 [261/737] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:38.842 [262/737] Linking static target lib/librte_gpudev.a 00:02:38.842 [263/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:39.101 [264/737] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:39.101 [265/737] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:39.359 [266/737] Generating lib/rte_gro_def with a custom command 00:02:39.359 [267/737] Generating lib/rte_gro_mingw with a custom command 00:02:39.359 [268/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:39.359 [269/737] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:39.359 [270/737] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:39.617 [271/737] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:39.617 [272/737] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:39.617 [273/737] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:39.617 [274/737] Linking static target lib/librte_gro.a 00:02:39.874 [275/737] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.874 [276/737] Generating lib/rte_gso_def with a custom command 00:02:39.874 [277/737] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:39.874 [278/737] Generating lib/rte_gso_mingw with a custom command 00:02:39.874 [279/737] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.874 [280/737] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:39.874 [281/737] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:40.133 [282/737] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:40.133 [283/737] Linking static target lib/librte_gso.a 00:02:40.133 [284/737] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:40.133 [285/737] Linking static target lib/librte_eventdev.a 00:02:40.133 [286/737] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.391 [287/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:40.391 [288/737] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:40.391 [289/737] Generating lib/rte_ip_frag_mingw with a custom command 00:02:40.391 [290/737] Generating lib/rte_ip_frag_def with a custom command 00:02:40.391 [291/737] Generating lib/rte_jobstats_def with a custom command 00:02:40.391 [292/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:40.391 [293/737] Generating lib/rte_jobstats_mingw with a custom command 00:02:40.391 [294/737] Generating lib/rte_latencystats_def with a custom command 00:02:40.391 [295/737] Generating lib/rte_latencystats_mingw with a custom command 00:02:40.649 [296/737] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:40.649 [297/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:40.649 [298/737] Linking static target lib/librte_jobstats.a 00:02:40.649 [299/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:40.649 [300/737] Generating lib/rte_lpm_def with a custom command 00:02:40.649 [301/737] Generating lib/rte_lpm_mingw with a custom command 00:02:40.649 [302/737] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:40.649 [303/737] Linking static target lib/librte_ip_frag.a 00:02:40.907 [304/737] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.907 [305/737] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:40.907 [306/737] Linking static target lib/librte_latencystats.a 00:02:41.165 [307/737] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.165 [308/737] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.165 [309/737] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:41.165 [310/737] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.423 [311/737] Generating lib/rte_member_def with a custom command 00:02:41.423 [312/737] Generating lib/rte_member_mingw with a custom command 00:02:41.423 [313/737] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:41.423 [314/737] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:41.423 [315/737] Generating lib/rte_pcapng_def with a custom command 00:02:41.423 [316/737] Generating lib/rte_pcapng_mingw with a custom command 00:02:41.423 [317/737] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:41.423 [318/737] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:41.423 [319/737] Linking static target lib/librte_lpm.a 00:02:41.681 [320/737] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:41.681 [321/737] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.681 [322/737] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:41.681 [323/737] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:41.939 [324/737] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:41.939 [325/737] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.939 [326/737] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:41.939 [327/737] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:41.939 [328/737] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 
00:02:41.939 [329/737] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:41.939 [330/737] Linking static target lib/librte_pcapng.a 00:02:41.939 [331/737] Linking target lib/librte_eal.so.23.0 00:02:41.939 [332/737] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.939 [333/737] Generating lib/rte_power_def with a custom command 00:02:42.197 [334/737] Generating lib/rte_power_mingw with a custom command 00:02:42.197 [335/737] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:42.197 [336/737] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:42.197 [337/737] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:42.197 [338/737] Linking target lib/librte_ring.so.23.0 00:02:42.197 [339/737] Linking target lib/librte_meter.so.23.0 00:02:42.197 [340/737] Linking target lib/librte_pci.so.23.0 00:02:42.197 [341/737] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:42.197 [342/737] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:42.197 [343/737] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:42.456 [344/737] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.456 [345/737] Linking target lib/librte_rcu.so.23.0 00:02:42.456 [346/737] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:42.456 [347/737] Linking target lib/librte_timer.so.23.0 00:02:42.456 [348/737] Linking target lib/librte_mempool.so.23.0 00:02:42.456 [349/737] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:42.456 [350/737] Linking target lib/librte_acl.so.23.0 00:02:42.456 [351/737] Linking target lib/librte_cfgfile.so.23.0 00:02:42.456 [352/737] Generating lib/rte_rawdev_def with a custom command 00:02:42.456 [353/737] Linking target lib/librte_jobstats.so.23.0 00:02:42.456 [354/737] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:42.456 [355/737] Linking static target lib/librte_rawdev.a 00:02:42.456 [356/737] Generating lib/rte_rawdev_mingw with a custom command 00:02:42.456 [357/737] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:42.456 [358/737] Generating lib/rte_regexdev_def with a custom command 00:02:42.456 [359/737] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:42.456 [360/737] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:42.456 [361/737] Generating lib/rte_regexdev_mingw with a custom command 00:02:42.456 [362/737] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:42.456 [363/737] Linking static target lib/librte_power.a 00:02:42.456 [364/737] Generating lib/rte_dmadev_def with a custom command 00:02:42.456 [365/737] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:42.456 [366/737] Generating lib/rte_dmadev_mingw with a custom command 00:02:42.456 [367/737] Generating lib/rte_rib_def with a custom command 00:02:42.714 [368/737] Linking target lib/librte_mbuf.so.23.0 00:02:42.714 [369/737] Generating lib/rte_rib_mingw with a custom command 00:02:42.714 [370/737] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:42.714 [371/737] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.714 [372/737] Linking 
target lib/librte_bbdev.so.23.0 00:02:42.714 [373/737] Linking target lib/librte_net.so.23.0 00:02:42.714 [374/737] Linking target lib/librte_compressdev.so.23.0 00:02:42.971 [375/737] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:42.971 [376/737] Linking target lib/librte_cryptodev.so.23.0 00:02:42.971 [377/737] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:42.971 [378/737] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:42.971 [379/737] Linking target lib/librte_distributor.so.23.0 00:02:42.971 [380/737] Linking target lib/librte_cmdline.so.23.0 00:02:42.971 [381/737] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:42.971 [382/737] Linking static target lib/librte_member.a 00:02:42.971 [383/737] Linking target lib/librte_hash.so.23.0 00:02:42.971 [384/737] Linking target lib/librte_ethdev.so.23.0 00:02:42.971 [385/737] Linking target lib/librte_gpudev.so.23.0 00:02:42.971 [386/737] Linking static target lib/librte_regexdev.a 00:02:42.971 [387/737] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:42.971 [388/737] Linking static target lib/librte_dmadev.a 00:02:42.971 [389/737] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.971 [390/737] Linking target lib/librte_rawdev.so.23.0 00:02:42.971 [391/737] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:43.229 [392/737] Generating lib/rte_reorder_def with a custom command 00:02:43.229 [393/737] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:43.229 [394/737] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:43.229 [395/737] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:43.229 [396/737] Generating lib/rte_reorder_mingw with a custom command 00:02:43.229 [397/737] Linking target lib/librte_efd.so.23.0 00:02:43.229 [398/737] Linking target lib/librte_lpm.so.23.0 00:02:43.229 [399/737] Linking target lib/librte_bpf.so.23.0 00:02:43.229 [400/737] Linking target lib/librte_metrics.so.23.0 00:02:43.229 [401/737] Linking target lib/librte_eventdev.so.23.0 00:02:43.229 [402/737] Linking target lib/librte_gro.so.23.0 00:02:43.229 [403/737] Linking target lib/librte_gso.so.23.0 00:02:43.229 [404/737] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.229 [405/737] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:43.229 [406/737] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:43.229 [407/737] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:43.229 [408/737] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:43.487 [409/737] Linking target lib/librte_bitratestats.so.23.0 00:02:43.487 [410/737] Linking target lib/librte_latencystats.so.23.0 00:02:43.487 [411/737] Linking target lib/librte_ip_frag.so.23.0 00:02:43.487 [412/737] Linking static target lib/librte_rib.a 00:02:43.487 [413/737] Linking static target lib/librte_reorder.a 00:02:43.487 [414/737] Linking target lib/librte_member.so.23.0 00:02:43.487 [415/737] Linking target lib/librte_pcapng.so.23.0 00:02:43.487 [416/737] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:43.487 [417/737] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:43.487 [418/737] 
Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:43.487 [419/737] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:43.487 [420/737] Generating lib/rte_sched_def with a custom command 00:02:43.487 [421/737] Generating lib/rte_sched_mingw with a custom command 00:02:43.487 [422/737] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:43.487 [423/737] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:43.487 [424/737] Generating lib/rte_security_def with a custom command 00:02:43.487 [425/737] Generating lib/rte_security_mingw with a custom command 00:02:43.810 [426/737] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.810 [427/737] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:43.810 [428/737] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:43.810 [429/737] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.810 [430/737] Linking target lib/librte_reorder.so.23.0 00:02:43.810 [431/737] Generating lib/rte_stack_def with a custom command 00:02:43.810 [432/737] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:43.810 [433/737] Generating lib/rte_stack_mingw with a custom command 00:02:43.810 [434/737] Linking static target lib/librte_stack.a 00:02:43.810 [435/737] Linking target lib/librte_dmadev.so.23.0 00:02:43.810 [436/737] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.810 [437/737] Linking target lib/librte_power.so.23.0 00:02:43.810 [438/737] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.810 [439/737] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:43.810 [440/737] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:43.810 [441/737] Linking target lib/librte_regexdev.so.23.0 00:02:44.101 [442/737] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.101 [443/737] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.101 [444/737] Linking target lib/librte_rib.so.23.0 00:02:44.101 [445/737] Linking target lib/librte_stack.so.23.0 00:02:44.101 [446/737] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:44.101 [447/737] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:44.101 [448/737] Generating lib/rte_vhost_def with a custom command 00:02:44.101 [449/737] Linking static target lib/librte_security.a 00:02:44.101 [450/737] Generating lib/rte_vhost_mingw with a custom command 00:02:44.101 [451/737] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:44.360 [452/737] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:44.360 [453/737] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:44.618 [454/737] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.618 [455/737] Linking target lib/librte_security.so.23.0 00:02:44.618 [456/737] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:44.618 [457/737] Linking static target lib/librte_sched.a 00:02:44.877 [458/737] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:44.877 [459/737] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:44.877 [460/737] Compiling C object 
lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:44.877 [461/737] Generating lib/rte_ipsec_def with a custom command 00:02:44.877 [462/737] Generating lib/rte_ipsec_mingw with a custom command 00:02:44.877 [463/737] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:45.137 [464/737] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:45.137 [465/737] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:45.137 [466/737] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:45.137 [467/737] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:45.397 [468/737] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.397 [469/737] Linking target lib/librte_sched.so.23.0 00:02:45.397 [470/737] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:45.397 [471/737] Generating lib/rte_fib_def with a custom command 00:02:45.397 [472/737] Generating lib/rte_fib_mingw with a custom command 00:02:45.655 [473/737] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:45.655 [474/737] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:45.655 [475/737] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:45.655 [476/737] Linking static target lib/librte_ipsec.a 00:02:45.655 [477/737] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:45.655 [478/737] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:45.655 [479/737] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:45.912 [480/737] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:45.912 [481/737] Linking static target lib/librte_fib.a 00:02:46.171 [482/737] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.171 [483/737] Linking target lib/librte_ipsec.so.23.0 00:02:46.171 [484/737] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:46.171 [485/737] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:46.171 [486/737] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:46.171 [487/737] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:46.171 [488/737] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:46.430 [489/737] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.430 [490/737] Linking target lib/librte_fib.so.23.0 00:02:46.689 [491/737] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:46.689 [492/737] Generating lib/rte_port_def with a custom command 00:02:46.689 [493/737] Generating lib/rte_port_mingw with a custom command 00:02:46.689 [494/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:46.689 [495/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:46.689 [496/737] Generating lib/rte_pdump_def with a custom command 00:02:46.689 [497/737] Generating lib/rte_pdump_mingw with a custom command 00:02:46.947 [498/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:46.947 [499/737] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:46.947 [500/737] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:46.947 [501/737] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:46.947 [502/737] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:46.947 [503/737] Compiling C object 
lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:47.206 [504/737] Linking static target lib/librte_port.a 00:02:47.206 [505/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:47.206 [506/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:47.206 [507/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:47.206 [508/737] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:47.206 [509/737] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:47.463 [510/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:47.463 [511/737] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:47.463 [512/737] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:47.463 [513/737] Linking static target lib/librte_pdump.a 00:02:47.721 [514/737] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.721 [515/737] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:47.979 [516/737] Linking target lib/librte_port.so.23.0 00:02:47.979 [517/737] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.979 [518/737] Linking target lib/librte_pdump.so.23.0 00:02:47.979 [519/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:47.979 [520/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:47.979 [521/737] Generating lib/rte_table_def with a custom command 00:02:47.979 [522/737] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:47.979 [523/737] Generating lib/rte_table_mingw with a custom command 00:02:48.236 [524/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:48.236 [525/737] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:48.236 [526/737] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:48.236 [527/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:48.236 [528/737] Generating lib/rte_pipeline_def with a custom command 00:02:48.236 [529/737] Generating lib/rte_pipeline_mingw with a custom command 00:02:48.494 [530/737] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:48.494 [531/737] Linking static target lib/librte_table.a 00:02:48.494 [532/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:48.752 [533/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:48.752 [534/737] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:48.752 [535/737] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:49.010 [536/737] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:49.268 [537/737] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:49.268 [538/737] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.268 [539/737] Generating lib/rte_graph_def with a custom command 00:02:49.268 [540/737] Generating lib/rte_graph_mingw with a custom command 00:02:49.268 [541/737] Linking target lib/librte_table.so.23.0 00:02:49.268 [542/737] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:49.268 [543/737] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:49.526 [544/737] Compiling C object 
lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:49.526 [545/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:49.526 [546/737] Linking static target lib/librte_graph.a 00:02:49.526 [547/737] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:49.785 [548/737] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:49.785 [549/737] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:49.785 [550/737] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:50.080 [551/737] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:50.338 [552/737] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:50.338 [553/737] Generating lib/rte_node_def with a custom command 00:02:50.338 [554/737] Generating lib/rte_node_mingw with a custom command 00:02:50.338 [555/737] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:50.338 [556/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:50.338 [557/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:50.595 [558/737] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:50.595 [559/737] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.595 [560/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:50.595 [561/737] Linking target lib/librte_graph.so.23.0 00:02:50.595 [562/737] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:50.595 [563/737] Linking static target lib/librte_node.a 00:02:50.595 [564/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:50.595 [565/737] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:50.595 [566/737] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:50.595 [567/737] Generating drivers/rte_bus_pci_def with a custom command 00:02:50.595 [568/737] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:50.853 [569/737] Generating drivers/rte_bus_vdev_def with a custom command 00:02:50.853 [570/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:50.853 [571/737] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:50.853 [572/737] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:50.853 [573/737] Generating drivers/rte_mempool_ring_def with a custom command 00:02:50.853 [574/737] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:50.853 [575/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:50.853 [576/737] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:50.853 [577/737] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:50.853 [578/737] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:51.111 [579/737] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.111 [580/737] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:51.111 [581/737] Linking target lib/librte_node.so.23.0 00:02:51.111 [582/737] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:51.111 [583/737] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.111 [584/737] Linking static target drivers/librte_bus_pci.a 00:02:51.111 [585/737] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:51.111 
[586/737] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.111 [587/737] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.111 [588/737] Linking static target drivers/librte_bus_vdev.a 00:02:51.368 [589/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:51.368 [590/737] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.368 [591/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:51.368 [592/737] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.637 [593/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:51.637 [594/737] Linking target drivers/librte_bus_vdev.so.23.0 00:02:51.637 [595/737] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:51.637 [596/737] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:51.637 [597/737] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:51.637 [598/737] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.637 [599/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:51.637 [600/737] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:51.637 [601/737] Linking target drivers/librte_bus_pci.so.23.0 00:02:51.637 [602/737] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.895 [603/737] Linking static target drivers/librte_mempool_ring.a 00:02:51.895 [604/737] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:51.895 [605/737] Linking target drivers/librte_mempool_ring.so.23.0 00:02:51.895 [606/737] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:51.895 [607/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:52.459 [608/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:52.716 [609/737] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:52.716 [610/737] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:52.716 [611/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:53.281 [612/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:53.281 [613/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:53.281 [614/737] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:53.281 [615/737] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:53.281 [616/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:53.537 [617/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:53.537 [618/737] Generating drivers/rte_net_i40e_def with a custom command 00:02:53.537 [619/737] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:53.794 [620/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:54.051 [621/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:54.614 [622/737] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:54.614 
[623/737] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:54.614 [624/737] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:54.614 [625/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:54.614 [626/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:54.614 [627/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:54.871 [628/737] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:54.871 [629/737] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:54.871 [630/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:55.128 [631/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:55.128 [632/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:55.387 [633/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:55.387 [634/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:55.646 [635/737] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:55.646 [636/737] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:55.646 [637/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:55.646 [638/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:55.904 [639/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:55.904 [640/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:55.904 [641/737] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:55.904 [642/737] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:55.904 [643/737] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:55.904 [644/737] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:55.904 [645/737] Linking static target drivers/librte_net_i40e.a 00:02:56.163 [646/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:56.163 [647/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:56.480 [648/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:56.480 [649/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:56.480 [650/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:56.480 [651/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:56.480 [652/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:56.739 [653/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:56.739 [654/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:56.739 [655/737] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.739 [656/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:56.739 [657/737] Linking target drivers/librte_net_i40e.so.23.0 00:02:56.997 
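The rte_net_i40e.pmd.c file generated at [641] is a small stub that embeds the PMD's metadata (driver name, supported PCI IDs, kernel-module dependencies) into the driver binary. Once librte_net_i40e.so.23.0 is linked at [657], that metadata can be dumped with the helper script shipped in the DPDK tree. An illustrative command follows; it is not part of this CI run and exact flags vary by release:

    python3 /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py \
        /home/vagrant/spdk_repo/dpdk/build-tmp/drivers/librte_net_i40e.so.23.0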
[658/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:56.997 [659/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:56.997 [660/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:56.997 [661/737] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:57.255 [662/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:57.255 [663/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:57.514 [664/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:57.773 [665/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:57.773 [666/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:58.031 [667/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:58.288 [668/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:58.288 [669/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:58.288 [670/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:58.288 [671/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:58.288 [672/737] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:58.288 [673/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:58.546 [674/737] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:58.546 [675/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:58.804 [676/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:58.804 [677/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:58.804 [678/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:59.062 [679/737] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:59.062 [680/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:59.062 [681/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:59.062 [682/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:59.062 [683/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:59.332 [684/737] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:59.591 [685/737] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:59.591 [686/737] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:59.591 [687/737] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:59.591 [688/737] Linking static target lib/librte_vhost.a 00:02:59.591 [689/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:59.850 [690/737] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:59.850 [691/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:59.850 [692/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:00.523 [693/737] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:00.523 [694/737] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:00.523 [695/737] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 
00:03:00.523 [696/737] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:00.784 [697/737] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:01.042 [698/737] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:01.042 [699/737] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:01.042 [700/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:01.042 [701/737] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.301 [702/737] Linking target lib/librte_vhost.so.23.0 00:03:01.301 [703/737] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:01.301 [704/737] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:01.559 [705/737] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:01.817 [706/737] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:01.817 [707/737] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:01.817 [708/737] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:02.075 [709/737] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:02.075 [710/737] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:02.333 [711/737] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:02.333 [712/737] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:02.333 [713/737] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:02.333 [714/737] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:02.333 [715/737] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:02.592 [716/737] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:03.159 [717/737] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:05.060 [718/737] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:05.060 [719/737] Linking static target lib/librte_pipeline.a 00:03:05.318 [720/737] Linking target app/dpdk-pdump 00:03:05.318 [721/737] Linking target app/dpdk-test-fib 00:03:05.318 [722/737] Linking target app/dpdk-test-cmdline 00:03:05.318 [723/737] Linking target app/dpdk-test-compress-perf 00:03:05.318 [724/737] Linking target app/dpdk-proc-info 00:03:05.318 [725/737] Linking target app/dpdk-test-acl 00:03:05.318 [726/737] Linking target app/dpdk-test-bbdev 00:03:05.576 [727/737] Linking target app/dpdk-test-crypto-perf 00:03:05.576 [728/737] Linking target app/dpdk-test-eventdev 00:03:05.834 [729/737] Linking target app/dpdk-test-flow-perf 00:03:05.834 [730/737] Linking target app/dpdk-test-gpudev 00:03:05.834 [731/737] Linking target app/dpdk-test-pipeline 00:03:05.834 [732/737] Linking target app/dpdk-test-regex 00:03:05.834 [733/737] Linking target app/dpdk-testpmd 00:03:05.834 [734/737] Linking target app/dpdk-test-sad 00:03:05.834 [735/737] Linking target app/dpdk-test-security-perf 00:03:10.025 [736/737] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.025 [737/737] Linking target lib/librte_pipeline.so.23.0 00:03:10.025 07:13:43 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:10.025 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:10.025 [0/1] Installing files. 
00:03:10.290 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:10.290 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.291 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:10.292 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.293 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.294 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.294 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.294 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.294 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.294 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.294 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.294 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:10.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:10.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:10.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:10.294 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:10.294 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:10.294 Installing
/home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.294 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.294 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.294 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.294 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.294 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.294 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.294 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.294 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.294 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.294 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.294 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.294 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.294 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.294 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.294 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.294 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.294 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.294 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_acl.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 
Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:10.594 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:10.594 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:10.594 Installing 
drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:10.594 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:10.594 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.594 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.594 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.595 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.595 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.595 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.595 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.595 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.861 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.861 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.861 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.861 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.861 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.861 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.861 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.861 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
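At this point the install has laid down the complete runtime: static archives (.a) and versioned shared objects (.so.23.0) under build/lib, the four PMD driver objects destined for build/lib/dpdk/pmds-23.0, and the dpdk-* utility and test binaries under build/bin; the lines that follow populate the public header tree under build/include. A minimal consumer of such an install might look like the sketch below — a hedged illustration, not part of this run; the pkg-config compile line and the file name main.c are assumptions, targeting the DPDK 22.11 API being installed here.

    /* Hypothetical minimal app against the prefix installed above.
     * Assumed build line: gcc main.c $(pkg-config --cflags --libs libdpdk)
     * with PKG_CONFIG_PATH pointing at build/lib/pkgconfig. */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_debug.h>

    int main(int argc, char **argv)
    {
        /* Parse EAL arguments, reserve hugepage memory, probe buses/PMDs. */
        if (rte_eal_init(argc, argv) < 0)
            rte_panic("cannot init EAL\n");

        printf("EAL up: main lcore %u, %u lcores total\n",
               rte_lcore_id(), rte_lcore_count());

        rte_eal_cleanup(); /* release hugepages and other EAL state */
        return 0;
    }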
00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 
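The EAL headers copied just above (rte_launch.h, rte_lcore.h, rte_malloc.h and friends) carry DPDK's run-to-completion execution model. A hedged sketch of the launch pattern they provide, for illustration only (the worker function and its empty payload are invented names, not anything from this build):

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_launch.h>
    #include <rte_lcore.h>

    /* Runs once on each worker lcore; the body is illustrative. */
    static int worker(void *arg)
    {
        (void)arg;
        printf("hello from lcore %u\n", rte_lcore_id());
        return 0;
    }

    int main(int argc, char **argv)
    {
        unsigned lcore;

        if (rte_eal_init(argc, argv) < 0)
            return 1;

        /* Fan the function out to every worker lcore, then join them. */
        RTE_LCORE_FOREACH_WORKER(lcore)
            rte_eal_remote_launch(worker, NULL, lcore);
        rte_eal_mp_wait_lcore();

        rte_eal_cleanup();
        return 0;
    }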
00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 
Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing 
/home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing 
/home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:10.863 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:10.863 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:10.863 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:10.863 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:10.863 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:10.863 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:10.863 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:10.863 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:10.863 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:10.863 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:10.863 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:10.863 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:10.863 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:10.863 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:10.863 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:10.863 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:10.863 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:10.863 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:10.863 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:10.863 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:10.864 Installing symlink pointing to librte_pci.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:10.864 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:10.864 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:10.864 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:10.864 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:10.864 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:10.864 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:10.864 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:10.864 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:10.864 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:10.864 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:10.864 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:10.864 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:10.864 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:10.864 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:10.864 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:10.864 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:10.864 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:10.864 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:10.864 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:10.864 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:10.864 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:10.864 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:10.864 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:10.864 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:10.864 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:10.864 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:10.864 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:10.864 Installing symlink pointing to librte_eventdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:10.864 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:10.864 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:10.864 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:10.864 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:10.864 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:10.864 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:10.864 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:10.864 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:10.864 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:10.864 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:10.864 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:10.864 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:10.864 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:10.864 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:10.864 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:10.864 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:10.864 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:10.864 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:10.864 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:10.864 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:10.864 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:10.864 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:10.864 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:10.864 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:10.864 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:10.864 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:10.864 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:10.864 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:10.864 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:10.864 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:10.864 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:10.864 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 
00:03:10.864 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:10.864 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:10.864 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:10.864 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:10.864 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:10.864 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:10.864 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:10.864 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:10.864 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:10.864 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:10.864 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:10.864 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:10.864 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:10.864 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:10.864 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:10.864 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:10.864 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:10.864 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:10.864 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:10.864 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:10.864 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:10.864 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:10.864 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:10.864 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:10.864 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:10.864 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:10.864 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:10.864 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:10.864 Installing symlink pointing to librte_table.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:10.864 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:10.864 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:10.864 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:10.864 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:10.864 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:10.864 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:10.864 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:10.864 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:10.864 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:10.864 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:10.864 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:10.864 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:10.864 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:10.864 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:10.864 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:10.864 07:13:44 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:03:10.864 07:13:44 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:10.864 07:13:44 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:03:10.864 07:13:44 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:10.864 ************************************ 00:03:10.864 END TEST build_native_dpdk 00:03:10.864 ************************************ 00:03:10.864 00:03:10.864 real 0m46.961s 00:03:10.864 user 4m36.952s 00:03:10.864 sys 0m54.117s 00:03:10.864 07:13:44 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:10.864 07:13:44 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:10.864 07:13:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:10.864 07:13:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:10.864 07:13:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:10.864 07:13:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:10.864 07:13:44 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:03:10.864 07:13:44 -- spdk/autobuild.sh@58 -- $ unittest_build 00:03:10.865 07:13:44 -- common/autobuild_common.sh@413 -- $ run_test unittest_build _unittest_build 00:03:10.865 07:13:44 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:03:10.865 07:13:44 -- 
common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:10.865 07:13:44 -- common/autotest_common.sh@10 -- $ set +x 00:03:10.865 ************************************ 00:03:10.865 START TEST unittest_build 00:03:10.865 ************************************ 00:03:10.865 07:13:44 unittest_build -- common/autotest_common.sh@1121 -- $ _unittest_build 00:03:10.865 07:13:44 unittest_build -- common/autobuild_common.sh@404 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --without-shared 00:03:11.124 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:11.124 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.124 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:11.124 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:11.382 Using 'verbs' RDMA provider 00:03:27.199 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:45.291 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:45.291 Creating mk/config.mk...done. 00:03:45.291 Creating mk/cc.flags.mk...done. 00:03:45.291 Type 'make' to build. 00:03:45.291 07:14:16 unittest_build -- common/autobuild_common.sh@405 -- $ make -j10 00:03:45.291 make[1]: Nothing to be done for 'all'. 00:04:03.379 CC lib/log/log.o 00:04:03.379 CC lib/log/log_flags.o 00:04:03.379 CC lib/ut_mock/mock.o 00:04:03.379 CC lib/log/log_deprecated.o 00:04:03.379 CC lib/ut/ut.o 00:04:03.379 LIB libspdk_ut_mock.a 00:04:03.379 LIB libspdk_log.a 00:04:03.379 LIB libspdk_ut.a 00:04:03.379 CC lib/util/base64.o 00:04:03.379 CC lib/util/bit_array.o 00:04:03.379 CC lib/util/cpuset.o 00:04:03.379 CC lib/util/crc16.o 00:04:03.379 CC lib/util/crc32c.o 00:04:03.379 CC lib/util/crc32.o 00:04:03.379 CC lib/dma/dma.o 00:04:03.379 CC lib/ioat/ioat.o 00:04:03.379 CXX lib/trace_parser/trace.o 00:04:03.379 CC lib/vfio_user/host/vfio_user_pci.o 00:04:03.379 CC lib/vfio_user/host/vfio_user.o 00:04:03.379 CC lib/util/crc32_ieee.o 00:04:03.379 CC lib/util/crc64.o 00:04:03.379 CC lib/util/dif.o 00:04:03.379 LIB libspdk_dma.a 00:04:03.379 CC lib/util/fd.o 00:04:03.379 CC lib/util/file.o 00:04:03.379 CC lib/util/hexlify.o 00:04:03.379 LIB libspdk_ioat.a 00:04:03.379 CC lib/util/iov.o 00:04:03.379 CC lib/util/math.o 00:04:03.379 CC lib/util/pipe.o 00:04:03.379 CC lib/util/strerror_tls.o 00:04:03.379 CC lib/util/string.o 00:04:03.379 CC lib/util/uuid.o 00:04:03.379 LIB libspdk_vfio_user.a 00:04:03.379 CC lib/util/fd_group.o 00:04:03.379 CC lib/util/xor.o 00:04:03.379 CC lib/util/zipf.o 00:04:03.379 LIB libspdk_util.a 00:04:03.379 LIB libspdk_trace_parser.a 00:04:03.379 CC lib/rdma/rdma_verbs.o 00:04:03.379 CC lib/rdma/common.o 00:04:03.379 CC lib/json/json_parse.o 00:04:03.379 CC lib/json/json_write.o 00:04:03.379 CC lib/json/json_util.o 00:04:03.379 CC lib/idxd/idxd.o 00:04:03.379 CC lib/env_dpdk/env.o 00:04:03.379 CC lib/vmd/vmd.o 00:04:03.379 CC lib/vmd/led.o 00:04:03.379 CC lib/conf/conf.o 00:04:03.379 CC lib/idxd/idxd_user.o 00:04:03.379 CC lib/env_dpdk/memory.o 00:04:03.379 CC lib/env_dpdk/pci.o 00:04:03.379 CC lib/env_dpdk/init.o 00:04:03.379 LIB libspdk_conf.a 00:04:03.379 LIB libspdk_rdma.a 00:04:03.697 CC lib/env_dpdk/threads.o 00:04:03.697 LIB libspdk_json.a 00:04:03.697 CC lib/env_dpdk/pci_ioat.o 00:04:03.697 CC 
lib/env_dpdk/pci_virtio.o 00:04:03.697 CC lib/env_dpdk/pci_vmd.o 00:04:03.697 CC lib/env_dpdk/pci_idxd.o 00:04:03.697 CC lib/env_dpdk/pci_event.o 00:04:03.697 CC lib/env_dpdk/sigbus_handler.o 00:04:03.697 CC lib/env_dpdk/pci_dpdk.o 00:04:03.697 LIB libspdk_idxd.a 00:04:03.697 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:03.697 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:03.959 LIB libspdk_vmd.a 00:04:03.959 CC lib/jsonrpc/jsonrpc_server.o 00:04:03.959 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:03.959 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:03.959 CC lib/jsonrpc/jsonrpc_client.o 00:04:04.218 LIB libspdk_jsonrpc.a 00:04:04.477 CC lib/rpc/rpc.o 00:04:04.477 LIB libspdk_env_dpdk.a 00:04:04.736 LIB libspdk_rpc.a 00:04:04.736 CC lib/trace/trace.o 00:04:04.736 CC lib/trace/trace_flags.o 00:04:04.736 CC lib/trace/trace_rpc.o 00:04:04.736 CC lib/keyring/keyring.o 00:04:04.736 CC lib/keyring/keyring_rpc.o 00:04:04.736 CC lib/notify/notify.o 00:04:04.736 CC lib/notify/notify_rpc.o 00:04:04.995 LIB libspdk_notify.a 00:04:04.995 LIB libspdk_keyring.a 00:04:04.995 LIB libspdk_trace.a 00:04:05.255 CC lib/thread/thread.o 00:04:05.255 CC lib/thread/iobuf.o 00:04:05.255 CC lib/sock/sock.o 00:04:05.255 CC lib/sock/sock_rpc.o 00:04:05.824 LIB libspdk_sock.a 00:04:06.084 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:06.084 CC lib/nvme/nvme_ctrlr.o 00:04:06.084 CC lib/nvme/nvme_fabric.o 00:04:06.084 CC lib/nvme/nvme_ns_cmd.o 00:04:06.084 CC lib/nvme/nvme_pcie_common.o 00:04:06.084 CC lib/nvme/nvme_qpair.o 00:04:06.084 CC lib/nvme/nvme_ns.o 00:04:06.084 CC lib/nvme/nvme.o 00:04:06.084 CC lib/nvme/nvme_pcie.o 00:04:06.652 CC lib/nvme/nvme_quirks.o 00:04:06.652 CC lib/nvme/nvme_transport.o 00:04:06.652 CC lib/nvme/nvme_discovery.o 00:04:06.652 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:06.910 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:06.910 LIB libspdk_thread.a 00:04:06.910 CC lib/nvme/nvme_tcp.o 00:04:06.910 CC lib/nvme/nvme_opal.o 00:04:06.910 CC lib/nvme/nvme_io_msg.o 00:04:06.910 CC lib/nvme/nvme_poll_group.o 00:04:06.910 CC lib/nvme/nvme_zns.o 00:04:07.169 CC lib/nvme/nvme_stubs.o 00:04:07.169 CC lib/nvme/nvme_auth.o 00:04:07.169 CC lib/nvme/nvme_cuse.o 00:04:07.169 CC lib/nvme/nvme_rdma.o 00:04:07.427 CC lib/accel/accel.o 00:04:07.427 CC lib/blob/blobstore.o 00:04:07.427 CC lib/init/json_config.o 00:04:07.427 CC lib/init/subsystem.o 00:04:07.427 CC lib/virtio/virtio.o 00:04:07.687 CC lib/virtio/virtio_vhost_user.o 00:04:07.687 CC lib/init/subsystem_rpc.o 00:04:07.687 CC lib/virtio/virtio_vfio_user.o 00:04:07.946 CC lib/init/rpc.o 00:04:07.946 CC lib/blob/request.o 00:04:07.946 CC lib/virtio/virtio_pci.o 00:04:07.946 CC lib/blob/zeroes.o 00:04:07.946 LIB libspdk_init.a 00:04:07.946 CC lib/blob/blob_bs_dev.o 00:04:07.946 CC lib/accel/accel_rpc.o 00:04:08.205 CC lib/accel/accel_sw.o 00:04:08.205 LIB libspdk_virtio.a 00:04:08.205 CC lib/event/app.o 00:04:08.205 CC lib/event/log_rpc.o 00:04:08.205 CC lib/event/reactor.o 00:04:08.205 CC lib/event/app_rpc.o 00:04:08.205 CC lib/event/scheduler_static.o 00:04:08.205 LIB libspdk_nvme.a 00:04:08.470 LIB libspdk_accel.a 00:04:08.728 LIB libspdk_event.a 00:04:08.728 CC lib/bdev/bdev.o 00:04:08.728 CC lib/bdev/bdev_rpc.o 00:04:08.728 CC lib/bdev/bdev_zone.o 00:04:08.729 CC lib/bdev/part.o 00:04:08.729 CC lib/bdev/scsi_nvme.o 00:04:10.631 LIB libspdk_blob.a 00:04:10.889 CC lib/blobfs/blobfs.o 00:04:10.889 CC lib/blobfs/tree.o 00:04:10.889 CC lib/lvol/lvol.o 00:04:11.148 LIB libspdk_bdev.a 00:04:11.407 CC lib/scsi/lun.o 00:04:11.407 CC lib/scsi/scsi.o 00:04:11.407 CC lib/scsi/port.o 00:04:11.407 CC 
lib/scsi/scsi_bdev.o 00:04:11.407 CC lib/scsi/dev.o 00:04:11.407 CC lib/nbd/nbd.o 00:04:11.407 CC lib/nvmf/ctrlr.o 00:04:11.407 CC lib/ftl/ftl_core.o 00:04:11.407 CC lib/ftl/ftl_init.o 00:04:11.407 LIB libspdk_blobfs.a 00:04:11.666 CC lib/ftl/ftl_layout.o 00:04:11.666 CC lib/ftl/ftl_debug.o 00:04:11.666 CC lib/ftl/ftl_io.o 00:04:11.666 CC lib/ftl/ftl_sb.o 00:04:11.666 CC lib/ftl/ftl_l2p.o 00:04:11.925 CC lib/nbd/nbd_rpc.o 00:04:11.925 LIB libspdk_lvol.a 00:04:11.925 CC lib/nvmf/ctrlr_discovery.o 00:04:11.925 CC lib/nvmf/ctrlr_bdev.o 00:04:11.925 CC lib/ftl/ftl_l2p_flat.o 00:04:11.925 CC lib/scsi/scsi_pr.o 00:04:11.925 CC lib/scsi/scsi_rpc.o 00:04:11.925 CC lib/scsi/task.o 00:04:11.925 CC lib/ftl/ftl_nv_cache.o 00:04:11.925 CC lib/ftl/ftl_band.o 00:04:11.925 LIB libspdk_nbd.a 00:04:12.183 CC lib/ftl/ftl_band_ops.o 00:04:12.183 CC lib/ftl/ftl_writer.o 00:04:12.183 CC lib/nvmf/subsystem.o 00:04:12.183 CC lib/nvmf/nvmf.o 00:04:12.183 LIB libspdk_scsi.a 00:04:12.183 CC lib/nvmf/nvmf_rpc.o 00:04:12.183 CC lib/nvmf/transport.o 00:04:12.442 CC lib/ftl/ftl_rq.o 00:04:12.442 CC lib/iscsi/conn.o 00:04:12.442 CC lib/vhost/vhost.o 00:04:12.442 CC lib/vhost/vhost_rpc.o 00:04:12.701 CC lib/iscsi/init_grp.o 00:04:12.701 CC lib/nvmf/tcp.o 00:04:12.959 CC lib/nvmf/stubs.o 00:04:12.959 CC lib/iscsi/iscsi.o 00:04:12.959 CC lib/iscsi/md5.o 00:04:12.959 CC lib/vhost/vhost_scsi.o 00:04:12.959 CC lib/ftl/ftl_reloc.o 00:04:13.218 CC lib/vhost/vhost_blk.o 00:04:13.218 CC lib/vhost/rte_vhost_user.o 00:04:13.218 CC lib/ftl/ftl_l2p_cache.o 00:04:13.218 CC lib/nvmf/mdns_server.o 00:04:13.476 CC lib/nvmf/rdma.o 00:04:13.476 CC lib/iscsi/param.o 00:04:13.476 CC lib/iscsi/portal_grp.o 00:04:13.733 CC lib/iscsi/tgt_node.o 00:04:13.733 CC lib/iscsi/iscsi_subsystem.o 00:04:13.733 CC lib/nvmf/auth.o 00:04:13.733 CC lib/ftl/ftl_p2l.o 00:04:13.733 CC lib/ftl/mngt/ftl_mngt.o 00:04:13.990 CC lib/iscsi/iscsi_rpc.o 00:04:13.990 LIB libspdk_vhost.a 00:04:14.251 CC lib/iscsi/task.o 00:04:14.251 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:14.251 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:14.251 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:14.251 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:14.251 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:14.251 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:14.508 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:14.508 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:14.508 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:14.508 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:14.508 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:14.508 LIB libspdk_iscsi.a 00:04:14.508 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:14.508 CC lib/ftl/utils/ftl_conf.o 00:04:14.508 CC lib/ftl/utils/ftl_md.o 00:04:14.508 CC lib/ftl/utils/ftl_mempool.o 00:04:14.508 CC lib/ftl/utils/ftl_bitmap.o 00:04:14.508 CC lib/ftl/utils/ftl_property.o 00:04:14.766 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:14.766 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:14.766 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:14.766 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:14.766 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:14.766 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:15.023 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:15.023 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:15.023 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:15.023 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:15.023 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:15.023 CC lib/ftl/base/ftl_base_dev.o 00:04:15.023 CC lib/ftl/base/ftl_base_bdev.o 00:04:15.023 CC lib/ftl/ftl_trace.o 00:04:15.281 LIB libspdk_ftl.a 00:04:15.845 LIB libspdk_nvmf.a 00:04:16.102 CC module/env_dpdk/env_dpdk_rpc.o 
00:04:16.102 CC module/sock/posix/posix.o 00:04:16.102 CC module/accel/error/accel_error.o 00:04:16.102 CC module/accel/ioat/accel_ioat.o 00:04:16.102 CC module/accel/dsa/accel_dsa.o 00:04:16.102 CC module/blob/bdev/blob_bdev.o 00:04:16.102 CC module/keyring/file/keyring.o 00:04:16.102 CC module/accel/iaa/accel_iaa.o 00:04:16.102 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:16.102 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:16.370 LIB libspdk_env_dpdk_rpc.a 00:04:16.370 CC module/keyring/file/keyring_rpc.o 00:04:16.370 CC module/accel/ioat/accel_ioat_rpc.o 00:04:16.370 CC module/accel/error/accel_error_rpc.o 00:04:16.370 CC module/accel/iaa/accel_iaa_rpc.o 00:04:16.370 LIB libspdk_scheduler_dpdk_governor.a 00:04:16.370 LIB libspdk_scheduler_dynamic.a 00:04:16.370 CC module/accel/dsa/accel_dsa_rpc.o 00:04:16.370 LIB libspdk_blob_bdev.a 00:04:16.370 LIB libspdk_keyring_file.a 00:04:16.370 LIB libspdk_accel_error.a 00:04:16.370 LIB libspdk_accel_ioat.a 00:04:16.370 LIB libspdk_accel_iaa.a 00:04:16.370 CC module/keyring/linux/keyring.o 00:04:16.645 CC module/keyring/linux/keyring_rpc.o 00:04:16.645 CC module/scheduler/gscheduler/gscheduler.o 00:04:16.645 LIB libspdk_accel_dsa.a 00:04:16.645 LIB libspdk_keyring_linux.a 00:04:16.645 LIB libspdk_scheduler_gscheduler.a 00:04:16.645 CC module/blobfs/bdev/blobfs_bdev.o 00:04:16.645 CC module/bdev/error/vbdev_error.o 00:04:16.645 CC module/bdev/gpt/gpt.o 00:04:16.645 CC module/bdev/delay/vbdev_delay.o 00:04:16.645 CC module/bdev/lvol/vbdev_lvol.o 00:04:16.645 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:16.645 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:16.645 CC module/bdev/malloc/bdev_malloc.o 00:04:16.645 CC module/bdev/null/bdev_null.o 00:04:16.903 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:16.903 CC module/bdev/gpt/vbdev_gpt.o 00:04:16.903 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:16.903 LIB libspdk_sock_posix.a 00:04:16.903 CC module/bdev/error/vbdev_error_rpc.o 00:04:16.903 CC module/bdev/null/bdev_null_rpc.o 00:04:16.903 LIB libspdk_blobfs_bdev.a 00:04:16.903 LIB libspdk_bdev_delay.a 00:04:17.162 LIB libspdk_bdev_error.a 00:04:17.162 LIB libspdk_bdev_null.a 00:04:17.162 LIB libspdk_bdev_gpt.a 00:04:17.162 LIB libspdk_bdev_malloc.a 00:04:17.162 CC module/bdev/nvme/bdev_nvme.o 00:04:17.162 CC module/bdev/raid/bdev_raid.o 00:04:17.162 CC module/bdev/passthru/vbdev_passthru.o 00:04:17.162 CC module/bdev/split/vbdev_split.o 00:04:17.162 CC module/bdev/split/vbdev_split_rpc.o 00:04:17.162 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:17.162 LIB libspdk_bdev_lvol.a 00:04:17.162 CC module/bdev/aio/bdev_aio.o 00:04:17.162 CC module/bdev/ftl/bdev_ftl.o 00:04:17.420 CC module/bdev/iscsi/bdev_iscsi.o 00:04:17.420 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:17.420 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:17.420 LIB libspdk_bdev_split.a 00:04:17.420 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:17.420 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:17.420 CC module/bdev/aio/bdev_aio_rpc.o 00:04:17.420 CC module/bdev/raid/bdev_raid_rpc.o 00:04:17.678 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:17.678 LIB libspdk_bdev_zone_block.a 00:04:17.678 LIB libspdk_bdev_passthru.a 00:04:17.678 CC module/bdev/raid/bdev_raid_sb.o 00:04:17.678 LIB libspdk_bdev_ftl.a 00:04:17.678 LIB libspdk_bdev_iscsi.a 00:04:17.678 LIB libspdk_bdev_aio.a 00:04:17.678 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:17.678 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:17.678 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:17.678 CC 
module/bdev/nvme/nvme_rpc.o 00:04:17.678 CC module/bdev/raid/raid0.o 00:04:17.678 CC module/bdev/nvme/bdev_mdns_client.o 00:04:17.936 CC module/bdev/nvme/vbdev_opal.o 00:04:17.936 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:17.936 CC module/bdev/raid/raid1.o 00:04:17.936 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:17.936 CC module/bdev/raid/concat.o 00:04:17.936 CC module/bdev/raid/raid5f.o 00:04:18.194 LIB libspdk_bdev_virtio.a 00:04:18.453 LIB libspdk_bdev_raid.a 00:04:19.396 LIB libspdk_bdev_nvme.a 00:04:19.961 CC module/event/subsystems/sock/sock.o 00:04:19.961 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:19.961 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:19.961 CC module/event/subsystems/iobuf/iobuf.o 00:04:19.961 CC module/event/subsystems/keyring/keyring.o 00:04:19.961 CC module/event/subsystems/scheduler/scheduler.o 00:04:19.961 CC module/event/subsystems/vmd/vmd.o 00:04:19.961 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:19.961 LIB libspdk_event_keyring.a 00:04:19.961 LIB libspdk_event_vhost_blk.a 00:04:19.961 LIB libspdk_event_sock.a 00:04:19.961 LIB libspdk_event_scheduler.a 00:04:20.219 LIB libspdk_event_iobuf.a 00:04:20.219 LIB libspdk_event_vmd.a 00:04:20.477 CC module/event/subsystems/accel/accel.o 00:04:20.477 LIB libspdk_event_accel.a 00:04:21.043 CC module/event/subsystems/bdev/bdev.o 00:04:21.043 LIB libspdk_event_bdev.a 00:04:21.302 CC module/event/subsystems/nbd/nbd.o 00:04:21.302 CC module/event/subsystems/scsi/scsi.o 00:04:21.302 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:21.302 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:21.561 LIB libspdk_event_scsi.a 00:04:21.561 LIB libspdk_event_nbd.a 00:04:21.561 LIB libspdk_event_nvmf.a 00:04:21.820 CC module/event/subsystems/iscsi/iscsi.o 00:04:21.820 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:22.078 LIB libspdk_event_vhost_scsi.a 00:04:22.078 LIB libspdk_event_iscsi.a 00:04:22.337 TEST_HEADER include/spdk/accel.h 00:04:22.337 TEST_HEADER include/spdk/accel_module.h 00:04:22.337 CXX app/trace/trace.o 00:04:22.337 TEST_HEADER include/spdk/assert.h 00:04:22.337 TEST_HEADER include/spdk/barrier.h 00:04:22.337 TEST_HEADER include/spdk/base64.h 00:04:22.337 TEST_HEADER include/spdk/bdev.h 00:04:22.337 TEST_HEADER include/spdk/bdev_module.h 00:04:22.337 TEST_HEADER include/spdk/bdev_zone.h 00:04:22.337 TEST_HEADER include/spdk/bit_array.h 00:04:22.337 TEST_HEADER include/spdk/bit_pool.h 00:04:22.337 TEST_HEADER include/spdk/blob.h 00:04:22.337 TEST_HEADER include/spdk/blob_bdev.h 00:04:22.337 TEST_HEADER include/spdk/blobfs.h 00:04:22.337 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:22.337 TEST_HEADER include/spdk/conf.h 00:04:22.337 TEST_HEADER include/spdk/config.h 00:04:22.337 TEST_HEADER include/spdk/cpuset.h 00:04:22.337 TEST_HEADER include/spdk/crc16.h 00:04:22.337 TEST_HEADER include/spdk/crc32.h 00:04:22.337 TEST_HEADER include/spdk/crc64.h 00:04:22.337 TEST_HEADER include/spdk/dif.h 00:04:22.337 TEST_HEADER include/spdk/dma.h 00:04:22.337 TEST_HEADER include/spdk/endian.h 00:04:22.337 TEST_HEADER include/spdk/env.h 00:04:22.337 CC test/event/event_perf/event_perf.o 00:04:22.337 TEST_HEADER include/spdk/env_dpdk.h 00:04:22.337 TEST_HEADER include/spdk/event.h 00:04:22.337 TEST_HEADER include/spdk/fd.h 00:04:22.337 TEST_HEADER include/spdk/fd_group.h 00:04:22.337 TEST_HEADER include/spdk/file.h 00:04:22.337 TEST_HEADER include/spdk/ftl.h 00:04:22.337 TEST_HEADER include/spdk/gpt_spec.h 00:04:22.337 CC examples/accel/perf/accel_perf.o 00:04:22.337 TEST_HEADER 
include/spdk/hexlify.h 00:04:22.337 TEST_HEADER include/spdk/histogram_data.h 00:04:22.337 TEST_HEADER include/spdk/idxd.h 00:04:22.337 TEST_HEADER include/spdk/idxd_spec.h 00:04:22.337 TEST_HEADER include/spdk/init.h 00:04:22.337 TEST_HEADER include/spdk/ioat.h 00:04:22.337 CC test/accel/dif/dif.o 00:04:22.337 TEST_HEADER include/spdk/ioat_spec.h 00:04:22.337 TEST_HEADER include/spdk/iscsi_spec.h 00:04:22.337 TEST_HEADER include/spdk/json.h 00:04:22.337 CC test/blobfs/mkfs/mkfs.o 00:04:22.337 CC test/bdev/bdevio/bdevio.o 00:04:22.337 CC test/dma/test_dma/test_dma.o 00:04:22.337 TEST_HEADER include/spdk/jsonrpc.h 00:04:22.337 TEST_HEADER include/spdk/keyring.h 00:04:22.337 TEST_HEADER include/spdk/keyring_module.h 00:04:22.337 TEST_HEADER include/spdk/likely.h 00:04:22.337 TEST_HEADER include/spdk/log.h 00:04:22.337 TEST_HEADER include/spdk/lvol.h 00:04:22.602 TEST_HEADER include/spdk/memory.h 00:04:22.602 CC test/app/bdev_svc/bdev_svc.o 00:04:22.602 TEST_HEADER include/spdk/mmio.h 00:04:22.602 TEST_HEADER include/spdk/nbd.h 00:04:22.602 TEST_HEADER include/spdk/notify.h 00:04:22.603 TEST_HEADER include/spdk/nvme.h 00:04:22.603 TEST_HEADER include/spdk/nvme_intel.h 00:04:22.603 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:22.603 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:22.603 TEST_HEADER include/spdk/nvme_spec.h 00:04:22.603 TEST_HEADER include/spdk/nvme_zns.h 00:04:22.603 TEST_HEADER include/spdk/nvmf.h 00:04:22.603 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:22.603 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:22.603 TEST_HEADER include/spdk/nvmf_spec.h 00:04:22.603 TEST_HEADER include/spdk/nvmf_transport.h 00:04:22.603 TEST_HEADER include/spdk/opal.h 00:04:22.603 TEST_HEADER include/spdk/opal_spec.h 00:04:22.603 TEST_HEADER include/spdk/pci_ids.h 00:04:22.603 TEST_HEADER include/spdk/pipe.h 00:04:22.603 CC test/env/mem_callbacks/mem_callbacks.o 00:04:22.603 TEST_HEADER include/spdk/queue.h 00:04:22.603 TEST_HEADER include/spdk/reduce.h 00:04:22.603 TEST_HEADER include/spdk/rpc.h 00:04:22.603 TEST_HEADER include/spdk/scheduler.h 00:04:22.603 TEST_HEADER include/spdk/scsi.h 00:04:22.603 TEST_HEADER include/spdk/scsi_spec.h 00:04:22.603 TEST_HEADER include/spdk/sock.h 00:04:22.603 TEST_HEADER include/spdk/stdinc.h 00:04:22.603 TEST_HEADER include/spdk/string.h 00:04:22.603 TEST_HEADER include/spdk/thread.h 00:04:22.603 TEST_HEADER include/spdk/trace.h 00:04:22.603 TEST_HEADER include/spdk/trace_parser.h 00:04:22.603 TEST_HEADER include/spdk/tree.h 00:04:22.603 TEST_HEADER include/spdk/ublk.h 00:04:22.603 TEST_HEADER include/spdk/util.h 00:04:22.603 TEST_HEADER include/spdk/uuid.h 00:04:22.603 TEST_HEADER include/spdk/version.h 00:04:22.603 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:22.603 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:22.603 TEST_HEADER include/spdk/vhost.h 00:04:22.603 TEST_HEADER include/spdk/vmd.h 00:04:22.603 TEST_HEADER include/spdk/xor.h 00:04:22.603 TEST_HEADER include/spdk/zipf.h 00:04:22.603 CXX test/cpp_headers/accel.o 00:04:22.603 LINK event_perf 00:04:22.603 LINK bdev_svc 00:04:22.603 LINK mkfs 00:04:22.866 LINK spdk_trace 00:04:22.866 CXX test/cpp_headers/accel_module.o 00:04:22.866 LINK mem_callbacks 00:04:22.866 LINK bdevio 00:04:22.866 LINK test_dma 00:04:22.866 LINK dif 00:04:22.866 CXX test/cpp_headers/assert.o 00:04:22.866 LINK accel_perf 00:04:23.124 CXX test/cpp_headers/barrier.o 00:04:23.124 CC test/env/vtophys/vtophys.o 00:04:23.124 CC app/trace_record/trace_record.o 00:04:23.124 CXX test/cpp_headers/base64.o 00:04:23.382 LINK vtophys 
00:04:23.382 CXX test/cpp_headers/bdev.o 00:04:23.382 CC test/event/reactor/reactor.o 00:04:23.382 LINK spdk_trace_record 00:04:23.642 LINK reactor 00:04:23.642 CXX test/cpp_headers/bdev_module.o 00:04:23.642 CXX test/cpp_headers/bdev_zone.o 00:04:23.901 CXX test/cpp_headers/bit_array.o 00:04:24.161 CC app/nvmf_tgt/nvmf_main.o 00:04:24.161 CXX test/cpp_headers/bit_pool.o 00:04:24.161 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:24.161 CXX test/cpp_headers/blob.o 00:04:24.161 LINK nvmf_tgt 00:04:24.161 CC examples/bdev/hello_world/hello_bdev.o 00:04:24.161 LINK env_dpdk_post_init 00:04:24.420 CC test/event/reactor_perf/reactor_perf.o 00:04:24.420 CXX test/cpp_headers/blob_bdev.o 00:04:24.420 LINK hello_bdev 00:04:24.420 LINK reactor_perf 00:04:24.679 CXX test/cpp_headers/blobfs.o 00:04:24.679 CXX test/cpp_headers/blobfs_bdev.o 00:04:24.939 CXX test/cpp_headers/conf.o 00:04:25.198 CXX test/cpp_headers/config.o 00:04:25.198 CXX test/cpp_headers/cpuset.o 00:04:25.198 CXX test/cpp_headers/crc16.o 00:04:25.551 CC test/event/app_repeat/app_repeat.o 00:04:25.551 CC test/env/memory/memory_ut.o 00:04:25.551 CXX test/cpp_headers/crc32.o 00:04:25.551 LINK app_repeat 00:04:25.551 CXX test/cpp_headers/crc64.o 00:04:25.817 CXX test/cpp_headers/dif.o 00:04:26.076 CXX test/cpp_headers/dma.o 00:04:26.076 CC app/iscsi_tgt/iscsi_tgt.o 00:04:26.076 CXX test/cpp_headers/endian.o 00:04:26.076 LINK memory_ut 00:04:26.076 LINK iscsi_tgt 00:04:26.335 CC test/event/scheduler/scheduler.o 00:04:26.335 CXX test/cpp_headers/env.o 00:04:26.335 CXX test/cpp_headers/env_dpdk.o 00:04:26.335 CXX test/cpp_headers/event.o 00:04:26.335 CC test/env/pci/pci_ut.o 00:04:26.594 LINK scheduler 00:04:26.594 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:26.594 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:26.594 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:26.594 CXX test/cpp_headers/fd.o 00:04:26.594 CXX test/cpp_headers/fd_group.o 00:04:26.594 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:26.854 CXX test/cpp_headers/file.o 00:04:26.854 LINK pci_ut 00:04:26.854 LINK nvme_fuzz 00:04:27.113 CC test/lvol/esnap/esnap.o 00:04:27.113 CXX test/cpp_headers/ftl.o 00:04:27.113 LINK vhost_fuzz 00:04:27.113 CXX test/cpp_headers/gpt_spec.o 00:04:27.372 CXX test/cpp_headers/hexlify.o 00:04:27.372 CXX test/cpp_headers/histogram_data.o 00:04:27.372 CC test/nvme/aer/aer.o 00:04:27.631 CXX test/cpp_headers/idxd.o 00:04:27.631 CC examples/bdev/bdevperf/bdevperf.o 00:04:27.631 CXX test/cpp_headers/idxd_spec.o 00:04:27.631 CC examples/blob/hello_world/hello_blob.o 00:04:27.889 LINK aer 00:04:27.889 CXX test/cpp_headers/init.o 00:04:27.889 LINK hello_blob 00:04:27.889 CXX test/cpp_headers/ioat.o 00:04:28.147 CXX test/cpp_headers/ioat_spec.o 00:04:28.147 LINK iscsi_fuzz 00:04:28.147 CC examples/ioat/perf/perf.o 00:04:28.147 CXX test/cpp_headers/iscsi_spec.o 00:04:28.406 LINK ioat_perf 00:04:28.406 CC examples/ioat/verify/verify.o 00:04:28.406 CXX test/cpp_headers/json.o 00:04:28.406 LINK bdevperf 00:04:28.664 CXX test/cpp_headers/jsonrpc.o 00:04:28.664 LINK verify 00:04:28.664 CXX test/cpp_headers/keyring.o 00:04:28.922 CXX test/cpp_headers/keyring_module.o 00:04:28.922 CXX test/cpp_headers/likely.o 00:04:29.181 CC test/nvme/reset/reset.o 00:04:29.181 CC examples/blob/cli/blobcli.o 00:04:29.181 CXX test/cpp_headers/log.o 00:04:29.438 CC test/nvme/sgl/sgl.o 00:04:29.438 CXX test/cpp_headers/lvol.o 00:04:29.438 LINK reset 00:04:29.696 CC test/app/histogram_perf/histogram_perf.o 00:04:29.696 CXX test/cpp_headers/memory.o 00:04:29.696 LINK 
sgl 00:04:29.696 LINK blobcli 00:04:29.696 LINK histogram_perf 00:04:29.696 CXX test/cpp_headers/mmio.o 00:04:29.696 CC test/app/jsoncat/jsoncat.o 00:04:29.955 LINK jsoncat 00:04:29.955 CXX test/cpp_headers/nbd.o 00:04:29.955 CXX test/cpp_headers/notify.o 00:04:29.955 CC app/spdk_tgt/spdk_tgt.o 00:04:30.214 CXX test/cpp_headers/nvme.o 00:04:30.214 LINK spdk_tgt 00:04:30.475 CXX test/cpp_headers/nvme_intel.o 00:04:30.475 CC app/spdk_lspci/spdk_lspci.o 00:04:30.475 CXX test/cpp_headers/nvme_ocssd.o 00:04:30.733 CC test/app/stub/stub.o 00:04:30.733 LINK spdk_lspci 00:04:30.733 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:30.733 LINK stub 00:04:30.733 CC app/spdk_nvme_perf/perf.o 00:04:30.992 CXX test/cpp_headers/nvme_spec.o 00:04:30.992 CC test/nvme/e2edp/nvme_dp.o 00:04:30.992 CXX test/cpp_headers/nvme_zns.o 00:04:31.251 CXX test/cpp_headers/nvmf.o 00:04:31.251 CC examples/nvme/hello_world/hello_world.o 00:04:31.251 LINK nvme_dp 00:04:31.510 CXX test/cpp_headers/nvmf_cmd.o 00:04:31.510 LINK hello_world 00:04:31.510 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:31.768 CXX test/cpp_headers/nvmf_spec.o 00:04:31.768 LINK spdk_nvme_perf 00:04:31.768 CC test/nvme/overhead/overhead.o 00:04:31.768 CXX test/cpp_headers/nvmf_transport.o 00:04:31.768 CC test/nvme/err_injection/err_injection.o 00:04:32.027 LINK overhead 00:04:32.286 CXX test/cpp_headers/opal.o 00:04:32.286 LINK err_injection 00:04:32.286 CC test/nvme/startup/startup.o 00:04:32.286 CXX test/cpp_headers/opal_spec.o 00:04:32.545 LINK startup 00:04:32.545 CXX test/cpp_headers/pci_ids.o 00:04:32.545 CC examples/nvme/reconnect/reconnect.o 00:04:32.545 CXX test/cpp_headers/pipe.o 00:04:32.804 CC app/spdk_nvme_identify/identify.o 00:04:32.804 CXX test/cpp_headers/queue.o 00:04:32.804 CXX test/cpp_headers/reduce.o 00:04:33.063 LINK reconnect 00:04:33.063 CXX test/cpp_headers/rpc.o 00:04:33.063 CC test/nvme/reserve/reserve.o 00:04:33.063 LINK esnap 00:04:33.063 CXX test/cpp_headers/scheduler.o 00:04:33.063 CC test/nvme/simple_copy/simple_copy.o 00:04:33.322 LINK reserve 00:04:33.322 CXX test/cpp_headers/scsi.o 00:04:33.322 LINK simple_copy 00:04:33.581 CC examples/sock/hello_world/hello_sock.o 00:04:33.581 CXX test/cpp_headers/scsi_spec.o 00:04:33.581 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:33.581 CC examples/vmd/lsvmd/lsvmd.o 00:04:33.839 LINK spdk_nvme_identify 00:04:33.839 LINK lsvmd 00:04:33.839 CXX test/cpp_headers/sock.o 00:04:33.839 CC examples/nvme/arbitration/arbitration.o 00:04:33.839 LINK hello_sock 00:04:34.121 CXX test/cpp_headers/stdinc.o 00:04:34.121 CC examples/nvme/hotplug/hotplug.o 00:04:34.121 CXX test/cpp_headers/string.o 00:04:34.121 LINK nvme_manage 00:04:34.121 LINK arbitration 00:04:34.379 LINK hotplug 00:04:34.379 CXX test/cpp_headers/thread.o 00:04:34.379 CC test/nvme/connect_stress/connect_stress.o 00:04:34.379 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:34.637 CXX test/cpp_headers/trace.o 00:04:34.637 LINK connect_stress 00:04:34.637 LINK cmb_copy 00:04:34.637 CC test/rpc_client/rpc_client_test.o 00:04:34.637 CC examples/nvme/abort/abort.o 00:04:34.637 CXX test/cpp_headers/trace_parser.o 00:04:34.895 LINK rpc_client_test 00:04:34.895 CXX test/cpp_headers/tree.o 00:04:34.895 CXX test/cpp_headers/ublk.o 00:04:35.152 LINK abort 00:04:35.152 CXX test/cpp_headers/util.o 00:04:35.152 CC app/spdk_nvme_discover/discovery_aer.o 00:04:35.152 CC examples/vmd/led/led.o 00:04:35.152 CXX test/cpp_headers/uuid.o 00:04:35.152 LINK led 00:04:35.410 LINK spdk_nvme_discover 00:04:35.410 CXX test/cpp_headers/version.o 00:04:35.410 CXX 
test/cpp_headers/vfio_user_pci.o 00:04:35.410 CC app/spdk_top/spdk_top.o 00:04:35.668 CC test/thread/poller_perf/poller_perf.o 00:04:35.926 CC test/nvme/boot_partition/boot_partition.o 00:04:35.926 CXX test/cpp_headers/vfio_user_spec.o 00:04:35.926 LINK poller_perf 00:04:35.926 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:35.926 CC test/thread/lock/spdk_lock.o 00:04:35.926 CXX test/cpp_headers/vhost.o 00:04:35.926 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:04:36.184 LINK boot_partition 00:04:36.184 LINK pmr_persistence 00:04:36.184 CXX test/cpp_headers/vmd.o 00:04:36.184 LINK histogram_ut 00:04:36.184 CC test/nvme/compliance/nvme_compliance.o 00:04:36.442 CXX test/cpp_headers/xor.o 00:04:36.443 CC test/nvme/fused_ordering/fused_ordering.o 00:04:36.443 CXX test/cpp_headers/zipf.o 00:04:36.443 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:36.701 LINK fused_ordering 00:04:36.701 LINK nvme_compliance 00:04:36.701 CC test/unit/lib/accel/accel.c/accel_ut.o 00:04:36.701 CC test/nvme/fdp/fdp.o 00:04:36.702 LINK doorbell_aers 00:04:36.702 CC test/nvme/cuse/cuse.o 00:04:36.960 LINK spdk_top 00:04:36.960 LINK fdp 00:04:37.525 CC examples/nvmf/nvmf/nvmf.o 00:04:37.525 CC examples/util/zipf/zipf.o 00:04:37.783 LINK spdk_lock 00:04:37.783 CC app/vhost/vhost.o 00:04:37.783 LINK zipf 00:04:37.783 LINK nvmf 00:04:38.040 LINK vhost 00:04:38.040 CC app/spdk_dd/spdk_dd.o 00:04:38.040 LINK cuse 00:04:38.040 CC app/fio/nvme/fio_plugin.o 00:04:38.040 CC app/fio/bdev/fio_plugin.o 00:04:38.298 LINK spdk_dd 00:04:38.298 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:04:38.555 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:04:38.555 CC test/unit/lib/blob/blob.c/blob_ut.o 00:04:38.555 CC test/unit/lib/bdev/part.c/part_ut.o 00:04:38.555 LINK spdk_nvme 00:04:38.555 LINK spdk_bdev 00:04:39.486 LINK blob_bdev_ut 00:04:39.486 LINK accel_ut 00:04:39.743 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:04:40.000 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:04:40.000 CC test/unit/lib/dma/dma.c/dma_ut.o 00:04:40.257 LINK tree_ut 00:04:40.822 CC examples/thread/thread/thread_ex.o 00:04:40.822 LINK dma_ut 00:04:41.080 LINK thread 00:04:41.340 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:04:41.340 CC examples/idxd/perf/perf.o 00:04:41.340 LINK blobfs_async_ut 00:04:41.598 LINK scsi_nvme_ut 00:04:41.598 LINK idxd_perf 00:04:41.855 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:04:41.855 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:04:42.113 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:04:42.113 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:04:42.371 LINK part_ut 00:04:42.371 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:04:42.371 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:04:42.371 LINK gpt_ut 00:04:42.629 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:04:42.629 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:04:43.195 LINK bdev_raid_sb_ut 00:04:43.195 LINK blobfs_sync_ut 00:04:43.454 LINK concat_ut 00:04:43.454 LINK raid1_ut 00:04:43.454 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:04:43.454 LINK vbdev_lvol_ut 00:04:43.713 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:04:43.713 LINK bdev_zone_ut 00:04:43.713 LINK bdev_ut 00:04:43.713 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:04:43.970 LINK blobfs_bdev_ut 00:04:43.970 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:04:43.970 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:43.970 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 
00:04:44.230 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:04:44.230 LINK interrupt_tgt 00:04:44.230 CC test/unit/lib/event/app.c/app_ut.o 00:04:44.489 LINK bdev_raid_ut 00:04:44.747 LINK vbdev_zone_block_ut 00:04:44.747 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:04:45.006 LINK raid0_ut 00:04:45.006 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:04:45.006 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:04:45.006 LINK app_ut 00:04:45.265 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:04:45.265 LINK raid5f_ut 00:04:45.523 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:04:45.523 CC test/unit/lib/iscsi/param.c/param_ut.o 00:04:45.523 LINK ioat_ut 00:04:45.782 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:04:45.782 LINK reactor_ut 00:04:45.782 LINK init_grp_ut 00:04:46.041 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:04:46.041 LINK blob_ut 00:04:46.041 LINK bdev_ut 00:04:46.041 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:04:46.041 LINK param_ut 00:04:46.300 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:04:46.300 LINK conn_ut 00:04:46.559 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:04:46.559 LINK json_util_ut 00:04:46.559 CC test/unit/lib/log/log.c/log_ut.o 00:04:46.818 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:04:46.818 LINK portal_grp_ut 00:04:46.818 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:04:46.818 LINK log_ut 00:04:46.818 LINK jsonrpc_server_ut 00:04:47.076 CC test/unit/lib/notify/notify.c/notify_ut.o 00:04:47.077 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:04:47.335 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:04:47.335 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:04:47.335 LINK json_write_ut 00:04:47.336 LINK tgt_node_ut 00:04:47.594 LINK notify_ut 00:04:47.594 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:04:47.594 LINK iscsi_ut 00:04:47.851 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:04:47.851 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:04:48.109 CC test/unit/lib/sock/sock.c/sock_ut.o 00:04:48.367 LINK dev_ut 00:04:48.367 LINK lvol_ut 00:04:48.626 LINK json_parse_ut 00:04:48.626 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:04:48.626 LINK nvme_ut 00:04:48.626 LINK bdev_nvme_ut 00:04:48.883 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:04:48.883 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:04:48.883 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:04:49.141 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:04:49.141 LINK scsi_ut 00:04:49.398 LINK scsi_pr_ut 00:04:49.398 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:04:49.398 LINK lun_ut 00:04:49.655 LINK subsystem_ut 00:04:49.655 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:04:49.912 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:04:49.912 LINK scsi_bdev_ut 00:04:49.912 LINK sock_ut 00:04:50.169 LINK nvme_ctrlr_cmd_ut 00:04:50.169 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:04:50.169 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:04:50.427 CC test/unit/lib/sock/posix.c/posix_ut.o 00:04:50.427 LINK nvme_ctrlr_ocssd_cmd_ut 00:04:50.427 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:04:50.684 LINK nvme_ns_ut 00:04:50.684 LINK ctrlr_ut 00:04:50.684 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:04:50.684 LINK nvme_ctrlr_ut 00:04:50.942 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:04:51.200 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:04:51.200 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:04:51.459 LINK posix_ut 00:04:51.459 LINK tcp_ut 00:04:51.718 LINK ctrlr_bdev_ut 00:04:51.718 
LINK ctrlr_discovery_ut 00:04:51.718 LINK nvme_ns_ocssd_cmd_ut 00:04:51.976 CC test/unit/lib/thread/thread.c/thread_ut.o 00:04:51.976 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:04:51.976 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:04:52.234 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:04:52.234 LINK nvme_ns_cmd_ut 00:04:52.234 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:04:52.234 LINK nvmf_ut 00:04:52.492 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:04:52.750 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:04:53.008 LINK auth_ut 00:04:53.008 LINK nvme_quirks_ut 00:04:53.008 LINK iobuf_ut 00:04:53.267 LINK nvme_poll_group_ut 00:04:53.267 LINK nvme_pcie_ut 00:04:53.267 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:04:53.267 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:04:53.525 LINK nvme_qpair_ut 00:04:53.525 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:04:53.525 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:04:53.525 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:04:53.783 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:04:54.373 LINK thread_ut 00:04:54.373 LINK nvme_transport_ut 00:04:54.373 LINK nvme_io_msg_ut 00:04:54.373 LINK nvme_opal_ut 00:04:54.374 LINK nvme_fabric_ut 00:04:54.632 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:04:54.632 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:04:54.632 CC test/unit/lib/util/base64.c/base64_ut.o 00:04:54.891 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:04:54.891 LINK nvme_pcie_common_ut 00:04:54.891 LINK base64_ut 00:04:54.891 LINK rdma_ut 00:04:54.891 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:04:55.148 LINK pci_event_ut 00:04:55.148 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:04:55.148 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:04:55.406 LINK transport_ut 00:04:55.406 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:04:55.406 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:04:55.665 LINK subsystem_ut 00:04:55.665 LINK crc16_ut 00:04:55.665 LINK nvme_tcp_ut 00:04:55.665 LINK cpuset_ut 00:04:55.924 LINK rpc_ut 00:04:55.924 LINK bit_array_ut 00:04:55.924 LINK rpc_ut 00:04:55.924 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:04:55.924 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:04:55.924 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:04:55.924 LINK crc32_ieee_ut 00:04:55.924 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:04:56.181 LINK nvme_rdma_ut 00:04:56.181 LINK crc64_ut 00:04:56.181 LINK crc32c_ut 00:04:56.181 CC test/unit/lib/util/dif.c/dif_ut.o 00:04:56.181 CC test/unit/lib/util/iov.c/iov_ut.o 00:04:56.181 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:04:56.181 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:04:56.439 CC test/unit/lib/util/math.c/math_ut.o 00:04:56.439 LINK iov_ut 00:04:56.439 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:04:56.439 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:04:56.439 LINK nvme_cuse_ut 00:04:56.439 LINK keyring_ut 00:04:56.696 LINK math_ut 00:04:56.696 CC test/unit/lib/rdma/common.c/common_ut.o 00:04:56.696 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:04:56.951 LINK idxd_user_ut 00:04:56.951 CC test/unit/lib/util/string.c/string_ut.o 00:04:56.951 CC test/unit/lib/util/xor.c/xor_ut.o 00:04:56.951 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:04:57.208 LINK pipe_ut 00:04:57.208 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:04:57.208 LINK string_ut 00:04:57.208 LINK dif_ut 00:04:57.208 LINK common_ut 00:04:57.208 LINK ftl_l2p_ut 00:04:57.466 LINK xor_ut 00:04:57.466 
LINK idxd_ut 00:04:57.466 CC test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut.o 00:04:57.466 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:04:57.466 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:04:57.724 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:04:57.724 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:04:57.724 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:04:57.724 LINK ftl_bitmap_ut 00:04:57.980 LINK ftl_io_ut 00:04:57.980 LINK ftl_mempool_ut 00:04:58.237 LINK ftl_mngt_ut 00:04:58.495 LINK ftl_band_ut 00:04:58.495 LINK ftl_p2l_ut 00:04:58.752 LINK vhost_ut 00:04:59.010 LINK ftl_layout_upgrade_ut 00:04:59.010 LINK ftl_sb_ut 00:04:59.267 00:04:59.267 real 1m48.396s 00:04:59.267 user 8m41.795s 00:04:59.267 sys 2m7.599s 00:04:59.267 07:15:33 unittest_build -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:59.267 07:15:33 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:04:59.267 ************************************ 00:04:59.267 END TEST unittest_build 00:04:59.267 ************************************ 00:04:59.267 07:15:33 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:59.267 07:15:33 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:59.267 07:15:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:59.267 07:15:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.267 07:15:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:59.523 07:15:33 -- pm/common@44 -- $ pid=2878 00:04:59.523 07:15:33 -- pm/common@50 -- $ kill -TERM 2878 00:04:59.523 07:15:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.523 07:15:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:59.523 07:15:33 -- pm/common@44 -- $ pid=2880 00:04:59.523 07:15:33 -- pm/common@50 -- $ kill -TERM 2880 00:04:59.523 07:15:33 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:59.523 07:15:33 -- nvmf/common.sh@7 -- # uname -s 00:04:59.523 07:15:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.523 07:15:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.523 07:15:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.523 07:15:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.523 07:15:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.523 07:15:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.523 07:15:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.523 07:15:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.523 07:15:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.523 07:15:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.523 07:15:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae57de1a-28d1-4dd4-b1a6-5ff45b2db4b2 00:04:59.523 07:15:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae57de1a-28d1-4dd4-b1a6-5ff45b2db4b2 00:04:59.523 07:15:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.523 07:15:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.523 07:15:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.523 07:15:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.523 07:15:33 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:59.523 07:15:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.523 
07:15:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.523 07:15:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.523 07:15:33 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:59.523 07:15:33 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:59.523 07:15:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:59.523 07:15:33 -- paths/export.sh@5 -- # export PATH 00:04:59.523 07:15:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:59.523 07:15:33 -- nvmf/common.sh@47 -- # : 0 00:04:59.523 07:15:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:59.523 07:15:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:59.523 07:15:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.523 07:15:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.523 07:15:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.523 07:15:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:59.524 07:15:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:59.524 07:15:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:59.524 07:15:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:59.524 07:15:33 -- spdk/autotest.sh@32 -- # uname -s 00:04:59.524 07:15:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:59.524 07:15:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:04:59.524 07:15:33 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:59.524 07:15:33 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:59.524 07:15:33 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:59.524 07:15:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:59.524 07:15:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:59.524 07:15:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:04:59.524 07:15:33 -- spdk/autotest.sh@48 -- # udevadm_pid=112415 00:04:59.524 07:15:33 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:04:59.524 07:15:33 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:59.524 07:15:33 -- pm/common@17 -- # local monitor 00:04:59.524 07:15:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.524 07:15:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.524 07:15:33 -- pm/common@25 -- # sleep 1 00:04:59.524 07:15:33 -- pm/common@21 -- # date +%s 00:04:59.524 07:15:33 -- pm/common@21 -- # date +%s 00:04:59.524 07:15:33 
-- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720768533 00:04:59.524 07:15:33 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720768533 00:04:59.524 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720768533_collect-vmstat.pm.log 00:04:59.524 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720768533_collect-cpu-load.pm.log 00:05:00.457 07:15:34 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:00.457 07:15:34 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:00.457 07:15:34 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:00.457 07:15:34 -- common/autotest_common.sh@10 -- # set +x 00:05:00.457 07:15:34 -- spdk/autotest.sh@59 -- # create_test_list 00:05:00.457 07:15:34 -- common/autotest_common.sh@744 -- # xtrace_disable 00:05:00.457 07:15:34 -- common/autotest_common.sh@10 -- # set +x 00:05:00.716 07:15:34 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:00.716 07:15:34 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:00.716 07:15:34 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:00.716 07:15:34 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:00.716 07:15:34 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:00.716 07:15:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:00.716 07:15:34 -- common/autotest_common.sh@1451 -- # uname 00:05:00.716 07:15:34 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:05:00.716 07:15:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:00.716 07:15:34 -- common/autotest_common.sh@1471 -- # uname 00:05:00.716 07:15:34 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:05:00.716 07:15:34 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:05:00.716 07:15:34 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:05:00.716 07:15:34 -- spdk/autotest.sh@72 -- # hash lcov 00:05:00.716 07:15:34 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:00.716 07:15:34 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:05:00.716 --rc lcov_branch_coverage=1 00:05:00.716 --rc lcov_function_coverage=1 00:05:00.716 --rc genhtml_branch_coverage=1 00:05:00.716 --rc genhtml_function_coverage=1 00:05:00.716 --rc genhtml_legend=1 00:05:00.716 --rc geninfo_all_blocks=1 00:05:00.716 ' 00:05:00.716 07:15:34 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:05:00.716 --rc lcov_branch_coverage=1 00:05:00.716 --rc lcov_function_coverage=1 00:05:00.716 --rc genhtml_branch_coverage=1 00:05:00.716 --rc genhtml_function_coverage=1 00:05:00.716 --rc genhtml_legend=1 00:05:00.716 --rc geninfo_all_blocks=1 00:05:00.716 ' 00:05:00.716 07:15:34 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:05:00.716 --rc lcov_branch_coverage=1 00:05:00.716 --rc lcov_function_coverage=1 00:05:00.716 --rc genhtml_branch_coverage=1 00:05:00.716 --rc genhtml_function_coverage=1 00:05:00.716 --rc genhtml_legend=1 00:05:00.716 --rc geninfo_all_blocks=1 00:05:00.716 --no-external' 00:05:00.716 07:15:34 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:05:00.716 --rc lcov_branch_coverage=1 00:05:00.716 --rc lcov_function_coverage=1 00:05:00.716 --rc genhtml_branch_coverage=1 00:05:00.716 --rc 
genhtml_function_coverage=1 00:05:00.716 --rc genhtml_legend=1 00:05:00.716 --rc geninfo_all_blocks=1 00:05:00.716 --no-external' 00:05:00.716 07:15:34 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:05:00.716 lcov: LCOV version 1.15 00:05:00.716 07:15:34 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:07.278 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:07.278 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:53.983 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:53.983 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:53.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:53.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:53.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:53.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:53.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:53.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:53.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:53.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:53.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:53.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:53.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:53.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:53.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:53.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:53.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:53.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:53.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:53.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:53.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:53.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:53.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:53.984 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:53.984 [geninfo emitted the same "no functions found" / "GCOV did not produce any data" warning pair for dozens more test/cpp_headers/*.gcno header stubs; identical entries trimmed] 00:05:53.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:53.985 geninfo: WARNING: GCOV did not
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:53.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:53.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:53.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:53.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:53.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:53.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:53.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:53.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:53.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:53.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:53.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:53.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:53.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:53.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:53.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:53.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:53.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:53.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:53.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:53.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:53.985 07:16:23 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:53.985 07:16:23 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:53.985 07:16:23 -- common/autotest_common.sh@10 -- # set +x 00:05:53.985 07:16:23 -- spdk/autotest.sh@91 -- # rm -f 00:05:53.985 07:16:23 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:53.985 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:53.985 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:53.985 07:16:24 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:53.985 07:16:24 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:53.985 07:16:24 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:53.985 07:16:24 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:53.985 07:16:24 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:53.985 07:16:24 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:53.985 07:16:24 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:53.985 07:16:24 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:53.985 07:16:24 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:53.985 07:16:24 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 
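(Aside: the get_zoned_devs/is_block_zoned trace just above reduces to a short sysfs idiom. The sketch below is a hedged reconstruction from the xtrace, not SPDK's verbatim helper source; the variable names and the final echo are illustrative only.)

    #!/usr/bin/env bash
    # Sketch of the zoned-device scan seen in the trace: a block device
    # counts as zoned when /sys/block/<dev>/queue/zoned reads anything
    # other than "none" (e.g. "host-managed").
    declare -A zoned_devs=()
    for path in /sys/block/nvme*; do
        dev=${path##*/}
        # Kernels without zoned-block support do not expose this attribute.
        [[ -e $path/queue/zoned ]] || continue
        if [[ $(<"$path/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1   # the real helper appears to also track the PCI bdf
        fi
    done
    # Mirrors the trace's final "(( 0 > 0 ))" check: here no device was zoned.
    (( ${#zoned_devs[@]} > 0 )) && echo "zoned devices: ${!zoned_devs[@]}"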
00:05:53.985 07:16:24 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:53.985 07:16:24 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:53.985 07:16:24 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:53.985 07:16:24 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:53.985 07:16:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:53.985 No valid GPT data, bailing 00:05:53.985 07:16:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:53.985 07:16:24 -- scripts/common.sh@391 -- # pt= 00:05:53.985 07:16:24 -- scripts/common.sh@392 -- # return 1 00:05:53.985 07:16:24 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:53.985 1+0 records in 00:05:53.985 1+0 records out 00:05:53.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00665199 s, 158 MB/s 00:05:53.985 07:16:24 -- spdk/autotest.sh@118 -- # sync 00:05:53.985 07:16:24 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:53.985 07:16:24 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:53.985 07:16:24 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:53.985 07:16:26 -- spdk/autotest.sh@124 -- # uname -s 00:05:53.985 07:16:26 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:53.985 07:16:26 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:53.985 07:16:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:53.985 07:16:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.985 07:16:26 -- common/autotest_common.sh@10 -- # set +x 00:05:53.985 ************************************ 00:05:53.985 START TEST setup.sh 00:05:53.985 ************************************ 00:05:53.985 07:16:26 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:53.985 * Looking for test storage... 00:05:53.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:53.985 07:16:26 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:53.985 07:16:26 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:53.985 07:16:26 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:53.985 07:16:26 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:53.985 07:16:26 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.985 07:16:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:53.985 ************************************ 00:05:53.986 START TEST acl 00:05:53.986 ************************************ 00:05:53.986 07:16:26 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:53.986 * Looking for test storage... 
00:05:53.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:53.986 07:16:26 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:53.986 07:16:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:53.986 07:16:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:53.986 07:16:26 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:53.986 07:16:26 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:53.986 07:16:26 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:53.986 07:16:26 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:53.986 07:16:26 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:53.986 07:16:26 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:53.986 07:16:26 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:53.986 07:16:26 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:53.986 07:16:26 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:53.986 07:16:26 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:53.986 07:16:26 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:53.986 07:16:26 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:53.986 07:16:26 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:53.986 07:16:27 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:53.986 07:16:27 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:53.986 07:16:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.986 07:16:27 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:53.986 07:16:27 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:53.986 07:16:27 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:53.986 07:16:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:53.986 07:16:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:53.986 07:16:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.986 Hugepages 00:05:53.986 node hugesize free / total 00:05:53.986 07:16:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:53.986 07:16:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:53.986 07:16:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.986 00:05:53.986 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:53.986 07:16:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:53.986 07:16:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:53.986 07:16:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.986 07:16:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:53.986 07:16:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:53.986 07:16:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:53.986 07:16:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.244 07:16:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:54.244 07:16:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:54.244 07:16:27 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:54.244 07:16:27 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:54.244 
07:16:27 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:54.244 07:16:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:54.244 07:16:27 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:54.244 07:16:27 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:54.244 07:16:27 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.244 07:16:27 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.244 07:16:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:54.244 ************************************ 00:05:54.244 START TEST denied 00:05:54.244 ************************************ 00:05:54.244 07:16:27 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:05:54.244 07:16:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:54.244 07:16:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:54.244 07:16:27 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:54.244 07:16:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:54.244 07:16:27 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:55.622 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:55.622 07:16:29 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:55.622 07:16:29 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:55.622 07:16:29 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:55.622 07:16:29 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:55.622 07:16:29 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:55.622 07:16:29 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:55.622 07:16:29 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:55.622 07:16:29 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:55.622 07:16:29 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:55.622 07:16:29 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:56.189 ************************************ 00:05:56.189 END TEST denied 00:05:56.189 ************************************ 00:05:56.189 00:05:56.189 real 0m2.009s 00:05:56.189 user 0m0.530s 00:05:56.189 sys 0m1.550s 00:05:56.189 07:16:29 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.189 07:16:29 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:56.189 07:16:30 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:56.189 07:16:30 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.189 07:16:30 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.189 07:16:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:56.189 ************************************ 00:05:56.189 START TEST allowed 00:05:56.189 ************************************ 00:05:56.189 07:16:30 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:05:56.189 07:16:30 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:56.189 07:16:30 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:56.189 07:16:30 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E 
'0000:00:10.0 .*: nvme -> .*' 00:05:56.189 07:16:30 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:56.189 07:16:30 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:58.094 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:58.094 07:16:31 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:58.094 07:16:31 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:58.094 07:16:31 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:58.094 07:16:31 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:58.094 07:16:31 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:58.352 00:05:58.352 real 0m2.178s 00:05:58.352 user 0m0.502s 00:05:58.352 sys 0m1.690s 00:05:58.352 07:16:32 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.352 07:16:32 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:58.352 ************************************ 00:05:58.352 END TEST allowed 00:05:58.352 ************************************ 00:05:58.613 00:05:58.613 real 0m5.804s 00:05:58.613 user 0m1.719s 00:05:58.613 sys 0m4.266s 00:05:58.613 07:16:32 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.613 ************************************ 00:05:58.613 END TEST acl 00:05:58.613 07:16:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:58.613 ************************************ 00:05:58.613 07:16:32 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:58.613 07:16:32 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:58.613 07:16:32 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.613 07:16:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:58.613 ************************************ 00:05:58.613 START TEST hugepages 00:05:58.613 ************************************ 00:05:58.613 07:16:32 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:58.613 * Looking for test storage... 
00:05:58.613 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 1418456 kB' 'MemAvailable: 7386916 kB' 'Buffers: 46064 kB' 'Cached: 6007556 kB' 'SwapCached: 0 kB' 'Active: 1664116 kB' 'Inactive: 4512504 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 133628 kB' 'Active(file): 1663060 kB' 'Inactive(file): 4378876 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 152208 kB' 'Mapped: 68964 kB' 'Shmem: 2600 kB' 'KReclaimable: 248040 kB' 'Slab: 320196 kB' 'SReclaimable: 248040 kB' 'SUnreclaim: 72156 kB' 'KernelStack: 5080 kB' 'PageTables: 3824 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4024332 kB' 'Committed_AS: 517320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB' 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.613 
07:16:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.613 07:16:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.613 [the same compare/continue xtrace pair repeats for every remaining /proc/meminfo field until the HugePages_* block; identical lines trimmed] 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:58.614 07:16:32 setup.sh.hugepages -- 
00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:58.614 07:16:32 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:05:58.614 07:16:32 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:58.614 07:16:32 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:58.614 07:16:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:58.614 ************************************
00:05:58.614 START TEST default_setup
00:05:58.614 ************************************
00:05:58.614 07:16:32 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup
00:05:58.614 07:16:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:05:58.614 07:16:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:05:58.614 07:16:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:58.614 07:16:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:05:58.614 07:16:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:58.614 07:16:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:05:58.614 07:16:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:58.614 07:16:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:58.614 07:16:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:58.614 07:16:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:58.615 07:16:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:05:58.615 07:16:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:58.615 07:16:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:58.615 07:16:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:58.615 07:16:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:58.615 07:16:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:58.615 07:16:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:58.615 07:16:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:58.615 07:16:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:05:58.615 07:16:32 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:05:58.615 07:16:32 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:05:58.615 07:16:32 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:59.182 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:59.441 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:00.380 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:06:00.380 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
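[editor's note: get_test_nr_hugepages above turns the requested test size into a page count. A worked sketch of the arithmetic traced at hugepages.sh@49-57; the variable names follow the trace, the standalone script form is an assumption:]

    size=2097152             # requested size in kB (2 GiB), from get_test_nr_hugepages 2097152 0
    default_hugepages=2048   # Hugepagesize in kB, as read from /proc/meminfo earlier
    (( size >= default_hugepages )) || exit 1
    nr_hugepages=$(( size / default_hugepages ))
    echo "nr_hugepages=$nr_hugepages"   # 2097152 / 2048 = 1024, matching the trace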
00:06:00.380 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:06:00.380 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:06:00.380 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:06:00.380 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:06:00.380 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:06:00.380 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:00.380 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:00.380 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:00.380 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:06:00.380 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:00.380 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:00.380 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:00.380 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:00.380 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:00.380 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:00.380 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:00.380 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:00.380 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:00.380 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3499572 kB' 'MemAvailable: 9468112 kB' 'Buffers: 46064 kB' 'Cached: 6007564 kB' 'SwapCached: 0 kB' 'Active: 1664168 kB' 'Inactive: 4526816 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 147948 kB' 'Active(file): 1663096 kB' 'Inactive(file): 4378868 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 708 kB' 'Writeback: 0 kB' 'AnonPages: 166584 kB' 'Mapped: 69316 kB' 'Shmem: 2596 kB' 'KReclaimable: 248092 kB' 'Slab: 320576 kB' 'SReclaimable: 248092 kB' 'SUnreclaim: 72484 kB' 'KernelStack: 4964 kB' 'PageTables: 3660 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 528616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB'
[repetitive xtrace elided: the read loop skipped every key from MemTotal through HardwareCorrupted before reaching AnonHugePages]
00:06:00.382 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:00.382 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:00.382 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:00.382 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:06:00.382 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[setup/common.sh@17-31 prologue records identical to the first get_meminfo call, this time with get=HugePages_Surp]
00:06:00.382 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3499824 kB' 'MemAvailable: 9468364 kB' 'Buffers: 46064 kB' 'Cached: 6007564 kB' 'SwapCached: 0 kB' 'Active: 1664168 kB' 'Inactive: 4526452 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 147584 kB' 'Active(file): 1663096 kB' 'Inactive(file): 4378868 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 708 kB' 'Writeback: 0 kB' 'AnonPages: 166240 kB' 'Mapped: 69316 kB' 'Shmem: 2596 kB' 'KReclaimable: 248092 kB' 'Slab: 320576 kB' 'SReclaimable: 248092 kB' 'SUnreclaim: 72484 kB' 'KernelStack: 4948 kB' 'PageTables: 3624 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 528616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB'
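[editor's note: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at hugepages.sh@96 checks the kernel's transparent-hugepage mode before sampling AnonHugePages. A short sketch of that gathering step, reconstructed from the trace rather than the script itself; it relies on the get_meminfo helper sketched earlier, and the control flow is an assumption:]

    # The THP mode string looks like "always [madvise] never"; the bracketed word is active.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP is enabled in some form, so anonymous huge pages may exist;
        # sample them so they can be excluded from the hugetlb accounting.
        anon=$(get_meminfo AnonHugePages)
    fi
    echo "anon=$anon"   # -> anon=0 in this run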
[repetitive xtrace elided: the read loop skipped every key from MemTotal through HugePages_Rsvd before reaching HugePages_Surp]
00:06:00.383 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:00.383 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:00.383 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:00.383 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:06:00.383 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[setup/common.sh@17-31 prologue records identical to the first get_meminfo call, this time with get=HugePages_Rsvd]
00:06:00.383 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3499824 kB' 'MemAvailable: 9468364 kB' 'Buffers: 46064 kB' 'Cached: 6007564 kB' 'SwapCached: 0 kB' 'Active: 1664160 kB' 'Inactive: 4526404 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 147536 kB' 'Active(file): 1663096 kB' 'Inactive(file): 4378868 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 716 kB' 'Writeback: 0 kB' 'AnonPages: 165956 kB' 'Mapped: 69280 kB' 'Shmem: 2596 kB' 'KReclaimable: 248092 kB' 'Slab: 320600 kB' 'SReclaimable: 248092 kB' 'SUnreclaim: 72508 kB' 'KernelStack: 4980 kB' 'PageTables: 3700 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 528616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB'
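[editor's note: with anon and surp gathered, verify_nr_hugepages finishes by sampling HugePages_Rsvd and HugePages_Total and checking that the pool adds up. A sketch of that tail (hugepages.sh@99-110), reconstructed from the trace under the assumption that the traced comparisons are the whole check; it reuses get_meminfo, nr_hugepages, and anon from the sketches above:]

    surp=$(get_meminfo HugePages_Surp)   # pages the kernel allocated beyond the configured pool
    resv=$(get_meminfo HugePages_Rsvd)   # pages reserved by mappings but not yet faulted in
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # The kernel's view must match the test's request: here 1024 == 1024 + 0 + 0.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))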
[repetitive xtrace elided: the read loop skipped every key from MemTotal through HugePages_Free before reaching HugePages_Rsvd]
00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:06:00.385 nr_hugepages=1024
00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:06:00.385 resv_hugepages=0
00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:00.385 surplus_hugepages=0
00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:00.385 anon_hugepages=0
00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[setup/common.sh@17-31 prologue records identical to the first get_meminfo call, this time with get=HugePages_Total]
00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3499824 kB' 'MemAvailable: 9468364 kB' 'Buffers: 46064 kB' 'Cached: 6007564 kB' 'SwapCached: 0 kB' 'Active: 1664160 kB' 'Inactive: 4526160 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 147292 kB' 'Active(file): 1663096 kB' 'Inactive(file): 4378868 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 
716 kB' 'Writeback: 0 kB' 'AnonPages: 165680 kB' 'Mapped: 69280 kB' 'Shmem: 2596 kB' 'KReclaimable: 248092 kB' 'Slab: 320600 kB' 'SReclaimable: 248092 kB' 'SUnreclaim: 72508 kB' 'KernelStack: 4848 kB' 'PageTables: 3548 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 528616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB' 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:00.385 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
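The loop traced above is setup/common.sh's get_meminfo helper scanning a captured copy of /proc/meminfo (or a per-node sysfs meminfo) field by field until it hits the requested key. A minimal standalone sketch of the same technique, reconstructed from the xtrace rather than copied from the SPDK source:

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern visible in the trace above; a
# reconstruction under stated assumptions, not the verbatim SPDK helper.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
	local get=$1 node=${2:-} var val _ line
	local mem_f=/proc/meminfo mem
	# Per-node statistics live under sysfs when a node id is given.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Per-node lines read "Node 0 MemTotal: ..."; strip that prefix so the
	# field name is always the first token.
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		# This compare/continue step is what dominates the trace.
		[[ $var == "$get" ]] && { echo "$val"; return 0; }
	done
	return 1
}

get_meminfo HugePages_Total     # -> 1024 on the machine in this run
get_meminfo HugePages_Surp 0    # same field, restricted to NUMA node 0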
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3499824 kB' 'MemUsed: 8743148 kB' 'SwapCached: 0 kB' 'Active: 1664160 kB' 'Inactive: 4526328 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 147460 kB' 'Active(file): 1663096 kB' 'Inactive(file): 4378868 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 716 kB' 'Writeback: 0 kB' 'FilePages: 6053628 kB' 'Mapped: 69280 kB' 'AnonPages: 165848 kB' 'Shmem: 2596 kB' 'KernelStack: 4884 kB' 'PageTables: 3468 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 248092 kB' 'Slab: 320600 kB' 'SReclaimable: 248092 kB' 'SUnreclaim: 72508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:00.387 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... field-by-field scan of the node0 snapshot elided (MemFree through HugePages_Free); none matches HugePages_Surp ...]
00:06:00.388 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:00.388 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:00.388 07:16:34 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
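What the surrounding hugepages.sh logic is checking: the kernel's pool must equal what the test requested plus surplus and reserved pages, and the per-node sysfs view must agree. A compact restatement of that accounting, using the get_meminfo sketch above (the 1024/0/0 values are the ones observed in this run):

# Accounting identity verified around hugepages.sh@107/@110 (sketch):
nr_hugepages=1024                          # what default_setup requested
surp=$(get_meminfo HugePages_Surp)         # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)         # 0 in this run
total=$(get_meminfo HugePages_Total)       # 1024 in this run

(( total == nr_hugepages + surp + resv )) || echo "pool size mismatch" >&2

# Per-node cross-check, mirroring the get_nodes/nodes_sys loop in the trace:
shopt -s extglob nullglob
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
	nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
done
echo "node0=${nodes_sys[0]} expecting $nr_hugepages"   # node0=1024 expecting 1024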
00:06:00.388 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
node0=1024 expecting 1024
************************************
END TEST default_setup
************************************
00:06:00.388 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:00.388 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:00.388 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:00.388 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:06:00.388 07:16:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:06:00.388
00:06:00.388 real 0m1.682s
00:06:00.388 user 0m0.375s
00:06:00.388 sys 0m1.317s
00:06:00.388 07:16:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:00.388 07:16:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:06:00.388 07:16:34 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:06:00.388 07:16:34 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:00.388 07:16:34 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:00.388 07:16:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST per_node_1G_alloc
************************************
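In the per_node_1G_alloc trace that follows, get_test_nr_hugepages 1048576 0 turns a size in kB into a page count: 1 GiB on a machine whose default hugepage size is 2048 kB comes out to 512 pages (hence nr_hugepages=512 at hugepages.sh@57). The computation, spelled out as a sketch rather than the SPDK function itself:

# How nr_hugepages=512 is derived (reconstruction):
size_kb=1048576                                   # 1 GiB, expressed in kB
hugepagesize_kb=$(get_meminfo Hugepagesize)       # 2048 kB on this machine
echo $(( size_kb / hugepagesize_kb ))             # -> 512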
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:00.388 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:00.955 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:06:01.217 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
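scripts/setup.sh, invoked above with NRHUGE=512 HUGENODE=0, also handles PCI device binding (the two lines it printed); the hugepage portion of what it arranges amounts to a per-node sysfs write. A hand-rolled equivalent, as a sketch; the path assumes the 2048 kB default page size shown in the meminfo snapshots:

# Reserve 512 x 2 MiB hugepages on NUMA node 0, as NRHUGE/HUGENODE request:
NRHUGE=512 HUGENODE=0
echo "$NRHUGE" | sudo tee \
	"/sys/devices/system/node/node${HUGENODE}/hugepages/hugepages-2048kB/nr_hugepages"
# Confirm the kernel actually granted them:
grep HugePages_Total /proc/meminfo    # HugePages_Total: 512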
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4547060 kB' 'MemAvailable: 10515608 kB' 'Buffers: 46064 kB' 'Cached: 6007568 kB' 'SwapCached: 0 kB' 'Active: 1664184 kB' 'Inactive: 4526244 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 147392 kB' 'Active(file): 1663120 kB' 'Inactive(file): 4378852 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 720 kB' 'Writeback: 0 kB' 'AnonPages: 166032 kB' 'Mapped: 69284 kB' 'Shmem: 2596 kB' 'KReclaimable: 248092 kB' 'Slab: 320704 kB' 'SReclaimable: 248092 kB' 'SUnreclaim: 72612 kB' 'KernelStack: 5024 kB' 'PageTables: 3768 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 528616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB'
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:01.217 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... field-by-field scan elided (MemFree through HardwareCorrupted); none matches AnonHugePages ...]
00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4547060 kB' 'MemAvailable: 10515608 kB' 'Buffers: 46064 kB' 'Cached: 6007568 kB' 'SwapCached: 0 kB' 'Active: 1664184 kB' 'Inactive: 4526472 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 147620 kB' 'Active(file): 1663120 kB' 'Inactive(file): 4378852 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 720 kB' 'Writeback: 0 kB' 'AnonPages: 166260 kB' 'Mapped: 69284 kB' 'Shmem: 2596 kB' 'KReclaimable: 248092 kB' 'Slab: 320704 kB' 'SReclaimable: 248092 kB' 'SUnreclaim: 72612 kB' 'KernelStack: 5008 kB' 'PageTables: 3728 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 528616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20540 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB' 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.219 07:16:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
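The xtrace above is all one pattern: the test's get_meminfo helper reads a meminfo file into an array, strips any "Node N " prefix, then splits each "Key: value kB" record on IFS=': ' and echoes the value once the requested key matches; every non-matching key shows up in the trace as a [[ ... ]] test followed by "continue". A minimal standalone sketch of that scan loop (hypothetical helper name, not the actual setup/common.sh source):

#!/usr/bin/env bash
# Minimal sketch of the get_meminfo scan traced above (illustrative only).
# IFS=': ' splits "MemTotal:   12242972 kB" into var=MemTotal,
# val=12242972, with the trailing "kB" falling into "_".
get_meminfo_sketch() {
    local get=$1
    local var val _
    while IFS=': ' read -r var val _; do
        # Every key that is not the requested one is skipped -- this is
        # the long run of "[[ <key> == ... ]] / continue" pairs above.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch AnonHugePages    # prints 0 on this test VM
get_meminfo_sketch HugePages_Total  # prints 512 after the 2048 kB-page setup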
[repetitive xtrace elided: the scan walks MemAvailable through HugePages_Rsvd against HugePages_Surp; each mismatch takes "continue"]
00:06:01.221 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:01.221 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:06:01.221 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:01.221 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:06:01.221 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:01.221 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:01.221 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:06:01.221 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:01.221 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:01.221 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:01.221 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:01.221 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:01.221 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:01.221 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:01.221 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:01.221 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:01.221 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4547572 kB' 'MemAvailable: 10516120 kB' 'Buffers: 46072 kB' 'Cached: 6007560 kB' 'SwapCached: 0 kB' 'Active: 1664176 kB' 'Inactive: 4526684 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 147832 kB' 'Active(file): 1663120 kB' 'Inactive(file): 4378852 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 166212 kB' 'Mapped: 69188 kB' 'Shmem: 2596 kB' 'KReclaimable: 248092 kB' 'Slab: 320708 kB' 'SReclaimable: 248092 kB' 'SUnreclaim: 72616 kB' 'KernelStack: 4952 kB' 'PageTables: 3676 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 528616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB'
[repetitive xtrace elided: the scan for HugePages_Rsvd begins; MemTotal through Inactive mismatch and take "continue"]
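Before each scan the trace probes /sys/devices/system/node/node/meminfo: the "node=" local is empty in these calls, so the probe path collapses to a file that never exists and the helper falls back to the system-wide /proc/meminfo. A sketch of that selection step (hypothetical function name, assumed from the @18-@25 trace lines):

#!/usr/bin/env bash
# Sketch of the mem_f selection visible at setup/common.sh@18-25.
# With an empty node argument the probe path degenerates to
# /sys/devices/system/node/node/meminfo, which never exists, so the
# global /proc/meminfo is kept.
pick_meminfo_file() {
    local node=$1
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    echo "$mem_f"
}

pick_meminfo_file ""   # -> /proc/meminfo (the case in this trace)
pick_meminfo_file 0    # -> node-local meminfo when node0 is present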
[repetitive xtrace elided: the scan continues through Active(anon) ... HugePages_Free against HugePages_Rsvd; each mismatch takes "continue"]
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:06:01.223 nr_hugepages=512
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:01.223 resv_hugepages=0
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:01.223 surplus_hugepages=0
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:01.223 anon_hugepages=0
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
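With anon=0, surp=0, and resv=0 collected, hugepages.sh echoes the counters and sanity-checks the allocation: HugePages_Total must equal the requested page count plus surplus and reserved pages. A condensed sketch of that accounting (reuses the hypothetical get_meminfo_sketch helper from above; not the hugepages.sh source):

# Condensed sketch of the hugepages.sh@97-109 accounting traced above.
nr_hugepages=512   # 512 x 2048 kB pages = the 1G-per-node target of this test
anon=$(get_meminfo_sketch AnonHugePages)
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"
# The kernel must have granted exactly what was requested:
(( $(get_meminfo_sketch HugePages_Total) == nr_hugepages + surp + resv ))
(( $(get_meminfo_sketch HugePages_Total) == nr_hugepages ))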
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:01.223 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4547572 kB' 'MemAvailable: 10516120 kB' 'Buffers: 46072 kB' 'Cached: 6007572 kB' 'SwapCached: 0 kB' 'Active: 1664184 kB' 'Inactive: 4526824 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 147968 kB' 'Active(file): 1663120 kB' 'Inactive(file): 4378856 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 166380 kB' 'Mapped: 69188 kB' 'Shmem: 2596 kB' 'KReclaimable: 248088 kB' 'Slab: 320704 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72616 kB' 'KernelStack: 4984 kB' 'PageTables: 3748 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 528616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB'
[repetitive xtrace elided: the scan for HugePages_Total begins; MemTotal through Shmem each mismatch and take "continue"; the log breaks off mid-scan here]
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.224 07:16:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.224 
07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.224 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:01.225 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:06:01.225 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:01.225 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:01.225 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:01.225 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:06:01.225 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:01.225 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:01.225 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:01.225 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:01.225 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:01.225 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:01.225 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:01.225 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:01.225 07:16:35 
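The get_meminfo trace above reduces to a small, reusable pattern: slurp a meminfo file into an array, strip any "Node N " prefix, then scan for one key and print its value column. A minimal standalone sketch of that pattern, assuming bash with extglob (names are illustrative, not the actual SPDK setup/common.sh source):

    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Illustrative re-creation of the lookup seen in the trace; prints the
    # value column for one meminfo key, optionally from a per-node file.
    get_meminfo_sketch() {
        local get=$1 node=$2 mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # Per-node stats live under sysfs; fall back to the global file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix, as above
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo_sketch HugePages_Total 0   # per the dump above, 512 on this box

The trace's per-field continue iterations are exactly this loop, just with xtrace echoing every comparison.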
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:06:01.225 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:01.225 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:01.225 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:01.225 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:01.225 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:01.225 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:01.225 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:01.225 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:01.485 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:01.485 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4547832 kB' 'MemUsed: 7695140 kB' 'SwapCached: 0 kB' 'Active: 1664184 kB' 'Inactive: 4526372 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 147516 kB' 'Active(file): 1663120 kB' 'Inactive(file): 4378856 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 6053644 kB' 'Mapped: 69188 kB' 'AnonPages: 166232 kB' 'Shmem: 2596 kB' 'KernelStack: 5032 kB' 'PageTables: 3868 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 248088 kB' 'Slab: 320704 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:01.485
[xtrace condensed: the same setup/common.sh@32 loop now scans the node0 fields above, MemTotal through HugePages_Free, against HugePages_Surp, one continue per non-matching key]
07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:01.486 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:01.486 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:01.486 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:01.486 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:01.486 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:01.486 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:01.486 node0=512 expecting 512 00:06:01.486 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:01.486 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:01.486 00:06:01.486 real 0m0.888s 00:06:01.486 user 0m0.338s 00:06:01.486 sys 0m0.549s 00:06:01.486 07:16:35
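The "node0=512 expecting 512" line is the per-node assertion in compact form: each NUMA node's HugePages_Total must match the count the test asked for. A hedged sketch of the same check done directly against sysfs (a hypothetical standalone helper, not the hugepages.sh code; it assumes the "Node N Key: value" layout of per-node meminfo files):

    expected=512
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # Per-node lines read "Node 0 HugePages_Total:   512"; take the last field.
        total=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
        echo "node${node}=${total} expecting ${expected}"
        [[ $total == "$expected" ]] || exit 1
    done

On this single-node VM (no_nodes=1 in the trace) only node0 is checked, which is why a single "expecting" line appears.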
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:01.486 07:16:35 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:01.486 ************************************ 00:06:01.486 END TEST per_node_1G_alloc 00:06:01.486 ************************************ 00:06:01.486 07:16:35 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:06:01.486 07:16:35 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:01.486 07:16:35 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.486 07:16:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:01.486 ************************************ 00:06:01.486 START TEST even_2G_alloc 00:06:01.486 ************************************ 00:06:01.486 07:16:35 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:06:01.486 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:06:01.486 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:01.486 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:01.486 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:01.486 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:01.486 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:01.486 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:01.486 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:01.486 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:01.486 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:01.486 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:01.486 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:01.486 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:01.486 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:01.487 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:01.487 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:06:01.487 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:01.487 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:01.487 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:01.487 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:06:01.487 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:06:01.487 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:06:01.487 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:01.487 07:16:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:01.745 0000:00:03.0 (1af4 1001): Active devices: 
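In the even_2G_alloc prologue above, get_test_nr_hugepages 2097152 turning into nr_hugepages=1024 is plain division: the requested size and the default hugepage size are both in kB, and 2097152 / 2048 = 1024. A sketch of that step (variable names assumed for illustration, not lifted from hugepages.sh):

    size_kb=2097152                                                # 2 GiB expressed in kB
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo) # 2048 on this box
    nr_hugepages=$(( size_kb / hugepage_kb ))
    echo "nr_hugepages=$nr_hugepages"                              # -> 1024

HUGE_EVEN_ALLOC=yes then asks setup.sh for an even per-node split of that count; with a single node here, all 1024 pages land on node0, as the later dump (HugePages_Total: 1024) confirms.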
mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:01.745 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:02.685 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:06:02.685 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:06:02.685 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:02.685 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:02.685 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:02.685 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:02.685 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:02.685 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:02.685 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:02.685 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:02.685 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:02.685 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:02.685 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:02.685 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:02.685 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:02.685 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:02.685 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:02.685 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:02.685 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.685 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.685 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3502288 kB' 'MemAvailable: 9470840 kB' 'Buffers: 46072 kB' 'Cached: 6007568 kB' 'SwapCached: 0 kB' 'Active: 1664220 kB' 'Inactive: 4526340 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 147516 kB' 'Active(file): 1663156 kB' 'Inactive(file): 4378824 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 166148 kB' 'Mapped: 69032 kB' 'Shmem: 2596 kB' 'KReclaimable: 248088 kB' 'Slab: 320452 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72364 kB' 'KernelStack: 4912 kB' 'PageTables: 3496 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 528616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB' 00:06:02.685
[xtrace condensed: setup/common.sh@32 scans MemTotal through HardwareCorrupted against AnonHugePages, one IFS=': ' / read / continue iteration per field]
07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.687 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:02.687 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:02.687 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:02.687 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:02.687 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:02.687 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:02.687 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:02.687 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:02.687 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:02.687 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:02.687 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:02.687 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:02.687 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:02.687 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3502812 kB' 'MemAvailable: 9471364 kB' 'Buffers: 46072 kB' 'Cached: 6007568 kB' 'SwapCached: 0 kB' 'Active: 1664220 kB' 'Inactive: 4526600 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 147776 kB' 'Active(file): 1663156 kB' 'Inactive(file): 4378824 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 166408 kB' 'Mapped: 69032 kB' 'Shmem: 2596 kB' 'KReclaimable: 248088 kB' 'Slab: 320452 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72364 kB' 'KernelStack: 4912 kB' 'PageTables: 3496 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 528616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB' 00:06:02.687
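The check [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] at the top of verify_nr_hugepages gates the AnonHugePages sample on transparent hugepages not being pinned to "never". The same gate in isolation (a sketch of the pattern, not the exact SPDK code; the sysfs path and bracket convention are stock Linux, the variable names are illustrative):

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # kB; 0 in the dump above
    else
        anon=0   # THP disabled: no anonymous memory can be huge-mapped
    fi
    echo "anon=$anon"

Here the gate passed (THP is at [madvise]) and the dump reports AnonHugePages: 0 kB, so anon=0 either way.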
07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.687 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.687
[xtrace condensed: setup/common.sh@32 scans the fields above, MemTotal onward, against HugePages_Surp with continue; the capture ends mid-scan around SUnreclaim]
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.688 
07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3503592 kB' 'MemAvailable: 9472144 kB' 'Buffers: 46072 kB' 'Cached: 6007568 kB' 'SwapCached: 0 kB' 'Active: 1664220 kB' 'Inactive: 4523480 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 144656 kB' 'Active(file): 1663156 kB' 'Inactive(file): 4378824 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 163028 kB' 'Mapped: 68252 kB' 'Shmem: 2596 kB' 'KReclaimable: 248088 kB' 'Slab: 320452 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72364 kB' 'KernelStack: 4912 kB' 'PageTables: 3496 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 519928 
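For readers following the trace: get_meminfo is plain bash over /proc/meminfo. Below is a minimal sketch of the same field-lookup technique under an assumed name, get_meminfo_sketch; it is not the actual setup/common.sh helper, which uses mapfile and an extglob strip as traced above.

    #!/usr/bin/env bash
    # Hypothetical stand-in for the lookup traced above: split each
    # "Field: value" line on ':' / ' ' and print the value of the first
    # line whose field name matches the requested key.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # Per-node statistics live under sysfs when a node index is supplied.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node "$node" }   # node files prefix every line with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    get_meminfo_sketch HugePages_Surp   # prints 0 for the snapshot above

The IFS=': ' trick is what makes the one-pass scan work: read splits on either the colon or the padding spaces, so var gets the field name and val gets the bare number regardless of alignment.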
00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3503592 kB' 'MemAvailable: 9472144 kB' 'Buffers: 46072 kB' 'Cached: 6007568 kB' 'SwapCached: 0 kB' 'Active: 1664220 kB' 'Inactive: 4523480 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 144656 kB' 'Active(file): 1663156 kB' 'Inactive(file): 4378824 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 163028 kB' 'Mapped: 68252 kB' 'Shmem: 2596 kB' 'KReclaimable: 248088 kB' 'Slab: 320452 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72364 kB' 'KernelStack: 4912 kB' 'PageTables: 3496 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 519928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB'
00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:02.688 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue trace repeated for every other non-matching field in the snapshot above ...]
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:02.952 nr_hugepages=1024
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:06:02.952 resv_hugepages=0
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:02.952 surplus_hugepages=0
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:02.952 anon_hugepages=0
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
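The assertions at hugepages.sh@107 and @109 check that every requested page is accounted for: with 1024 pages of Hugepagesize 2048 kB, the pool is 1024 * 2048 kB = 2097152 kB (2 GiB), matching the 'Hugetlb: 2097152 kB' field in the snapshots. A hedged sketch of that accounting, reconstructed from the trace alone and reusing the get_meminfo_sketch helper above (requested and the variable names are illustrative, not the script's own):

    # Illustrative mirror of the hugepages.sh@107/@109 style checks: the
    # requested page count must equal the kernel's total minus nothing --
    # no surplus and no reserved pages may remain outstanding.
    requested=1024
    nr_hugepages=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    (( requested == nr_hugepages + surp + resv )) || exit 1
    (( requested == nr_hugepages )) || exit 1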
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3503852 kB' 'MemAvailable: 9472404 kB' 'Buffers: 46072 kB' 'Cached: 6007568 kB' 'SwapCached: 0 kB' 'Active: 1664212 kB' 'Inactive: 4523424 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 144600 kB' 'Active(file): 1663156 kB' 'Inactive(file): 4378824 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 162988 kB' 'Mapped: 68188 kB' 'Shmem: 2596 kB' 'KReclaimable: 248088 kB' 'Slab: 320452 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72364 kB' 'KernelStack: 4944 kB' 'PageTables: 3548 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 519928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB'
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:02.952 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue trace repeated for every other non-matching field in the snapshot above ...]
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
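The per-node lookup that begins below switches mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the parser strips. An illustrative per-node walk in the same style, again reusing the hypothetical get_meminfo_sketch from above (the node+([0-9]) glob needs extglob, as in the traced script):

    # Illustrative mirror of the hugepages.sh@29 node walk and the
    # common.sh@23-24 sysfs fallback.
    shopt -s extglob
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}                    # "node0" -> "0"
        surp=$(get_meminfo_sketch HugePages_Surp "$node")
        echo "node$node: HugePages_Surp=$surp"     # expect 0 on this box
    done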
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:02.954 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3503852 kB' 'MemUsed: 8739120 kB' 'SwapCached: 0 kB' 'Active: 1664208 kB' 'Inactive: 4523092 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 144268 kB' 'Active(file): 1663156 kB' 'Inactive(file): 4378824 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 6053640 kB' 'Mapped: 67996 kB' 'AnonPages: 162912 kB' 'Shmem: 2596 kB' 'KernelStack: 5012 kB' 'PageTables: 3536 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 248088 kB' 'Slab: 320444 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:06:02.954 [... xtrace elided: setup/common.sh@31-32 scans each node0 meminfo field above, taking the continue branch for every key, including HugePages_Total and HugePages_Free, until HugePages_Surp matches ...]
00:06:02.955 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:02.955 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:06:02.955 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:06:02.955 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:02.955 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:02.955 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:02.955 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:02.955 node0=1024 expecting 1024
00:06:02.956 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:06:02.956 07:16:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:06:02.956 
00:06:02.956 real	0m1.445s
00:06:02.956 user	0m0.303s
00:06:02.956 sys	0m1.192s
00:06:02.956 07:16:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:02.956 07:16:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:06:02.956 ************************************
00:06:02.956 END TEST even_2G_alloc
00:06:02.956 ************************************
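The long field-by-field runs in the trace above are setup/common.sh's get_meminfo walking a meminfo file: it picks /proc/meminfo (or the per-node sysfs copy when a node argument is given), strips the "Node N " prefix, then reads "key: value" pairs until the requested key matches. A minimal standalone sketch of that logic, reconstructed from the xtrace (the function name is illustrative and the real helper may differ in detail):

# sketch of the meminfo lookup seen in the trace; not the verbatim SPDK source
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # With a node argument, prefer the per-node sysfs copy; with node empty,
    # the path below does not exist and /proc/meminfo is kept (exactly what
    # the [[ -e /sys/devices/system/node/node/meminfo ]] records show).
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # every non-matching key is one "continue" record in the trace
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# In this run: get_meminfo_sketch HugePages_Total  -> 1024
#              get_meminfo_sketch HugePages_Surp 0 -> 0 (node0)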
00:06:02.956 07:16:36 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:06:02.956 07:16:36 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:02.956 07:16:36 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:02.956 07:16:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:02.956 ************************************
00:06:02.956 START TEST odd_alloc
00:06:02.956 ************************************
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:02.956 07:16:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:03.215 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:06:03.473 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
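The odd_alloc parameters above come from a simple size computation: HUGEMEM=2049 MiB is converted to 2098176 kB and divided by the 2048 kB default hugepage size, giving the deliberately odd count of 1025 pages. A sketch of that arithmetic (the ceiling-style rounding is inferred from the logged value 1025, not read from the script source):

# arithmetic behind "get_test_nr_hugepages 2098176", sketched from this run
hugemem_mb=2049                                   # HUGEMEM=2049 set by the test
hugepage_kb=2048                                  # Hugepagesize from /proc/meminfo
size_kb=$((hugemem_mb * 1024))                    # 2098176 kB requested
nr_hugepages=$(((size_kb + hugepage_kb - 1) / hugepage_kb))
echo "$nr_hugepages"                              # 1025, matching the trace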
00:06:04.044 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:06:04.044 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:06:04.044 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:04.044 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:04.044 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:04.044 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:04.044 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:04.044 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:04.044 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:04.044 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:04.044 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:04.044 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:04.044 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:04.044 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:04.044 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:04.044 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:04.044 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:04.044 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:04.044 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:04.044 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:04.044 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3505284 kB' 'MemAvailable: 9473836 kB' 'Buffers: 46072 kB' 'Cached: 6007568 kB' 'SwapCached: 0 kB' 'Active: 1664260 kB' 'Inactive: 4523164 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 144384 kB' 'Active(file): 1663200 kB' 'Inactive(file): 4378780 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 163084 kB' 'Mapped: 68036 kB' 'Shmem: 2596 kB' 'KReclaimable: 248088 kB' 'Slab: 320396 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72308 kB' 'KernelStack: 4916 kB' 'PageTables: 3288 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071884 kB' 'Committed_AS: 519928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB'
00:06:04.045 [... xtrace elided: the AnonHugePages lookup walks every field above (MemTotal through HardwareCorrupted) via the continue branch until AnonHugePages matches ...]
00:06:04.046 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:04.046 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:04.046 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:04.046 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:06:04.046 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:04.046 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:04.046 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:04.046 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:04.046 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:04.046 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:04.046 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:04.046 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:04.046 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:04.046 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:04.046 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:04.046 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:04.046 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3505060 kB' 'MemAvailable: 9473616 kB' 'Buffers: 46072 kB' 'Cached: 6007572 kB' 'SwapCached: 0 kB' 'Active: 1664252 kB' 'Inactive: 4522972 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 144188 kB' 'Active(file): 1663200 kB' 'Inactive(file): 4378784 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 162956 kB' 'Mapped: 67992 kB' 'Shmem: 2596 kB' 'KReclaimable: 248088 kB' 'Slab: 320396 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72308 kB' 'KernelStack: 4944 kB' 'PageTables: 3532 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071884 kB' 'Committed_AS: 519928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB'
00:06:04.046 [... xtrace elided: the HugePages_Surp scan takes the continue branch for every field above, including HugePages_Total, HugePages_Free and HugePages_Rsvd, until HugePages_Surp matches ...]
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3505060 kB' 'MemAvailable: 9473616 kB' 'Buffers: 46072 kB' 'Cached: 6007572 kB' 'SwapCached: 0 kB' 'Active: 1664252 kB' 'Inactive: 4523044 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 144260 kB' 'Active(file): 1663200 kB' 'Inactive(file): 4378784 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 162916 kB' 'Mapped: 67992 kB' 'Shmem: 2596 kB' 'KReclaimable: 248088 kB' 'Slab: 320396 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72308 kB' 'KernelStack: 4848 kB' 'PageTables: 3272 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071884 kB' 'Committed_AS: 519928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB' 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:04.047 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.048 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.048 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.048 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:04.048 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.048 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.048 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.048 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:04.048 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.048 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.048 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.048 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:04.048 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.048 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.048 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.048 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:04.048 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.048 07:16:37 setup.sh.hugepages.odd_alloc -- 
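The block above is the xtrace of the get_meminfo helper (traced as setup/common.sh) walking /proc/meminfo one 'Key: value' pair at a time. A minimal standalone sketch of that lookup pattern, simplified from what the trace shows (the real helper captures the file with mapfile and also strips the per-node prefix, covered further down):

```bash
get_meminfo() {
	local get=$1 node=${2:-}
	local var val _
	local mem_f=/proc/meminfo

	# With a node index, read the per-node sysfs file instead.
	# (This sketch does not strip the "Node <n> " prefix those
	# lines carry; the traced helper does, see below.)
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	# Each line is "Key:   value [kB]"; split on ':' and whitespace,
	# print the value of the requested key, skip everything else.
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done <"$mem_f"
	return 1
}

get_meminfo HugePages_Surp   # -> 0 in the run captured above
```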
00:06:04.048 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # ... (scan continues over every key from Active(anon) through HugePages_Free, skipping each via continue) ...
00:06:04.049 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:04.049 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:04.049 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:04.049 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:04.049 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:06:04.049 nr_hugepages=1025
00:06:04.049 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:04.049 resv_hugepages=0
00:06:04.049 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:04.049 surplus_hugepages=0
00:06:04.049 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:04.049 anon_hugepages=0
00:06:04.049 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:06:04.049 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:06:04.049 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:04.049 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:04.049 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18-31 -- # ... (same get_meminfo setup as above: node unset, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip, IFS=': ') ...
00:06:04.049 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3505060 kB' 'MemAvailable: 9473616 kB' 'Buffers: 46072 kB' 'Cached: 6007572 kB' 'SwapCached: 0 kB' 'Active: 1664252 kB' 'Inactive: 4522784 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 144000 kB' 'Active(file): 1663200 kB' 'Inactive(file): 4378784 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 162656 kB' 'Mapped: 67992 kB' 'Shmem: 2596 kB' 'KReclaimable: 248088 kB' 'Slab: 320396 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72308 kB' 'KernelStack: 4916 kB' 'PageTables: 3272 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071884 kB' 'Committed_AS: 519928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB'
00:06:04.049 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # ... (scan for HugePages_Total starts at MemTotal and continues below) ...
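For reference, the accounting that hugepages.sh@107-110 is verifying here, rewritten as a self-contained check. Variable names mirror the trace; get_meminfo is the sketch above, and this is my reading of the check rather than the verbatim test code:

```bash
nr_hugepages=1025                     # what odd_alloc asked the kernel for
surp=$(get_meminfo HugePages_Surp)    # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
total=$(get_meminfo HugePages_Total)  # 1025 in this run

# The pool is consistent when the kernel-reported total equals the
# requested pages plus surplus and reserved: 1025 == 1025 + 0 + 0.
if (( total == nr_hugepages + surp + resv )); then
	echo "hugepage pool consistent"
fi
```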
00:06:04.049 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # ... (scan continues over every key from Active(anon) through FilePmdMapped, skipping each via continue) ...
00:06:04.050 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:04.050 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:06:04.050 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:04.050 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:06:04.050 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:04.050 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:06:04.050 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:04.050 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:06:04.050 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:04.051 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:04.051 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:04.051 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:04.051 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:04.051 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:04.051 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:06:04.051 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:04.051 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:04.051 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:04.051 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:04.051 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:04.051 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:04.051 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:04.051 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:04.051 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:04.051 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3505060 kB' 'MemUsed: 8737912 kB' 'SwapCached: 0 kB' 'Active: 1664252 kB' 'Inactive: 4522784 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 144000 kB' 'Active(file): 1663200 kB' 'Inactive(file): 4378784 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'FilePages: 6053644 kB' 'Mapped: 67992 kB' 'AnonPages: 162656 kB' 'Shmem: 2596 kB' 'KernelStack: 4848 kB' 'PageTables: 3532 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 248088 kB' 'Slab: 320396 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:06:04.051 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # ... (scan for HugePages_Surp over node0's meminfo starts at MemTotal and continues below) ...
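Note the mem_f switch above: given a node argument, get_meminfo reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo. Those per-node lines carry a "Node 0 " prefix, which the extglob substitution traced at common.sh@29 strips so the same "Key: value" parser works for both files. A standalone sketch of that one step:

```bash
shopt -s extglob   # +([0-9]) below is an extglob pattern

# Per-node meminfo lines look like "Node 0 HugePages_Surp: 0";
# strip the "Node <n> " prefix so the generic parser sees the
# same "Key: value" shape as /proc/meminfo.
mapfile -t mem </sys/devices/system/node/node0/meminfo
mem=("${mem[@]#Node +([0-9]) }")

printf '%s\n' "${mem[@]}" | grep HugePages_Surp   # -> "HugePages_Surp: 0"
```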
00:06:04.051 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # ... (scan continues over every remaining node0 meminfo key, skipping each via continue) ...
00:06:04.052 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:04.052 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:04.052 07:16:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:04.052 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:04.052 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:04.052 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:04.052 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:04.052 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:06:04.052 node0=1025 expecting 1025
00:06:04.052 07:16:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:06:04.052 real 0m1.169s
00:06:04.052 user 0m0.301s
00:06:04.052 sys 0m0.925s
00:06:04.052 07:16:37 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:04.052 ************************************
00:06:04.052 END TEST odd_alloc
00:06:04.052 ************************************
00:06:04.052 07:16:37 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:06:04.052 07:16:37 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
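odd_alloc passes: node0 ended up with exactly the 1025 pages requested. The odd page count looks deliberate (this is my reading; the test source itself is not in the log): an even per-node split cannot represent 1025 without rounding, so any rounding bug would surface as a mismatch here. A minimal reproduction of the core assertion outside the harness, using only standard kernel interfaces:

```bash
# Run as root. Standard kernel interfaces; not the SPDK harness itself.
echo 1025 >/proc/sys/vm/nr_hugepages        # request an odd page count
grep -E 'HugePages_(Total|Free)' /proc/meminfo
# Expected on the machine in this log:
#   HugePages_Total:    1025
#   HugePages_Free:     1025
```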
00:06:04.052 07:16:37 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:04.052 07:16:37 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:04.052 07:16:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:04.310 ************************************
00:06:04.310 START TEST custom_alloc
00:06:04.310 ************************************
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
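The sizing step traced at hugepages.sh@174 and @49-57 turns a 1048576 kB (1 GiB) request into 512 pages. The units are not printed in the trace, but 1048576 / 2048 = 512 only works out if size is in kB, which matches the 2048 kB Hugepagesize reported in the snapshots above. As arithmetic, reusing the get_meminfo sketch:

```bash
size_kb=1048576                               # 1 GiB, the @174 argument
hugepagesize_kb=$(get_meminfo Hugepagesize)   # 2048 on this machine
echo "nr_hugepages=$(( size_kb / hugepagesize_kb ))"   # 1048576 / 2048 = 512
```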
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:04.311 07:16:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:04.568 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:06:04.568 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:05.139 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:06:05.139 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:06:05.139 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:06:05.139 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:05.139 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:05.139 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:05.139 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:05.139 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:05.139 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
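The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test just above reads the bracket-selected policy from /sys/kernel/mm/transparent_hugepage/enabled: only when THP is not pinned to [never] does the AnonHugePages counter need to be sampled at all. A sketch of that gate against the same sysfs path (variable names are illustrative):

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *'[never]'* ]]; then
        # THP may hand out anonymous huge pages; sample the counter
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    else
        anon_kb=0
    fi
    echo "anon_hugepages_kb=$anon_kb"                     # 0 in the dump below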
00:06:05.139 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:05.139 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:05.139 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:06:05.139 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:05.139 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:05.139 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:05.139 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:05.139 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:05.139 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:05.140 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:05.140 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:05.140 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4553124 kB' 'MemAvailable: 10521680 kB' 'Buffers: 46072 kB' 'Cached: 6007572 kB' 'SwapCached: 0 kB' 'Active: 1664260 kB' 'Inactive: 4523336 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 144552 kB' 'Active(file): 1663200 kB' 'Inactive(file): 4378784 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'AnonPages: 163396 kB' 'Mapped: 68236 kB' 'Shmem: 2596 kB' 'KReclaimable: 248088 kB' 'Slab: 320260 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72172 kB' 'KernelStack: 5032 kB' 'PageTables: 3548 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 520056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB'
[... xtrace condensed: IFS=': ' read -r var val _ / continue over every field of the dump above until the AnonHugePages entry is reached ...]
00:06:05.141 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:05.141 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:05.141 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:05.141 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
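Because node= is empty here, the [[ -e /sys/devices/system/node/node/meminfo ]] probe fails and every get_meminfo call in this test falls back to the machine-wide /proc/meminfo. With a real node number the per-node file exists, but each of its lines carries a "Node N" prefix, which is exactly what the mem=("${mem[@]#Node +([0-9]) }") expansion strips. A sketch of that per-node path (assumes a node0 sysfs entry, which NUMA-aware kernels expose; not the repo's exact code):

    node=0
    mem_f=/sys/devices/system/node/node$node/meminfo
    [[ -e $mem_f ]] || mem_f=/proc/meminfo   # fall back to the global view
    shopt -s extglob                         # +([0-9]) below is an extglob pattern
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")         # "Node 0 HugePages_Total: 512" -> "HugePages_Total: 512"
    printf '%s\n' "${mem[@]}" | awk -F': *' '$1 == "HugePages_Total" {print $2}'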
00:06:05.141 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:05.141 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:05.141 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:06:05.141 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:05.141 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:05.141 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:05.141 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:05.141 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:05.141 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:05.141 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:05.141 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:05.141 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4553388 kB' 'MemAvailable: 10521944 kB' 'Buffers: 46072 kB' 'Cached: 6007572 kB' 'SwapCached: 0 kB' 'Active: 1664260 kB' 'Inactive: 4523372 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 144588 kB' 'Active(file): 1663200 kB' 'Inactive(file): 4378784 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 348 kB' 'Writeback: 0 kB' 'AnonPages: 163212 kB' 'Mapped: 68332 kB' 'Shmem: 2596 kB' 'KReclaimable: 248088 kB' 'Slab: 320260 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72172 kB' 'KernelStack: 5032 kB' 'PageTables: 3608 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 520056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB'
[... xtrace condensed: IFS=': ' read -r var val _ / continue over every field of the dump above until the HugePages_Surp entry is reached ...]
00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
mem_f=/proc/meminfo 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4553404 kB' 'MemAvailable: 10521960 kB' 'Buffers: 46072 kB' 'Cached: 6007572 kB' 'SwapCached: 0 kB' 'Active: 1664260 kB' 'Inactive: 4523060 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 144276 kB' 'Active(file): 1663200 kB' 'Inactive(file): 4378784 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 162856 kB' 'Mapped: 68076 kB' 'Shmem: 2596 kB' 'KReclaimable: 248088 kB' 'Slab: 320308 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72220 kB' 'KernelStack: 4860 kB' 'PageTables: 3376 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 520056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB' 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.143 07:16:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:05.143 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue
00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:05.144 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical IFS/read/compare/continue xtrace repeated for Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, HugePages_Total and HugePages_Free; none matches HugePages_Rsvd ...]
00:06:05.145 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:05.145 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:05.145 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:05.145 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:05.145 nr_hugepages=512
00:06:05.145 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:06:05.145 resv_hugepages=0
00:06:05.145 surplus_hugepages=0
00:06:05.145 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:05.145 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:05.145 anon_hugepages=0
00:06:05.145 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:05.145 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:06:05.145 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
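The trimmed cycle above is setup/common.sh's get_meminfo walking /proc/meminfo one "Field: value" pair at a time (IFS=': ' plus read -r) and echoing the value of the first field whose name matches the request; here it found HugePages_Rsvd: 0, so resv=0. A minimal self-contained sketch of that loop, reconstructed from the xtrace (the function name is illustrative, not the shipped helper):

#!/usr/bin/env bash
# Sketch: fetch one field from /proc/meminfo the way the traced loop does.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do   # _ soaks up the trailing "kB" unit
        [[ $var == "$get" ]] || continue   # non-matching fields hit "continue"
        echo "$val"
        return 0
    done </proc/meminfo
    return 1                               # field not present
}
get_meminfo_sketch HugePages_Rsvd          # prints 0 on the box traced above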
00:06:05.145 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:05.145 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:05.145 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:06:05.145 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:05.145 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:05.145 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:05.145 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:05.145 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:05.145 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:05.145 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:05.145 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4553404 kB' 'MemAvailable: 10521960 kB' 'Buffers: 46072 kB' 'Cached: 6007572 kB' 'SwapCached: 0 kB' 'Active: 1664260 kB' 'Inactive: 4523060 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 144276 kB' 'Active(file): 1663200 kB' 'Inactive(file): 4378784 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'AnonPages: 162856 kB' 'Mapped: 68076 kB' 'Shmem: 2596 kB' 'KReclaimable: 248088 kB' 'Slab: 320308 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72220 kB' 'KernelStack: 4928 kB' 'PageTables: 3376 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 520056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB'
[... identical IFS/read/compare/continue xtrace repeated for every field from MemTotal through FilePmdMapped; none matches HugePages_Total ...]
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
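get_nodes above enumerates NUMA nodes with the extglob pattern /sys/devices/system/node/node+([0-9]) and records 512 pages for node 0. A standalone sketch of that enumeration (the per-node nr_hugepages path is the standard sysfs layout, assumed here rather than taken from the trace):

#!/usr/bin/env bash
shopt -s extglob nullglob   # +([0-9]) needs extglob; nullglob drops unmatched globs
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} keeps only the digits after the last "node" in the path
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "nodes: ${!nodes_sys[*]} -> 2 MB pages: ${nodes_sys[*]}"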
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:05.146 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:05.147 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4553656 kB' 'MemUsed: 7689316 kB' 'SwapCached: 0 kB' 'Active: 1664260 kB' 'Inactive: 4522908 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 144124 kB' 'Active(file): 1663200 kB' 'Inactive(file): 4378784 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 352 kB' 'Writeback: 0 kB' 'FilePages: 6053644 kB' 'Mapped: 68076 kB' 'AnonPages: 162968 kB' 'Shmem: 2596 kB' 'KernelStack: 4844 kB' 'PageTables: 3596 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 248088 kB' 'Slab: 320308 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... identical IFS/read/compare/continue xtrace repeated for every node0 field from MemTotal through HugePages_Free; none matches HugePages_Surp ...]
00:06:05.148 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:05.148 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:06:05.148 07:16:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:06:05.148 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:05.148 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:05.148 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:05.148 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:05.148 node0=512 expecting 512
00:06:05.148 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:06:05.148 07:16:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:06:05.148 
00:06:05.148 real 0m1.009s
00:06:05.148 user 0m0.308s
00:06:05.148 sys 0m0.759s
00:06:05.148 07:16:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:05.148 07:16:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:06:05.148 ************************************
00:06:05.148 END TEST custom_alloc
00:06:05.148 ************************************
00:06:05.148 07:16:38 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:06:05.148 07:16:38 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:05.148 07:16:38 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:05.148 07:16:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:06:05.148 ************************************
00:06:05.148 START TEST no_shrink_alloc
00:06:05.148 ************************************
00:06:05.148 07:16:39 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
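get_test_nr_hugepages, traced next, turns a size request in kB into a page count: with the Hugepagesize: 2048 kB reported in the meminfo snapshots above, 2097152 kB / 2048 kB = 1024, which is exactly the nr_hugepages=1024 assigned below. The same arithmetic as a sketch (variable names mirror the trace):

#!/usr/bin/env bash
size=2097152              # requested size in kB (the argument in the trace below)
default_hugepages=2048    # Hugepagesize in kB, from the meminfo snapshots above
echo $(( size / default_hugepages ))   # -> 1024 huge pages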
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
07:16:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:05.714 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:06:05.714 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
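The last check above is verify_nr_hugepages probing the transparent-hugepage mode: the kernel brackets the active entry in /sys/kernel/mm/transparent_hugepage/enabled (here "[madvise]"), and the pattern *\[\n\e\v\e\r\]* only fails when THP is hard-disabled, in which case the AnonHugePages lookup that follows would be pointless. A minimal sketch of the same probe (standard sysfs path; the interpretation of the guard is a reading of the trace, not a quote from the script):

#!/usr/bin/env bash
# The bracketed word is the currently active THP mode, e.g. "always [madvise] never".
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    echo "THP active: $thp"    # worth reading AnonHugePages from meminfo
else
    echo "THP disabled"        # AnonHugePages will stay 0
fi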
setup/common.sh@17 -- # local get=AnonHugePages 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3509048 kB' 'MemAvailable: 9477600 kB' 'Buffers: 46080 kB' 'Cached: 6007560 kB' 'SwapCached: 0 kB' 'Active: 1664260 kB' 'Inactive: 4523152 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 144372 kB' 'Active(file): 1663200 kB' 'Inactive(file): 4378780 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 356 kB' 'Writeback: 0 kB' 'AnonPages: 163100 kB' 'Mapped: 67992 kB' 'Shmem: 2596 kB' 'KReclaimable: 248088 kB' 'Slab: 320204 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72116 kB' 'KernelStack: 4948 kB' 'PageTables: 3664 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 520056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB' 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.653 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.654 07:16:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:06.654 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.654 07:16:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (per-key scan of /proc/meminfo against AnonHugePages continues: Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted are each read and skipped via 'continue')
00:06:06.655 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.655 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:06.655 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:06.655 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:06.655 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:06.655 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # (get_meminfo entry sequence as traced above: local get=HugePages_Surp, local node=, local var val, local mem_f mem, mem_f=/proc/meminfo, node meminfo path absent, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }"), IFS=': ', read -r var val _)
00:06:06.655 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3509312 kB' 'MemAvailable: 9477864 kB' 'Buffers: 46080 kB' 'Cached: 6007560 kB' 'SwapCached: 0 kB' 'Active: 1664260 kB' 'Inactive: 4523208 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 144428 kB' 'Active(file): 1663200 kB' 'Inactive(file): 4378780 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 360 kB' 'Writeback: 0 kB' 'AnonPages: 163132 kB' 'Mapped: 67992 kB' 'Shmem: 2596 kB' 'KReclaimable: 248088 kB' 'Slab: 320204 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72116 kB' 'KernelStack: 4916 kB' 'PageTables: 3572 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 520056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB'
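For readability, here is a minimal sketch of the lookup routine these trace lines exercise. It is reconstructed from the setup/common.sh@17-33 xtrace above, not the verbatim SPDK source; the per-node branch and error handling are reduced to their essentials:

# Sketch (assumed simplification) of the get_meminfo pattern traced above.
get_meminfo() {
    local get=$1 node=${2:-}              # key to look up; optional NUMA node
    local var val _
    local mem_f=/proc/meminfo mem line
    # With a node argument, prefer the node-local meminfo when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    shopt -s extglob
    # Node files prefix every line with "Node N "; strip it (common.sh@29).
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # the long 'continue' runs seen above
        echo "$val"
        return 0
    done
    return 1
}

In this run each lookup succeeds: get_meminfo AnonHugePages and get_meminfo HugePages_Surp both print 0, matching the anon=0 and surp=0 assignments in the trace.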
00:06:06.655 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (per-key scan of /proc/meminfo against HugePages_Surp elided: every key from MemTotal through HugePages_Free is read and skipped via 'continue')
00:06:06.656 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.656 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:06.656 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:06.656 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:06.656 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:06.656 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # (get_meminfo entry sequence as above, with get=HugePages_Rsvd)
00:06:06.656 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3510832 kB' 'MemAvailable: 9479384 kB' 'Buffers: 46080 kB' 'Cached: 6007560 kB' 'SwapCached: 0 kB' 'Active: 1664268 kB' 'Inactive: 4523040 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 144260 kB' 'Active(file): 1663200 kB' 'Inactive(file): 4378780 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 360 kB' 'Writeback: 0 kB' 'AnonPages: 162676 kB' 'Mapped: 67992 kB' 'Shmem: 2596 kB' 'KReclaimable: 248088 kB' 'Slab: 320204 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72116 kB' 'KernelStack: 4804 kB' 'PageTables: 3248 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 520056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB'
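The odd-looking \H\u\g\e\P\a\g\e\s\_\S\u\r\p in the xtrace is not corruption: the right-hand side of a [[ == ]] test is a glob pattern, so the script backslash-escapes every character to force a literal comparison, and set -x prints those escapes verbatim. A minimal demo (the key name is taken from this run):

key=HugePages_Surp
[[ $key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] && echo literal   # escaped form seen in the trace
[[ $key == "HugePages_Surp" ]] && echo quoted                 # equivalent literal match, quoted
[[ $key == HugePages_* ]] && echo glob                        # unescaped: glob semantics, also matches here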
00:06:06.657 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (per-key scan of /proc/meminfo against HugePages_Rsvd elided: every key from MemTotal through HugePages_Free is read and skipped via 'continue')
00:06:06.658 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.658 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:06.658 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:06.658 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
nr_hugepages=1024 00:06:06.658 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 resv_hugepages=0 00:06:06.658 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 surplus_hugepages=0 00:06:06.658 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 anon_hugepages=0 00:06:06.658 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:06.658 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:06.658 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:06.658 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:06.658 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # (get_meminfo entry sequence as above, with get=HugePages_Total)
00:06:06.659 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3510588 kB' 'MemAvailable: 9479140 kB' 'Buffers: 46080 kB' 'Cached: 6007560 kB' 'SwapCached: 0 kB' 'Active: 1664252 kB' 'Inactive: 4522532 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 143752 kB' 'Active(file): 1663200 kB' 'Inactive(file): 4378780 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 360 kB' 'Writeback: 0 kB' 'AnonPages: 162412 kB' 'Mapped: 67956 kB' 'Shmem: 2596 kB' 'KReclaimable: 248088 kB' 'Slab: 320284 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72196 kB' 'KernelStack: 4796 kB' 'PageTables: 3332 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 520056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB'
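Pulling the three lookups and the checks at setup/hugepages.sh@97-110 together, the invariant the no_shrink_alloc test asserts here is roughly the following. Values are taken from this run; the variable names mirror the trace, and the surrounding harness is assumed:

anon=$(get_meminfo AnonHugePages)    # 0 kB of anonymous THP   (hugepages.sh@97)
surp=$(get_meminfo HugePages_Surp)   # 0 surplus huge pages    (hugepages.sh@99)
resv=$(get_meminfo HugePages_Rsvd)   # 0 reserved huge pages   (hugepages.sh@100)
nr_hugepages=1024                    # the allocation under test
# The requested pool must be fully accounted for: no surplus or reserved
# pages masking a shrink, and the pool size itself must be intact.
(( 1024 == nr_hugepages + surp + resv ))   # hugepages.sh@107
(( 1024 == nr_hugepages ))                 # hugepages.sh@109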
00:06:06.659 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (per-key scan of /proc/meminfo against HugePages_Total in progress: MemTotal through ShmemPmdMapped read and skipped via 'continue'; the scan continues below) 00:06:06.660 07:16:40
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:06.660 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3510588 kB' 'MemUsed: 8732384 kB' 'SwapCached: 0 kB' 'Active: 1664252 kB' 'Inactive: 4522492 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 143712 kB' 'Active(file): 1663200 kB' 'Inactive(file): 4378780 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 360 kB' 'Writeback: 0 kB' 'FilePages: 6053640 kB' 'Mapped: 67956 kB' 'AnonPages: 162372 kB' 'Shmem: 2596 kB' 'KernelStack: 4848 kB' 'PageTables: 3292 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 248088 kB' 'Slab: 320284 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... get_meminfo HugePages_Surp on node0: identical IFS=': ' / read / compare / continue iterations, one per field from MemTotal through HugePages_Free ...]
00:06:06.661 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:06.661 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:06.661 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
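That lookup just returned HugePages_Surp=0 for node 0. The same per-node counters are also exposed as dedicated sysfs attributes, which avoids meminfo parsing entirely; a sketch for the default 2048 kB page size (standard kernel paths, not something this harness uses):

  #!/usr/bin/env bash
  # Sketch: per-node 2 MiB hugepage counters straight from sysfs.
  for node in /sys/devices/system/node/node[0-9]*; do
      dir=$node/hugepages/hugepages-2048kB
      [[ -d $dir ]] || continue
      printf '%s: total=%s free=%s surplus=%s\n' "${node##*/}" \
          "$(<"$dir/nr_hugepages")" \
          "$(<"$dir/free_hugepages")" \
          "$(<"$dir/surplus_hugepages")"
  done
  # On this VM it would print: node0: total=1024 free=1024 surplus=0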
00:06:06.661 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:06.661 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:06.661 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:06.661 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:06.661 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:06:06.661 node0=1024 expecting 1024
00:06:06.661 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:06:06.661 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:06:06.661 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:06:06.661 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:06:06.661 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:06:06.661 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:06.923 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:06:06.923 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:06:06.923 INFO: Requested 512 hugepages but 1024 already allocated on node0
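setup.sh is invoked here with NRHUGE=512 while 1024 pages are already reserved, and the INFO line confirms it keeps the larger pool instead of shrinking it down to 512 -- exactly the behaviour this no_shrink_alloc test pins down. The gist of that grow-only policy, sketched for a single-node system with the default page size (variable names and message text are illustrative, not lifted from scripts/setup.sh):

  #!/usr/bin/env bash
  # Sketch: reserve hugepages only when fewer than requested exist;
  # never shrink an existing reservation.
  NRHUGE=${NRHUGE:-512}
  nr_f=/proc/sys/vm/nr_hugepages

  current=$(<"$nr_f")
  if (( current >= NRHUGE )); then
      echo "INFO: Requested $NRHUGE hugepages but $current already allocated"
  else
      echo "$NRHUGE" > "$nr_f"   # needs root; the kernel tops the pool up
  fi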
00:06:06.923 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:06:06.923 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:06:06.923 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:06:06.923 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:06:06.923 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:06:06.923 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:06:06.923 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:06:06.923 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:06.923 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:06.923 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:06.923 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:06.923 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:06.923 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:06.923 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:06.923 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:06.923 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:06.923 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:06.923 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:06.923 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:06.923 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:06.923 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3508612 kB' 'MemAvailable: 9477176 kB' 'Buffers: 46080 kB' 'Cached: 6007564 kB' 'SwapCached: 0 kB' 'Active: 1664260 kB' 'Inactive: 4523576 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 144792 kB' 'Active(file): 1663208 kB' 'Inactive(file): 4378784 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 428 kB' 'Writeback: 0 kB' 'AnonPages: 163424 kB' 'Mapped: 68020 kB' 'Shmem: 2588 kB' 'KReclaimable: 248088 kB' 'Slab: 320428 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72340 kB' 'KernelStack: 4928 kB' 'PageTables: 3720 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 519264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20436 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB'
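The @96 test at the top of this verify_nr_hugepages pass compares the contents of the kernel's THP mode file against the pattern *\[\n\e\v\e\r\]* (the xtrace-escaped form of *"[never]"*): the bracketed word in that file is the active mode, so "always [madvise] never" means transparent hugepages are available and AnonHugePages is worth sampling (it reads 0 kB in the snapshot above). A sketch of the same probe against the standard sysfs path:

  #!/usr/bin/env bash
  # Sketch: only sample AnonHugePages when THP is not hard-disabled,
  # i.e. when the mode file does not show "[never]".
  thp=/sys/kernel/mm/transparent_hugepage/enabled
  if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
      echo "AnonHugePages: $anon kB"
  else
      echo "THP disabled; skipping AnonHugePages check"
  fi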
[... get_meminfo AnonHugePages: identical compare/continue iterations, one per field from MemTotal through HardwareCorrupted ...]
00:06:06.925 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:06.925 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:06.925 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:06.925 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:06:06.925 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:06.925 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:06.925 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:06.925 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:06.925 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:06.925 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:06.925 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:06.925 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:06.925 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:06.925 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:06.925 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:06.925 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:06.925 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3508612 kB' 'MemAvailable: 9477176 kB' 'Buffers: 46080 kB' 'Cached: 6007564 kB' 'SwapCached: 0 kB' 'Active: 1664260 kB' 'Inactive: 4524356 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 145572 kB' 'Active(file): 1663208 kB' 'Inactive(file): 4378784 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 428 kB' 'Writeback: 0 kB' 'AnonPages: 164204 kB' 'Mapped: 68020 kB' 'Shmem: 2588 kB' 'KReclaimable: 248088 kB' 'Slab: 320428 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72340 kB' 'KernelStack: 4928 kB' 'PageTables: 3720 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 565396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20420 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB'
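One idiom from the bookkeeping earlier in this run (hugepages.sh@126-@128) is easy to miss: the per-node counts are recorded by using them as array subscripts (sorted_t[nodes_test[node]]=1), so the expected and observed distributions can be compared as sets of values rather than element by element. A compact illustration with the node-0 numbers from this run (the data is hard-coded for the example):

  #!/usr/bin/env bash
  # Sketch: compare expected vs. observed per-node hugepage counts by
  # collecting each count as an array subscript, as hugepages.sh@127 does.
  declare -a nodes_test=([0]=1024)   # expected pages per node
  declare -a nodes_sys=([0]=1024)    # pages the system reports per node
  declare -a sorted_t sorted_s

  for node in "${!nodes_test[@]}"; do
      sorted_t[nodes_test[node]]=1   # sets index 1024 in sorted_t
      sorted_s[nodes_sys[node]]=1    # sets index 1024 in sorted_s
      echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
  done
  # Identical subscript lists => the distributions agree.
  [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo OK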
[... get_meminfo HugePages_Surp (system-wide): identical compare/continue iterations, one per field from MemTotal through FilePmdMapped ...]
00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3508884 kB' 'MemAvailable: 9477448 kB' 'Buffers: 46080 kB' 'Cached: 6007564 kB' 'SwapCached: 0 kB' 'Active: 1664260 kB' 'Inactive: 4523044 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 144260 kB' 'Active(file): 1663208 kB' 'Inactive(file): 4378784 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 428 kB' 'Writeback: 0 kB' 'AnonPages: 162892 kB' 'Mapped: 68020 kB' 'Shmem: 2588 kB' 'KReclaimable: 248088 kB' 'Slab: 
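For readability, here is a minimal sketch of the parser this trace keeps stepping through, reconstructed from the set -x lines alone. It is not the verbatim setup/common.sh source; the [[ -n '' ]] test at common.sh@25 is omitted because its operand is empty everywhere in this run.

shopt -s extglob                      # the +([0-9]) pattern below needs extglob
get_meminfo() {                       # usage: get_meminfo <key> [<node>]
	local get=$1 node=$2
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	# with a node argument, prefer the per-node sysfs copy of meminfo
	[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")  # strip the "Node N " prefix sysfs adds
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue  # the long key scans seen in this log
		echo "$val"                       # any trailing "kB" unit lands in $_
		return 0
	done < <(printf '%s\n' "${mem[@]}")
}

Called as get_meminfo HugePages_Surp it prints 0 on this host, which is what setup/hugepages.sh@99 just captured into surp; the HugePages_Rsvd call traced next goes through the identical scan.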
00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:07.189 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3508884 kB' 'MemAvailable: 9477448 kB' 'Buffers: 46080 kB' 'Cached: 6007564 kB' 'SwapCached: 0 kB' 'Active: 1664260 kB' 'Inactive: 4523044 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 144260 kB' 'Active(file): 1663208 kB' 'Inactive(file): 4378784 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 428 kB' 'Writeback: 0 kB' 'AnonPages: 162892 kB' 'Mapped: 68020 kB' 'Shmem: 2588 kB' 'KReclaimable: 248088 kB' 'Slab: 320428 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72340 kB' 'KernelStack: 4912 kB' 'PageTables: 3680 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 520056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB'
[... 00:06:07.189-190 set -x scan, setup/common.sh@31-32: every key from MemTotal through HugePages_Free fails [[ $var == HugePages_Rsvd ]] and hits "continue" ...]
00:06:07.190 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:07.190 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:07.190 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:07.190 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:07.190 nr_hugepages=1024
00:06:07.190 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:06:07.190 resv_hugepages=0
00:06:07.190 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:07.190 surplus_hugepages=0
00:06:07.190 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:07.190 anon_hugepages=0
00:06:07.190 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:07.190 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:07.190 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
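The arithmetic behind hugepages.sh@107-110 is the invariant this no_shrink_alloc step verifies: with zero surplus and zero reserved pages, the whole configured pool must still be intact. A compact restatement as a sketch; the names free and total are illustrative assumptions, since the trace only shows the already-expanded literal 1024 on the left-hand sides:

# observed in this run: nr_hugepages=1024, surp=0, resv=0
free=$(get_meminfo HugePages_Free)          # assumption: source of @107's 1024
(( free == nr_hugepages + surp + resv ))    # hugepages.sh@107: no page leaked
(( free == nr_hugepages ))                  # hugepages.sh@109: all pages still free
total=$(get_meminfo HugePages_Total)        # hugepages.sh@110 re-reads the pool size
(( total == nr_hugepages + surp + resv ))   # pool == configured + surplus + reserved

With the 2048 kB hugepage size reported above, this pins 1024 * 2048 kB = 2097152 kB, matching the 'Hugetlb: 2097152 kB' line in the dumps.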
00:06:07.190 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:07.190 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:07.190 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:07.190 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:07.190 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:07.190 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:07.191 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:07.191 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:07.191 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:07.191 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:07.191 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:07.191 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:07.191 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3508884 kB' 'MemAvailable: 9477448 kB' 'Buffers: 46080 kB' 'Cached: 6007564 kB' 'SwapCached: 0 kB' 'Active: 1664260 kB' 'Inactive: 4523268 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 144484 kB' 'Active(file): 1663208 kB' 'Inactive(file): 4378784 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 428 kB' 'Writeback: 0 kB' 'AnonPages: 163120 kB' 'Mapped: 68060 kB' 'Shmem: 2588 kB' 'KReclaimable: 248088 kB' 'Slab: 320428 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72340 kB' 'KernelStack: 4912 kB' 'PageTables: 3744 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 520056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 2988032 kB' 'DirectMap1G: 11534336 kB'
[... 00:06:07.191-192 set -x scan, setup/common.sh@31-32: every key from MemTotal through FilePmdMapped fails [[ $var == HugePages_Total ]] and hits "continue" ...]
00:06:07.192 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:07.192 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:06:07.192 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:07.192 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:07.192 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:07.192 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:06:07.192 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:07.192 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:07.192 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:06:07.192 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:07.192 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:07.192 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
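get_nodes (hugepages.sh@27-33, traced just above) discovers the NUMA layout with a single sysfs glob. A sketch under the assumption that the per-node page count is read from the node's 2048 kB hugepages entry; the trace only shows the already-expanded result, nodes_sys[0]=1024:

shopt -s extglob                 # +([0-9]) again, this time as a pathname glob
declare -a nodes_sys=()
get_nodes() {
	local node
	for node in /sys/devices/system/node/node+([0-9]); do
		# key is the node number; the right-hand path is an assumption,
		# the trace only shows the expanded assignment (=1024)
		nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
	done
	no_nodes=${#nodes_sys[@]}    # 1 on this single-node VM (no_nodes=1 above)
	(( no_nodes > 0 ))           # fail the test if no NUMA node is visible
}

The loop at hugepages.sh@115-116 then folds the reserved count into each node's expected total before the per-node re-check, which is what the get_meminfo HugePages_Surp 0 call below performs against /sys/devices/system/node/node0/meminfo.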
00:06:07.192 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:07.192 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:07.192 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:07.192 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:07.192 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:07.192 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:07.192 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 3508868 kB' 'MemUsed: 8734104 kB' 'SwapCached: 0 kB' 'Active: 1664256 kB' 'Inactive: 4522784 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 144000 kB' 'Active(file): 1663208 kB' 'Inactive(file): 4378784 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 636 kB' 'Writeback: 0 kB' 'FilePages: 6053656 kB' 'Mapped: 68044 kB' 'AnonPages: 162672 kB' 'Shmem: 2596 kB' 'KernelStack: 4840 kB' 'PageTables: 3572 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 248088 kB' 'Slab: 320504 kB' 'SReclaimable: 248088 kB' 'SUnreclaim: 72416 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-32 then walks the fields above one at a time -- IFS=': ', read -r var val _, compare the key against HugePages_Surp, continue on mismatch -- for MemTotal through ShmemPmdMapped; roughly thirty identical IFS/read/compare/continue entries are elided here, and the trace resumes below at FileHugePages.]
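The elided loop reads more clearly as a standalone sketch. This is a minimal re-creation of the pattern the trace shows (mapfile the per-node meminfo, strip the "Node N " prefix, then split each line on ': ' until the wanted key matches); get_node_meminfo is a hypothetical name for illustration, not a helper from the SPDK scripts:

get_node_meminfo() {   # hypothetical helper mirroring the traced loop
    local node=$1 want=$2 var val _ line
    local mem_f=/proc/meminfo
    # prefer the per-NUMA-node view when it exists (as common.sh@23-24 does)
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local mem=()
    mapfile -t mem < "$mem_f"
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")   # sysfs prefixes every line with "Node 0 "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$want" ]] || continue   # skip every field until the key matches
        echo "$val"
        return 0
    done
    return 1
}
get_node_meminfo 0 HugePages_Surp   # prints 0 for the node dumped above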
00:06:07.193 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.193 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.193 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.193 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.193 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.193 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.193 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.193 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.193 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.193 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.193 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.193 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.193 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.193 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.193 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:07.193 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.193 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.193 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.194 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:07.194 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:07.194 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:07.194 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:07.194 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:07.194 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:07.194 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:07.194 node0=1024 expecting 1024 00:06:07.194 07:16:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:07.194 00:06:07.194 real 0m1.886s 00:06:07.194 user 0m0.662s 00:06:07.194 sys 0m1.331s 00:06:07.194 07:16:40 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.194 07:16:40 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:07.194 ************************************ 00:06:07.194 END TEST no_shrink_alloc 00:06:07.194 ************************************ 00:06:07.194 07:16:40 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:06:07.194 07:16:40 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:07.194 07:16:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in 
"${!nodes_sys[@]}" 00:06:07.194 07:16:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:07.194 07:16:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:07.194 07:16:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:07.194 07:16:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:07.194 07:16:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:07.194 07:16:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:07.194 ************************************ 00:06:07.194 END TEST hugepages 00:06:07.194 ************************************ 00:06:07.194 00:06:07.194 real 0m8.649s 00:06:07.194 user 0m2.561s 00:06:07.194 sys 0m6.379s 00:06:07.194 07:16:40 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.194 07:16:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:07.194 07:16:41 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:07.194 07:16:41 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.194 07:16:41 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.194 07:16:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:07.194 ************************************ 00:06:07.194 START TEST driver 00:06:07.194 ************************************ 00:06:07.194 07:16:41 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:07.453 * Looking for test storage... 00:06:07.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:07.453 07:16:41 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:06:07.453 07:16:41 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:07.453 07:16:41 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:08.021 07:16:41 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:06:08.021 07:16:41 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:08.021 07:16:41 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.021 07:16:41 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:08.021 ************************************ 00:06:08.021 START TEST guess_driver 00:06:08.021 ************************************ 00:06:08.021 07:16:41 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:06:08.021 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:06:08.021 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:06:08.021 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:06:08.021 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:06:08.021 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:06:08.021 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:06:08.021 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:06:08.021 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:06:08.021 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- 
# iommu_groups=(/sys/kernel/iommu_groups/*) 00:06:08.022 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:06:08.022 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ N == Y ]] 00:06:08.022 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:06:08.022 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:06:08.022 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:06:08.022 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:06:08.022 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:06:08.022 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:06:08.022 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 00:06:08.022 insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:06:08.022 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:06:08.022 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:06:08.022 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:06:08.022 Looking for driver=uio_pci_generic 00:06:08.022 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:06:08.022 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.022 07:16:41 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:06:08.022 07:16:41 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:06:08.022 07:16:41 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:08.590 07:16:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:06:08.590 07:16:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:06:08.590 07:16:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:08.591 07:16:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:08.591 07:16:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:08.591 07:16:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:09.526 07:16:43 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:09.526 07:16:43 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:06:09.526 07:16:43 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:09.526 07:16:43 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:10.092 00:06:10.092 real 0m2.212s 00:06:10.092 user 0m0.480s 00:06:10.092 sys 0m1.736s 00:06:10.092 07:16:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.092 ************************************ 00:06:10.092 END TEST guess_driver 00:06:10.092 ************************************ 00:06:10.092 07:16:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:10.350 00:06:10.350 real 0m2.972s 00:06:10.350 user 
0m0.797s 00:06:10.350 sys 0m2.223s 00:06:10.350 07:16:43 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.350 07:16:43 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:10.350 ************************************ 00:06:10.350 END TEST driver 00:06:10.350 ************************************ 00:06:10.350 07:16:44 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:10.350 07:16:44 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.350 07:16:44 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.350 07:16:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:10.350 ************************************ 00:06:10.350 START TEST devices 00:06:10.350 ************************************ 00:06:10.350 07:16:44 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:10.350 * Looking for test storage... 00:06:10.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:10.350 07:16:44 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:10.350 07:16:44 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:10.350 07:16:44 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:10.350 07:16:44 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:10.917 07:16:44 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:10.917 07:16:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:06:10.917 07:16:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:06:10.917 07:16:44 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:06:10.917 07:16:44 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:10.917 07:16:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:06:10.917 07:16:44 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:06:10.917 07:16:44 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:10.917 07:16:44 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:10.917 07:16:44 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:10.917 07:16:44 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:10.917 07:16:44 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:10.917 07:16:44 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:10.917 07:16:44 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:10.917 07:16:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:10.917 07:16:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:10.917 07:16:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:10.917 07:16:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:06:10.917 07:16:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:10.917 07:16:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:10.917 07:16:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:10.917 07:16:44 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 
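The guess_driver run traced above boils down to one decision worth restating compactly (the spdk-gpt.py probe just issued prints its verdict below). A hedged sketch, not the script itself: pick_kernel_driver is an invented name, and the echoed driver strings simply follow the markers in the log.

pick_kernel_driver() {   # invented name; mirrors the traced vfio-vs-uio choice
    local unsafe_vfio=N
    shopt -s nullglob
    # vfio needs IOMMU groups, unless unsafe no-IOMMU mode is switched on
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
        return 0
    fi
    # otherwise fall back to uio_pci_generic, if modprobe resolves it to a .ko
    if modprobe --show-depends uio_pci_generic 2> /dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi
    echo 'No valid driver found' >&2
    return 1
}

On this VM the trace shows zero IOMMU groups and unsafe_vfio=N, so the sketch would land on uio_pci_generic, matching the "Looking for driver=uio_pci_generic" marker above.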
00:06:11.175 No valid GPT data, bailing 00:06:11.175 07:16:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:11.175 07:16:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:11.175 07:16:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:11.175 07:16:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:11.175 07:16:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:11.175 07:16:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:11.175 07:16:44 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:06:11.175 07:16:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:06:11.175 07:16:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:11.175 07:16:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:06:11.175 07:16:44 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:06:11.175 07:16:44 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:11.175 07:16:44 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:11.175 07:16:44 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.175 07:16:44 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.175 07:16:44 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:11.175 ************************************ 00:06:11.175 START TEST nvme_mount 00:06:11.175 ************************************ 00:06:11.175 07:16:44 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:06:11.175 07:16:44 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:11.175 07:16:44 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:11.175 07:16:44 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:11.175 07:16:44 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:11.175 07:16:44 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:11.175 07:16:44 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:11.175 07:16:44 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:11.175 07:16:44 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:11.175 07:16:44 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:11.175 07:16:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:11.175 07:16:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:11.175 07:16:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:11.175 07:16:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:11.175 07:16:44 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:11.175 07:16:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:11.175 07:16:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:11.175 07:16:44 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:11.175 07:16:44 setup.sh.devices.nvme_mount -- 
setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:11.175 07:16:44 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:12.118 Creating new GPT entries in memory. 00:06:12.118 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:12.118 other utilities. 00:06:12.118 07:16:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:12.118 07:16:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:12.118 07:16:45 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:12.118 07:16:45 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:12.118 07:16:45 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:13.495 Creating new GPT entries in memory. 00:06:13.495 The operation has completed successfully. 00:06:13.495 07:16:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:13.495 07:16:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:13.495 07:16:46 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 116882 00:06:13.495 07:16:46 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:13.495 07:16:46 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:06:13.495 07:16:46 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:13.495 07:16:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:13.496 07:16:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:13.496 07:16:47 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:13.496 07:16:47 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:10.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:13.496 07:16:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:06:13.496 07:16:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:13.496 07:16:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:13.496 07:16:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:13.496 07:16:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:13.496 07:16:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:13.496 07:16:47 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:13.496 07:16:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:13.496 07:16:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.496 07:16:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:06:13.496 07:16:47 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:13.496 07:16:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:13.496 07:16:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:13.496 07:16:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:13.496 07:16:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:13.496 07:16:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:13.496 07:16:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.496 07:16:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:13.496 07:16:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:13.754 07:16:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:13.754 07:16:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.691 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:14.691 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:14.691 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:14.691 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:14.691 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:14.691 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:14.691 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:14.691 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:14.691 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:14.691 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:14.691 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:14.691 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:14.691 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:14.691 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:14.691 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:14.691 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:14.691 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:14.691 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:06:14.691 07:16:48 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:06:14.691 07:16:48 setup.sh.devices.nvme_mount 
-- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:14.691 07:16:48 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:14.692 07:16:48 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:14.692 07:16:48 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:14.692 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:10.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:14.692 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:06:14.692 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:14.692 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:14.692 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:14.692 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:14.692 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:14.692 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:14.692 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:14.692 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.692 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:06:14.692 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:14.692 07:16:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:14.692 07:16:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:14.950 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:14.950 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:14.950 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:14.950 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.950 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:14.950 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:15.209 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:15.209 07:16:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.145 07:16:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:16.145 07:16:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:16.145 07:16:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:16.145 07:16:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 
-- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:16.145 07:16:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:16.145 07:16:49 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:16.145 07:16:49 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:10.0 data@nvme0n1 '' '' 00:06:16.145 07:16:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:06:16.145 07:16:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:16.145 07:16:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:16.145 07:16:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:16.145 07:16:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:16.145 07:16:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:16.145 07:16:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:16.145 07:16:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.145 07:16:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:06:16.145 07:16:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:16.145 07:16:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:16.145 07:16:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:16.404 07:16:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:16.404 07:16:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:16.404 07:16:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:16.404 07:16:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.404 07:16:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:16.404 07:16:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.663 07:16:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:16.663 07:16:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:18.577 07:16:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:18.577 07:16:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:18.577 07:16:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:18.577 07:16:52 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:18.577 07:16:52 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:18.577 07:16:52 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:18.577 07:16:52 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:18.577 07:16:52 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:18.577 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:18.577 
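Condensing the nvme_mount test that just finished: zap the GPT, carve one 262,144-sector partition, put ext4 on it, mount it, drop the dummy test file, then unwind. A sketch under the assumption of a scratch disk at $disk; every byte on it is destroyed, so do not point this at real data.

disk=/dev/nvme0n1    # scratch disk, as in the trace
mnt=/tmp/nvme_mount  # stand-in for the repo's test/setup/nvme_mount directory

sgdisk "$disk" --zap-all              # wipe GPT and protective MBR
sgdisk "$disk" --new=1:2048:264191    # partition 1, sectors 2048..264191
mkfs.ext4 -qF "${disk}p1"             # quiet, forced, as common.sh@71 traces
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                # the file the verify step looks for

umount "$mnt"                         # teardown mirrors cleanup_nvme
wipefs --all "${disk}p1"              # erases the ext4 magic (the "53 ef" above)
wipefs --all "$disk"                  # erases the GPT/PMBR signatures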
00:06:18.577 real 0m7.230s 00:06:18.577 user 0m0.772s 00:06:18.577 sys 0m4.483s 00:06:18.577 07:16:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.577 ************************************ 00:06:18.577 END TEST nvme_mount 00:06:18.577 ************************************ 00:06:18.577 07:16:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:18.577 07:16:52 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:18.577 07:16:52 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.577 07:16:52 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.577 07:16:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:18.577 ************************************ 00:06:18.577 START TEST dm_mount 00:06:18.577 ************************************ 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:18.577 07:16:52 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:19.526 Creating new GPT entries in memory. 00:06:19.526 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:19.526 other utilities. 00:06:19.526 07:16:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:19.526 07:16:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:19.526 07:16:53 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:06:19.526 07:16:53 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:19.526 07:16:53 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:20.460 Creating new GPT entries in memory. 00:06:20.460 The operation has completed successfully. 00:06:20.460 07:16:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:20.460 07:16:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:20.460 07:16:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:20.460 07:16:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:20.460 07:16:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:06:21.393 The operation has completed successfully. 00:06:21.393 07:16:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:21.393 07:16:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:21.393 07:16:55 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 117382 00:06:21.393 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:21.393 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:21.393 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:21.393 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:21.651 
07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:10.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:21.651 07:16:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:21.910 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:21.910 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:21.910 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:21.910 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.910 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:21.910 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.910 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:21.910 07:16:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.813 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:23.813 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:06:23.813 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:23.813 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:23.813 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:23.813 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:23.813 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:10.0 
holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:23.813 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:06:23.813 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:23.813 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:23.813 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:23.813 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:23.813 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:23.813 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:23.813 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.813 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:06:23.813 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:23.813 07:16:57 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:23.813 07:16:57 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:24.071 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:24.071 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:24.071 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:24.071 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.071 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:24.071 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.330 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:06:24.330 07:16:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:26.231 07:16:59 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:26.231 07:16:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:26.231 07:16:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:26.231 07:16:59 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:26.231 07:16:59 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:26.231 07:16:59 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:26.231 07:16:59 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:26.231 07:16:59 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:26.231 07:16:59 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:26.231 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:26.231 07:16:59 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:26.231 07:16:59 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:26.231 00:06:26.231 real 0m7.691s 
00:06:26.231 user 0m0.553s 00:06:26.231 sys 0m4.067s 00:06:26.231 07:16:59 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.231 07:16:59 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:26.231 ************************************ 00:06:26.231 END TEST dm_mount 00:06:26.231 ************************************ 00:06:26.231 07:16:59 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:26.231 07:16:59 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:26.231 07:16:59 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:26.231 07:16:59 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:26.231 07:16:59 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:26.231 07:16:59 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:26.231 07:16:59 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:26.231 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:26.231 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:26.231 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:26.231 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:26.231 07:16:59 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:26.231 07:16:59 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:26.231 07:16:59 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:26.231 07:16:59 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:26.231 07:16:59 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:26.231 07:16:59 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:26.231 07:16:59 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:26.231 00:06:26.231 real 0m15.924s 00:06:26.231 user 0m1.755s 00:06:26.231 sys 0m9.131s 00:06:26.231 07:16:59 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.231 07:16:59 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:26.231 ************************************ 00:06:26.231 END TEST devices 00:06:26.231 ************************************ 00:06:26.231 00:06:26.231 real 0m33.721s 00:06:26.231 user 0m7.018s 00:06:26.231 sys 0m22.198s 00:06:26.231 07:17:00 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.231 07:17:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:26.231 ************************************ 00:06:26.231 END TEST setup.sh 00:06:26.231 ************************************ 00:06:26.231 07:17:00 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:26.799 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:26.799 Hugepages 00:06:26.799 node hugesize free / total 00:06:26.799 node0 1048576kB 0 / 0 00:06:26.799 node0 2048kB 2048 / 2048 00:06:26.799 00:06:26.799 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:27.058 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:27.058 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:27.058 07:17:00 -- spdk/autotest.sh@130 -- # uname -s 00:06:27.058 07:17:00 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 
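The dm_mount test that ended above follows the same shape with device-mapper layered in: two partitions, one linear dm target spanning both, and holder links in sysfs proving the stacking. A condensed sketch, with the table geometry taken from the sgdisk bounds in the trace; the exact table the test builds may differ.

p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2
len=262144   # sectors per partition (2048..264191 and 264192..526335)

# linear concatenation: first half of the dm device maps to p1, second to p2
dmsetup create nvme_dm_test << TABLE
0 $len linear $p1 0
$len $len linear $p2 0
TABLE

dm=$(readlink -f /dev/mapper/nvme_dm_test)   # e.g. /dev/dm-0
dm=${dm##*/}
# each backing partition now lists the dm device as a holder, as verify checks
[[ -e /sys/class/block/${p1##*/}/holders/$dm ]]
[[ -e /sys/class/block/${p2##*/}/holders/$dm ]]

dmsetup remove --force nvme_dm_test   # teardown, as cleanup_dm traces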
00:06:27.058 07:17:00 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:27.058 07:17:00 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:27.624 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:27.624 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:29.521 07:17:03 -- common/autotest_common.sh@1528 -- # sleep 1 00:06:30.455 07:17:04 -- common/autotest_common.sh@1529 -- # bdfs=() 00:06:30.455 07:17:04 -- common/autotest_common.sh@1529 -- # local bdfs 00:06:30.455 07:17:04 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:06:30.455 07:17:04 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:06:30.455 07:17:04 -- common/autotest_common.sh@1509 -- # bdfs=() 00:06:30.455 07:17:04 -- common/autotest_common.sh@1509 -- # local bdfs 00:06:30.455 07:17:04 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:30.455 07:17:04 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:30.455 07:17:04 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:06:30.714 07:17:04 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:06:30.714 07:17:04 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:06:30.714 07:17:04 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:30.975 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:30.975 Waiting for block devices as requested 00:06:30.975 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:31.234 07:17:04 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:31.234 07:17:04 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:31.234 07:17:04 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:06:31.234 07:17:04 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:06:31.234 07:17:04 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:06:31.234 07:17:04 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]] 00:06:31.234 07:17:04 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:06:31.234 07:17:04 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:06:31.234 07:17:04 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:06:31.234 07:17:04 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:06:31.234 07:17:04 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:06:31.234 07:17:04 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:31.234 07:17:04 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:31.234 07:17:04 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:31.234 07:17:04 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:31.234 07:17:04 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:31.234 07:17:04 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:06:31.234 07:17:04 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:31.234 07:17:04 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:31.234 07:17:04 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:31.234 07:17:04 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:31.234 07:17:04 -- 
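The namespace-revert step traced here gates itself on controller capabilities: it pulls the OACS word out of nvme id-ctrl (grep oacs | cut -d: -f2 yields ' 0x12a') and tests bit 3, the namespace-management bit, before checking unvmcap. A sketch of that probe, assuming nvme-cli is installed and a controller sits at /dev/nvme0; the trace continues below.

ctrlr=/dev/nvme0   # assumed controller character device

# "oacs : 0x12a" -> keep only the value after the colon
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)

# bit 3 (0x8) of OACS advertises namespace management; 0x12a & 0x8 = 8
oacs_ns_manage=$((oacs & 0x8))
if ((oacs_ns_manage != 0)); then
    echo "$ctrlr supports namespace management and attachment"
fi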
common/autotest_common.sh@1553 -- # continue 00:06:31.234 07:17:04 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:31.234 07:17:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:31.234 07:17:04 -- common/autotest_common.sh@10 -- # set +x 00:06:31.234 07:17:05 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:31.234 07:17:05 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:31.234 07:17:05 -- common/autotest_common.sh@10 -- # set +x 00:06:31.234 07:17:05 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:31.801 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:31.801 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:33.709 07:17:07 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:33.709 07:17:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:33.709 07:17:07 -- common/autotest_common.sh@10 -- # set +x 00:06:33.709 07:17:07 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:33.709 07:17:07 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:06:33.709 07:17:07 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:06:33.709 07:17:07 -- common/autotest_common.sh@1573 -- # bdfs=() 00:06:33.709 07:17:07 -- common/autotest_common.sh@1573 -- # local bdfs 00:06:33.709 07:17:07 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:06:33.709 07:17:07 -- common/autotest_common.sh@1509 -- # bdfs=() 00:06:33.709 07:17:07 -- common/autotest_common.sh@1509 -- # local bdfs 00:06:33.709 07:17:07 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:33.709 07:17:07 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:33.709 07:17:07 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:06:33.709 07:17:07 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:06:33.709 07:17:07 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:06:33.709 07:17:07 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:33.709 07:17:07 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:33.709 07:17:07 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:33.709 07:17:07 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:33.709 07:17:07 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:06:33.709 07:17:07 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:06:33.709 07:17:07 -- common/autotest_common.sh@1589 -- # return 0 00:06:33.709 07:17:07 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:06:33.709 07:17:07 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:33.709 07:17:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:33.709 07:17:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:33.709 07:17:07 -- common/autotest_common.sh@10 -- # set +x 00:06:33.709 ************************************ 00:06:33.709 START TEST unittest 00:06:33.709 ************************************ 00:06:33.709 07:17:07 unittest -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:33.709 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:33.709 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:06:33.709 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:06:33.709 +++ dirname 
/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:33.709 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 00:06:33.709 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:33.709 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:33.709 ++ rpc_py=rpc_cmd 00:06:33.709 ++ set -e 00:06:33.709 ++ shopt -s nullglob 00:06:33.709 ++ shopt -s extglob 00:06:33.709 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:33.709 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:33.709 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:33.709 +++ CONFIG_WPDK_DIR= 00:06:33.709 +++ CONFIG_ASAN=y 00:06:33.709 +++ CONFIG_VBDEV_COMPRESS=n 00:06:33.709 +++ CONFIG_HAVE_EXECINFO_H=y 00:06:33.709 +++ CONFIG_USDT=n 00:06:33.709 +++ CONFIG_CUSTOMOCF=n 00:06:33.709 +++ CONFIG_PREFIX=/usr/local 00:06:33.709 +++ CONFIG_RBD=n 00:06:33.709 +++ CONFIG_LIBDIR= 00:06:33.709 +++ CONFIG_IDXD=y 00:06:33.709 +++ CONFIG_NVME_CUSE=y 00:06:33.709 +++ CONFIG_SMA=n 00:06:33.709 +++ CONFIG_VTUNE=n 00:06:33.709 +++ CONFIG_TSAN=n 00:06:33.709 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:33.709 +++ CONFIG_VFIO_USER_DIR= 00:06:33.709 +++ CONFIG_PGO_CAPTURE=n 00:06:33.709 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:33.709 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:33.709 +++ CONFIG_LTO=n 00:06:33.709 +++ CONFIG_ISCSI_INITIATOR=y 00:06:33.709 +++ CONFIG_CET=n 00:06:33.709 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:33.709 +++ CONFIG_OCF_PATH= 00:06:33.709 +++ CONFIG_RDMA_SET_TOS=y 00:06:33.709 +++ CONFIG_HAVE_ARC4RANDOM=n 00:06:33.709 +++ CONFIG_HAVE_LIBARCHIVE=n 00:06:33.709 +++ CONFIG_UBLK=n 00:06:33.709 +++ CONFIG_ISAL_CRYPTO=y 00:06:33.709 +++ CONFIG_OPENSSL_PATH= 00:06:33.709 +++ CONFIG_OCF=n 00:06:33.709 +++ CONFIG_FUSE=n 00:06:33.709 +++ CONFIG_VTUNE_DIR= 00:06:33.709 +++ CONFIG_FUZZER_LIB= 00:06:33.709 +++ CONFIG_FUZZER=n 00:06:33.709 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:06:33.709 +++ CONFIG_CRYPTO=n 00:06:33.709 +++ CONFIG_PGO_USE=n 00:06:33.709 +++ CONFIG_VHOST=y 00:06:33.709 +++ CONFIG_DAOS=n 00:06:33.709 +++ CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:06:33.709 +++ CONFIG_DAOS_DIR= 00:06:33.709 +++ CONFIG_UNIT_TESTS=y 00:06:33.709 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:33.709 +++ CONFIG_VIRTIO=y 00:06:33.709 +++ CONFIG_DPDK_UADK=n 00:06:33.709 +++ CONFIG_COVERAGE=y 00:06:33.709 +++ CONFIG_RDMA=y 00:06:33.709 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:33.709 +++ CONFIG_URING_PATH= 00:06:33.709 +++ CONFIG_XNVME=n 00:06:33.709 +++ CONFIG_VFIO_USER=n 00:06:33.709 +++ CONFIG_ARCH=native 00:06:33.709 +++ CONFIG_HAVE_EVP_MAC=y 00:06:33.709 +++ CONFIG_URING_ZNS=n 00:06:33.709 +++ CONFIG_WERROR=y 00:06:33.709 +++ CONFIG_HAVE_LIBBSD=n 00:06:33.709 +++ CONFIG_UBSAN=y 00:06:33.709 +++ CONFIG_IPSEC_MB_DIR= 00:06:33.709 +++ CONFIG_GOLANG=n 00:06:33.709 +++ CONFIG_ISAL=y 00:06:33.709 +++ CONFIG_IDXD_KERNEL=n 00:06:33.709 +++ CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:33.709 +++ CONFIG_RDMA_PROV=verbs 00:06:33.709 +++ CONFIG_APPS=y 00:06:33.709 +++ CONFIG_SHARED=n 00:06:33.709 +++ CONFIG_HAVE_KEYUTILS=y 00:06:33.709 +++ CONFIG_FC_PATH= 00:06:33.709 +++ CONFIG_DPDK_PKG_CONFIG=n 00:06:33.709 +++ CONFIG_FC=n 00:06:33.709 +++ CONFIG_AVAHI=n 00:06:33.709 +++ CONFIG_FIO_PLUGIN=y 00:06:33.709 +++ CONFIG_RAID5F=y 00:06:33.709 +++ CONFIG_EXAMPLES=y 00:06:33.709 +++ CONFIG_TESTS=y 00:06:33.709 +++ CONFIG_CRYPTO_MLX5=n 00:06:33.709 +++ CONFIG_MAX_LCORES= 00:06:33.709 +++ CONFIG_IPSEC_MB=n 
00:06:33.709 +++ CONFIG_PGO_DIR= 00:06:33.709 +++ CONFIG_DEBUG=y 00:06:33.709 +++ CONFIG_DPDK_COMPRESSDEV=n 00:06:33.709 +++ CONFIG_CROSS_PREFIX= 00:06:33.709 +++ CONFIG_URING=n 00:06:33.709 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:33.709 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:33.709 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:33.709 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:33.709 +++ _root=/home/vagrant/spdk_repo/spdk 00:06:33.709 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:33.709 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:33.709 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:33.709 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:33.709 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:33.709 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:33.709 +++ VHOST_APP=("$_app_dir/vhost") 00:06:33.709 +++ DD_APP=("$_app_dir/spdk_dd") 00:06:33.709 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:06:33.709 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:33.709 +++ [[ #ifndef SPDK_CONFIG_H 00:06:33.709 #define SPDK_CONFIG_H 00:06:33.709 #define SPDK_CONFIG_APPS 1 00:06:33.709 #define SPDK_CONFIG_ARCH native 00:06:33.709 #define SPDK_CONFIG_ASAN 1 00:06:33.709 #undef SPDK_CONFIG_AVAHI 00:06:33.709 #undef SPDK_CONFIG_CET 00:06:33.709 #define SPDK_CONFIG_COVERAGE 1 00:06:33.709 #define SPDK_CONFIG_CROSS_PREFIX 00:06:33.709 #undef SPDK_CONFIG_CRYPTO 00:06:33.709 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:33.709 #undef SPDK_CONFIG_CUSTOMOCF 00:06:33.709 #undef SPDK_CONFIG_DAOS 00:06:33.709 #define SPDK_CONFIG_DAOS_DIR 00:06:33.709 #define SPDK_CONFIG_DEBUG 1 00:06:33.709 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:33.709 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:06:33.709 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:06:33.709 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:06:33.709 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:33.709 #undef SPDK_CONFIG_DPDK_UADK 00:06:33.709 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:33.709 #define SPDK_CONFIG_EXAMPLES 1 00:06:33.709 #undef SPDK_CONFIG_FC 00:06:33.709 #define SPDK_CONFIG_FC_PATH 00:06:33.709 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:33.709 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:33.709 #undef SPDK_CONFIG_FUSE 00:06:33.709 #undef SPDK_CONFIG_FUZZER 00:06:33.709 #define SPDK_CONFIG_FUZZER_LIB 00:06:33.709 #undef SPDK_CONFIG_GOLANG 00:06:33.709 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:06:33.709 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:33.709 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:33.709 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:33.709 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:33.709 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:33.709 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:33.709 #define SPDK_CONFIG_IDXD 1 00:06:33.709 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:33.709 #undef SPDK_CONFIG_IPSEC_MB 00:06:33.709 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:33.709 #define SPDK_CONFIG_ISAL 1 00:06:33.709 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:33.709 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:33.709 #define SPDK_CONFIG_LIBDIR 00:06:33.709 #undef SPDK_CONFIG_LTO 00:06:33.709 #define SPDK_CONFIG_MAX_LCORES 00:06:33.709 #define SPDK_CONFIG_NVME_CUSE 1 00:06:33.709 #undef SPDK_CONFIG_OCF 00:06:33.709 #define SPDK_CONFIG_OCF_PATH 00:06:33.709 #define SPDK_CONFIG_OPENSSL_PATH 
00:06:33.709 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:33.709 #define SPDK_CONFIG_PGO_DIR 00:06:33.709 #undef SPDK_CONFIG_PGO_USE 00:06:33.709 #define SPDK_CONFIG_PREFIX /usr/local 00:06:33.709 #define SPDK_CONFIG_RAID5F 1 00:06:33.709 #undef SPDK_CONFIG_RBD 00:06:33.709 #define SPDK_CONFIG_RDMA 1 00:06:33.710 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:33.710 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:33.710 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:33.710 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:33.710 #undef SPDK_CONFIG_SHARED 00:06:33.710 #undef SPDK_CONFIG_SMA 00:06:33.710 #define SPDK_CONFIG_TESTS 1 00:06:33.710 #undef SPDK_CONFIG_TSAN 00:06:33.710 #undef SPDK_CONFIG_UBLK 00:06:33.710 #define SPDK_CONFIG_UBSAN 1 00:06:33.710 #define SPDK_CONFIG_UNIT_TESTS 1 00:06:33.710 #undef SPDK_CONFIG_URING 00:06:33.710 #define SPDK_CONFIG_URING_PATH 00:06:33.710 #undef SPDK_CONFIG_URING_ZNS 00:06:33.710 #undef SPDK_CONFIG_USDT 00:06:33.710 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:33.710 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:33.710 #undef SPDK_CONFIG_VFIO_USER 00:06:33.710 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:33.710 #define SPDK_CONFIG_VHOST 1 00:06:33.710 #define SPDK_CONFIG_VIRTIO 1 00:06:33.710 #undef SPDK_CONFIG_VTUNE 00:06:33.710 #define SPDK_CONFIG_VTUNE_DIR 00:06:33.710 #define SPDK_CONFIG_WERROR 1 00:06:33.710 #define SPDK_CONFIG_WPDK_DIR 00:06:33.710 #undef SPDK_CONFIG_XNVME 00:06:33.710 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:33.710 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:33.710 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:33.710 +++ [[ -e /bin/wpdk_common.sh ]] 00:06:33.710 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.710 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.710 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:33.710 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:33.710 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:33.710 ++++ export PATH 00:06:33.710 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:33.710 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:33.710 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:33.710 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:33.710 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:33.710 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 
00:06:33.710 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:33.710 +++ TEST_TAG=N/A 00:06:33.710 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:33.710 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:06:33.710 ++++ uname -s 00:06:33.710 +++ PM_OS=Linux 00:06:33.710 +++ MONITOR_RESOURCES_SUDO=() 00:06:33.710 +++ declare -A MONITOR_RESOURCES_SUDO 00:06:33.710 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:33.710 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:33.710 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:33.710 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:33.710 +++ SUDO[0]= 00:06:33.710 +++ SUDO[1]='sudo -E' 00:06:33.710 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:33.710 +++ [[ Linux == FreeBSD ]] 00:06:33.710 +++ [[ Linux == Linux ]] 00:06:33.710 +++ [[ QEMU != QEMU ]] 00:06:33.710 +++ [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:06:33.710 ++ : 1 00:06:33.710 ++ export RUN_NIGHTLY 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_RUN_VALGRIND 00:06:33.710 ++ : 1 00:06:33.710 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:06:33.710 ++ : 1 00:06:33.710 ++ export SPDK_TEST_UNITTEST 00:06:33.710 ++ : 00:06:33.710 ++ export SPDK_TEST_AUTOBUILD 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_RELEASE_BUILD 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_ISAL 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_ISCSI 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_ISCSI_INITIATOR 00:06:33.710 ++ : 1 00:06:33.710 ++ export SPDK_TEST_NVME 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_NVME_PMR 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_NVME_BP 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_NVME_CLI 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_NVME_CUSE 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_NVME_FDP 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_NVMF 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_VFIOUSER 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_VFIOUSER_QEMU 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_FUZZER 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_FUZZER_SHORT 00:06:33.710 ++ : rdma 00:06:33.710 ++ export SPDK_TEST_NVMF_TRANSPORT 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_RBD 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_VHOST 00:06:33.710 ++ : 1 00:06:33.710 ++ export SPDK_TEST_BLOCKDEV 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_IOAT 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_BLOBFS 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_VHOST_INIT 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_LVOL 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_VBDEV_COMPRESS 00:06:33.710 ++ : 1 00:06:33.710 ++ export SPDK_RUN_ASAN 00:06:33.710 ++ : 1 00:06:33.710 ++ export SPDK_RUN_UBSAN 00:06:33.710 ++ : /home/vagrant/spdk_repo/dpdk/build 00:06:33.710 ++ export SPDK_RUN_EXTERNAL_DPDK 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_RUN_NON_ROOT 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_CRYPTO 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_FTL 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_OCF 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_VMD 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_OPAL 00:06:33.710 ++ : v22.11.4 00:06:33.710 ++ export SPDK_TEST_NATIVE_DPDK 00:06:33.710 ++ : true 00:06:33.710 ++ export SPDK_AUTOTEST_X 00:06:33.710 ++ : 1 00:06:33.710 ++ export 
SPDK_TEST_RAID5 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_URING 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_USDT 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_USE_IGB_UIO 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_SCHEDULER 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_SCANBUILD 00:06:33.710 ++ : 00:06:33.710 ++ export SPDK_TEST_NVMF_NICS 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_SMA 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_DAOS 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_XNVME 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_ACCEL_DSA 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_ACCEL_IAA 00:06:33.710 ++ : 00:06:33.710 ++ export SPDK_TEST_FUZZER_TARGET 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_TEST_NVMF_MDNS 00:06:33.710 ++ : 0 00:06:33.710 ++ export SPDK_JSONRPC_GO_CLIENT 00:06:33.710 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:33.710 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:33.710 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:33.710 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:33.710 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:33.710 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:33.710 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:33.710 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:33.710 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:33.710 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:06:33.710 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:33.710 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:33.710 ++ export PYTHONDONTWRITEBYTECODE=1 00:06:33.710 ++ PYTHONDONTWRITEBYTECODE=1 00:06:33.710 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:33.710 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:33.710 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:33.710 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:33.710 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:06:33.710 ++ rm -rf /var/tmp/asan_suppression_file 00:06:33.710 ++ cat 00:06:33.710 ++ echo leak:libfuse3.so 00:06:33.710 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:33.710 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 
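The sanitizer setup traced just above reduces to a small, reusable pattern: write the known leak suppressions to a scratch file, then hand LSAN that file alongside the ASAN/UBSAN option strings. A condensed sketch of the pattern (not the verbatim autotest_common.sh helper):

supp=/var/tmp/asan_suppression_file
rm -rf "$supp"
echo 'leak:libfuse3.so' > "$supp"        # suppress known fuse3 leak reports, as in the trace
export LSAN_OPTIONS="suppressions=$supp"
export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'

With abort_on_error=1 and disable_coredump=0, any unsuppressed sanitizer report aborts the test binary and leaves a core for triage rather than letting the run continue.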
00:06:33.710 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:33.710 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:33.710 ++ '[' -z /var/spdk/dependencies ']' 00:06:33.710 ++ export DEPENDENCY_DIR 00:06:33.710 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:33.710 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:33.710 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:33.710 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:33.710 ++ export QEMU_BIN= 00:06:33.710 ++ QEMU_BIN= 00:06:33.710 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:06:33.710 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:06:33.710 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:33.710 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:33.710 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:33.710 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:33.710 ++ '[' 0 -eq 0 ']' 00:06:33.711 ++ export valgrind= 00:06:33.711 ++ valgrind= 00:06:33.711 +++ uname -s 00:06:33.711 ++ '[' Linux = Linux ']' 00:06:33.711 ++ HUGEMEM=4096 00:06:33.711 ++ export CLEAR_HUGE=yes 00:06:33.711 ++ CLEAR_HUGE=yes 00:06:33.711 ++ [[ 0 -eq 1 ]] 00:06:33.711 ++ [[ 0 -eq 1 ]] 00:06:33.711 ++ MAKE=make 00:06:33.711 +++ nproc 00:06:33.711 ++ MAKEFLAGS=-j10 00:06:33.711 ++ export HUGEMEM=4096 00:06:33.711 ++ HUGEMEM=4096 00:06:33.711 ++ NO_HUGE=() 00:06:33.711 ++ TEST_MODE= 00:06:33.711 ++ [[ -z '' ]] 00:06:33.711 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:06:33.711 ++ exec 00:06:33.711 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:06:33.711 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:06:33.711 ++ set_test_storage 2147483648 00:06:33.711 ++ [[ -v testdir ]] 00:06:33.711 ++ local requested_size=2147483648 00:06:33.711 ++ local mount target_dir 00:06:33.711 ++ local -A mounts fss sizes avails uses 00:06:33.711 ++ local source fs size avail mount use 00:06:33.711 ++ local storage_fallback storage_candidates 00:06:33.711 +++ mktemp -udt spdk.XXXXXX 00:06:33.711 ++ storage_fallback=/tmp/spdk.YhfTBt 00:06:33.711 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:33.711 ++ [[ -n '' ]] 00:06:33.711 ++ [[ -n '' ]] 00:06:33.711 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.YhfTBt/tests/unit /tmp/spdk.YhfTBt 00:06:33.711 ++ requested_size=2214592512 00:06:33.711 ++ read -r source fs size use avail _ mount 00:06:33.711 +++ df -T 00:06:33.711 +++ grep -v Filesystem 00:06:33.969 ++ mounts["$mount"]=tmpfs 00:06:33.969 ++ fss["$mount"]=tmpfs 00:06:33.970 ++ avails["$mount"]=1252601856 00:06:33.970 ++ sizes["$mount"]=1253683200 00:06:33.970 ++ uses["$mount"]=1081344 00:06:33.970 ++ read -r source fs size use avail _ mount 00:06:33.970 ++ mounts["$mount"]=/dev/vda1 00:06:33.970 ++ fss["$mount"]=ext4 00:06:33.970 ++ avails["$mount"]=9204109312 00:06:33.970 ++ sizes["$mount"]=20616794112 00:06:33.970 ++ uses["$mount"]=11395907584 00:06:33.970 ++ read -r source fs size use avail _ mount 00:06:33.970 ++ mounts["$mount"]=tmpfs 00:06:33.970 ++ fss["$mount"]=tmpfs 00:06:33.970 ++ avails["$mount"]=6268399616 00:06:33.970 ++ sizes["$mount"]=6268399616 00:06:33.970 ++ uses["$mount"]=0 00:06:33.970 ++ read -r source fs size use avail _ mount 00:06:33.970 ++ mounts["$mount"]=tmpfs 
00:06:33.970 ++ fss["$mount"]=tmpfs 00:06:33.970 ++ avails["$mount"]=5242880 00:06:33.970 ++ sizes["$mount"]=5242880 00:06:33.970 ++ uses["$mount"]=0 00:06:33.970 ++ read -r source fs size use avail _ mount 00:06:33.970 ++ mounts["$mount"]=/dev/vda15 00:06:33.970 ++ fss["$mount"]=vfat 00:06:33.970 ++ avails["$mount"]=103061504 00:06:33.970 ++ sizes["$mount"]=109395968 00:06:33.970 ++ uses["$mount"]=6334464 00:06:33.970 ++ read -r source fs size use avail _ mount 00:06:33.970 ++ mounts["$mount"]=tmpfs 00:06:33.970 ++ fss["$mount"]=tmpfs 00:06:33.970 ++ avails["$mount"]=1253675008 00:06:33.970 ++ sizes["$mount"]=1253679104 00:06:33.970 ++ uses["$mount"]=4096 00:06:33.970 ++ read -r source fs size use avail _ mount 00:06:33.970 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:06:33.970 ++ fss["$mount"]=fuse.sshfs 00:06:33.970 ++ avails["$mount"]=95030005760 00:06:33.970 ++ sizes["$mount"]=105088212992 00:06:33.970 ++ uses["$mount"]=4672774144 00:06:33.970 ++ read -r source fs size use avail _ mount 00:06:33.970 ++ printf '* Looking for test storage...\n' 00:06:33.970 * Looking for test storage... 00:06:33.970 ++ local target_space new_size 00:06:33.970 ++ for target_dir in "${storage_candidates[@]}" 00:06:33.970 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:06:33.970 +++ awk '$1 !~ /Filesystem/{print $6}' 00:06:33.970 ++ mount=/ 00:06:33.970 ++ target_space=9204109312 00:06:33.970 ++ (( target_space == 0 || target_space < requested_size )) 00:06:33.970 ++ (( target_space >= requested_size )) 00:06:33.970 ++ [[ ext4 == tmpfs ]] 00:06:33.970 ++ [[ ext4 == ramfs ]] 00:06:33.970 ++ [[ / == / ]] 00:06:33.970 ++ new_size=13610500096 00:06:33.970 ++ (( new_size * 100 / sizes[/] > 95 )) 00:06:33.970 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:06:33.970 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:06:33.970 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:06:33.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:06:33.970 ++ return 0 00:06:33.970 ++ set -o errtrace 00:06:33.970 ++ shopt -s extdebug 00:06:33.970 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:06:33.970 ++ PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:33.970 07:17:07 unittest -- common/autotest_common.sh@1683 -- # true 00:06:33.970 07:17:07 unittest -- common/autotest_common.sh@1685 -- # xtrace_fd 00:06:33.970 07:17:07 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:06:33.970 07:17:07 unittest -- common/autotest_common.sh@29 -- # exec 00:06:33.970 07:17:07 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:33.970 07:17:07 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:33.970 07:17:07 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:33.970 07:17:07 unittest -- common/autotest_common.sh@18 -- # set -x 00:06:33.970 07:17:07 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:06:33.970 07:17:07 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:06:33.970 07:17:07 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:06:33.970 07:17:07 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:06:33.970 07:17:07 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:06:33.970 07:17:07 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=gcc 00:06:33.970 07:17:07 unittest -- unit/unittest.sh@181 -- # hash lcov 00:06:33.970 07:17:07 unittest -- unit/unittest.sh@181 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:33.970 07:17:07 unittest -- unit/unittest.sh@181 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:33.970 07:17:07 unittest -- unit/unittest.sh@182 -- # cov_avail=yes 00:06:33.970 07:17:07 unittest -- unit/unittest.sh@186 -- # '[' yes = yes ']' 00:06:33.970 07:17:07 unittest -- unit/unittest.sh@188 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:06:33.970 07:17:07 unittest -- unit/unittest.sh@191 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:33.970 07:17:07 unittest -- unit/unittest.sh@193 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:33.970 07:17:07 unittest -- unit/unittest.sh@201 -- # export 'LCOV_OPTS= 00:06:33.970 --rc lcov_branch_coverage=1 00:06:33.970 --rc lcov_function_coverage=1 00:06:33.970 --rc genhtml_branch_coverage=1 00:06:33.970 --rc genhtml_function_coverage=1 00:06:33.970 --rc genhtml_legend=1 00:06:33.970 --rc geninfo_all_blocks=1 00:06:33.970 ' 00:06:33.970 07:17:07 unittest -- unit/unittest.sh@201 -- # LCOV_OPTS=' 00:06:33.970 --rc lcov_branch_coverage=1 00:06:33.970 --rc lcov_function_coverage=1 00:06:33.970 --rc genhtml_branch_coverage=1 00:06:33.970 --rc genhtml_function_coverage=1 00:06:33.970 --rc genhtml_legend=1 00:06:33.970 --rc geninfo_all_blocks=1 00:06:33.970 ' 00:06:33.970 07:17:07 unittest -- unit/unittest.sh@202 -- # export 'LCOV=lcov 00:06:33.970 --rc lcov_branch_coverage=1 00:06:33.970 --rc lcov_function_coverage=1 00:06:33.970 --rc genhtml_branch_coverage=1 00:06:33.970 --rc genhtml_function_coverage=1 00:06:33.970 --rc genhtml_legend=1 00:06:33.970 --rc geninfo_all_blocks=1 00:06:33.970 --no-external' 00:06:33.970 07:17:07 unittest -- unit/unittest.sh@202 -- # LCOV='lcov 00:06:33.970 --rc lcov_branch_coverage=1 00:06:33.970 --rc lcov_function_coverage=1 00:06:33.970 --rc genhtml_branch_coverage=1 00:06:33.970 --rc genhtml_function_coverage=1 00:06:33.970 --rc genhtml_legend=1 00:06:33.970 --rc geninfo_all_blocks=1 00:06:33.970 --no-external' 00:06:33.970 07:17:07 unittest -- unit/unittest.sh@204 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:06:40.567 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:40.567 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:19.300 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:07:19.300 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:07:19.300 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:07:19.300 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:07:19.300 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:07:19.300 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:07:19.300 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:07:19.300 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:07:19.300 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:07:19.300 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:07:19.300 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:07:19.300 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:07:19.300 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:07:19.300 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:07:19.300 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:07:19.300 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:07:19.300 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:07:19.300 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:07:19.300 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:07:19.300 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:07:19.301 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:07:19.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:07:19.301 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:07:19.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:07:19.301 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:07:19.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:07:19.301 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:07:19.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:07:19.301 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:07:19.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:07:19.301 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:07:19.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:07:19.301 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:07:19.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:07:19.301 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:07:19.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:07:19.301 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:07:19.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:07:19.301 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:07:19.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:07:19.301 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:07:19.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:07:19.301 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:07:19.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:07:19.301 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:07:19.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:07:19.301 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:07:19.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:07:19.301 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:07:19.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:07:19.301 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:07:19.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:07:19.301 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:07:19.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:07:19.301 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:07:19.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:07:19.301 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:07:19.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:07:19.301 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:07:19.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:07:19.301 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:07:19.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:07:19.301 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:07:19.302 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:07:19.302 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:07:19.302 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:07:19.302 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:07:19.302 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:07:19.302 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:07:19.302 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:07:19.302 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:07:19.302 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:07:19.302 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:07:19.302 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:07:19.302 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:07:19.302 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:07:19.302 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:07:19.302 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:07:19.302 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:07:19.302 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:07:19.302 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:07:19.302 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:07:19.302 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:07:19.302 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:07:19.302 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:07:19.303 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:07:19.303 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:07:19.562 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:07:19.562 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:07:19.820 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:07:19.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:07:19.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:07:19.820 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:07:19.820 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:07:19.821 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:07:19.821 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:07:19.821 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:07:19.821 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:07:19.821 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:07:19.821 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:07:19.821 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:07:19.821 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:07:19.821 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:07:22.364 07:17:55 unittest -- unit/unittest.sh@208 -- # uname -m 00:07:22.364 07:17:55 unittest -- unit/unittest.sh@208 -- # '[' x86_64 = aarch64 ']' 00:07:22.364 07:17:55 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:07:22.364 07:17:55 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:22.365 07:17:55 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.365 07:17:55 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:22.365 ************************************ 00:07:22.365 START TEST unittest_pci_event 00:07:22.365 ************************************ 00:07:22.365 07:17:55 unittest.unittest_pci_event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:07:22.365 00:07:22.365 00:07:22.365 CUnit - A unit testing framework for C - Version 2.1-3 00:07:22.365 http://cunit.sourceforge.net/ 00:07:22.365 00:07:22.365 00:07:22.365 Suite: pci_event 00:07:22.365 Test: test_pci_parse_event ...[2024-07-12 07:17:55.768377] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:07:22.365 [2024-07-12 07:17:55.769614] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:07:22.365 passed 00:07:22.365 00:07:22.365 Run Summary: Type Total Ran Passed Failed Inactive 00:07:22.365 suites 1 1 n/a 0 0 00:07:22.365 tests 1 1 1 0 0 00:07:22.365 asserts 15 15 15 0 n/a 00:07:22.365 00:07:22.365 Elapsed time = 0.001 seconds 00:07:22.365 00:07:22.365 real 0m0.044s 00:07:22.365 user 0m0.021s 00:07:22.365 sys 0m0.019s 00:07:22.365 07:17:55 unittest.unittest_pci_event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.365 07:17:55 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:07:22.365 ************************************ 00:07:22.365 END TEST unittest_pci_event 
00:07:22.365 ************************************ 00:07:22.365 07:17:55 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:07:22.365 07:17:55 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:22.365 07:17:55 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.365 07:17:55 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:22.365 ************************************ 00:07:22.365 START TEST unittest_include 00:07:22.365 ************************************ 00:07:22.365 07:17:55 unittest.unittest_include -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:07:22.365 00:07:22.365 00:07:22.365 CUnit - A unit testing framework for C - Version 2.1-3 00:07:22.365 http://cunit.sourceforge.net/ 00:07:22.365 00:07:22.365 00:07:22.365 Suite: histogram 00:07:22.365 Test: histogram_test ...passed 00:07:22.365 Test: histogram_merge ...passed 00:07:22.365 00:07:22.365 Run Summary: Type Total Ran Passed Failed Inactive 00:07:22.365 suites 1 1 n/a 0 0 00:07:22.365 tests 2 2 2 0 0 00:07:22.365 asserts 50 50 50 0 n/a 00:07:22.365 00:07:22.365 Elapsed time = 0.006 seconds 00:07:22.365 00:07:22.365 real 0m0.046s 00:07:22.365 user 0m0.034s 00:07:22.365 sys 0m0.013s 00:07:22.365 07:17:55 unittest.unittest_include -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.365 07:17:55 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:07:22.365 ************************************ 00:07:22.365 END TEST unittest_include 00:07:22.365 ************************************ 00:07:22.365 07:17:55 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:07:22.365 07:17:55 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:22.365 07:17:55 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.365 07:17:55 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:22.365 ************************************ 00:07:22.365 START TEST unittest_bdev 00:07:22.365 ************************************ 00:07:22.365 07:17:55 unittest.unittest_bdev -- common/autotest_common.sh@1121 -- # unittest_bdev 00:07:22.365 07:17:55 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:07:22.365 00:07:22.365 00:07:22.365 CUnit - A unit testing framework for C - Version 2.1-3 00:07:22.365 http://cunit.sourceforge.net/ 00:07:22.365 00:07:22.365 00:07:22.365 Suite: bdev 00:07:22.365 Test: bytes_to_blocks_test ...passed 00:07:22.365 Test: num_blocks_test ...passed 00:07:22.365 Test: io_valid_test ...passed 00:07:22.365 Test: open_write_test ...[2024-07-12 07:17:56.111714] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:07:22.365 [2024-07-12 07:17:56.112156] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:07:22.365 [2024-07-12 07:17:56.112322] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:07:22.365 passed 00:07:22.365 Test: claim_test ...passed 00:07:22.623 Test: alias_add_del_test ...[2024-07-12 07:17:56.265249] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4580:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 
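Each START/END banner, timing block, and pass/fail verdict above comes from a run_test-style wrapper around the suite binary. A minimal sketch of that pattern, assuming a "run_test NAME CMD..." calling convention (illustrative only, not SPDK's exact autotest_common.sh helper):

run_test() {
  local name=$1; shift
  echo '************************************'
  echo "START TEST $name"
  echo '************************************'
  time "$@"                      # the suite binary, e.g. .../histogram_data.h/histogram_ut
  local rc=$?
  echo '************************************'
  echo "END TEST $name"
  echo '************************************'
  return "$rc"
}

In the log the wrapped command is the CUnit binary for each suite, whose own summary (suites/tests/asserts, elapsed time) is printed between the banners; the real 0mX.XXXs / user / sys lines come from the time builtin.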
00:07:22.623 [2024-07-12 07:17:56.265460] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4610:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:07:22.623 [2024-07-12 07:17:56.265522] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4580:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:07:22.623 passed 00:07:22.623 Test: get_device_stat_test ...passed 00:07:22.623 Test: bdev_io_types_test ...passed 00:07:22.623 Test: bdev_io_wait_test ...passed 00:07:22.623 Test: bdev_io_spans_split_test ...passed 00:07:22.623 Test: bdev_io_boundary_split_test ...passed 00:07:22.881 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-12 07:17:56.516669] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3208:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:07:22.881 passed 00:07:22.881 Test: bdev_io_mix_split_test ...passed 00:07:22.881 Test: bdev_io_split_with_io_wait ...passed 00:07:22.881 Test: bdev_io_write_unit_split_test ...[2024-07-12 07:17:56.685305] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:07:22.881 [2024-07-12 07:17:56.685435] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:07:22.881 [2024-07-12 07:17:56.685464] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:07:22.881 [2024-07-12 07:17:56.685507] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:07:22.881 passed 00:07:23.140 Test: bdev_io_alignment_with_boundary ...passed 00:07:23.140 Test: bdev_io_alignment ...passed 00:07:23.140 Test: bdev_histograms ...passed 00:07:23.140 Test: bdev_write_zeroes ...passed 00:07:23.140 Test: bdev_compare_and_write ...passed 00:07:23.398 Test: bdev_compare ...passed 00:07:23.398 Test: bdev_compare_emulated ...passed 00:07:23.398 Test: bdev_zcopy_write ...passed 00:07:23.657 Test: bdev_zcopy_read ...passed 00:07:23.657 Test: bdev_open_while_hotremove ...passed 00:07:23.657 Test: bdev_close_while_hotremove ...passed 00:07:23.657 Test: bdev_open_ext_test ...[2024-07-12 07:17:57.334334] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8141:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:07:23.657 passed 00:07:23.657 Test: bdev_open_ext_unregister ...[2024-07-12 07:17:57.334564] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8141:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:07:23.657 passed 00:07:23.657 Test: bdev_set_io_timeout ...passed 00:07:23.657 Test: bdev_set_qd_sampling ...passed 00:07:23.657 Test: lba_range_overlap ...passed 00:07:23.657 Test: lock_lba_range_check_ranges ...passed 00:07:23.915 Test: lock_lba_range_with_io_outstanding ...passed 00:07:23.915 Test: lock_lba_range_overlapped ...passed 00:07:23.915 Test: bdev_quiesce ...[2024-07-12 07:17:57.621187] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10064:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
00:07:23.915 passed 00:07:23.915 Test: bdev_io_abort ...passed 00:07:23.915 Test: bdev_unmap ...passed 00:07:24.173 Test: bdev_write_zeroes_split_test ...passed 00:07:24.173 Test: bdev_set_options_test ...passed 00:07:24.173 Test: bdev_get_memory_domains ...passed 00:07:24.173 Test: bdev_io_ext ...[2024-07-12 07:17:57.806319] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:07:24.173 passed 00:07:24.173 Test: bdev_io_ext_no_opts ...passed 00:07:24.173 Test: bdev_io_ext_invalid_opts ...passed 00:07:24.173 Test: bdev_io_ext_split ...passed 00:07:24.434 Test: bdev_io_ext_bounce_buffer ...passed 00:07:24.434 Test: bdev_register_uuid_alias ...[2024-07-12 07:17:58.088636] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 55d3c1f8-8eca-4023-aa1c-c5afd87844de already exists 00:07:24.434 [2024-07-12 07:17:58.088743] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:55d3c1f8-8eca-4023-aa1c-c5afd87844de alias for bdev bdev0 00:07:24.434 passed 00:07:24.434 Test: bdev_unregister_by_name ...[2024-07-12 07:17:58.119515] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7931:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:07:24.434 passed 00:07:24.434 Test: for_each_bdev_test ...[2024-07-12 07:17:58.119608] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7939:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:07:24.434 passed 00:07:24.434 Test: bdev_seek_test ...passed 00:07:24.434 Test: bdev_copy ...passed 00:07:24.434 Test: bdev_copy_split_test ...passed 00:07:24.434 Test: examine_locks ...passed 00:07:24.434 Test: claim_v2_rwo ...[2024-07-12 07:17:58.286997] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:24.434 [2024-07-12 07:17:58.287079] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8665:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:24.434 [2024-07-12 07:17:58.287097] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:24.434 [2024-07-12 07:17:58.287153] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:24.434 [2024-07-12 07:17:58.287175] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:24.434 passed 00:07:24.435 Test: claim_v2_rom ...[2024-07-12 07:17:58.287211] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8660:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:07:24.435 [2024-07-12 07:17:58.287337] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:24.435 [2024-07-12 07:17:58.287392] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:24.435 [2024-07-12 07:17:58.287411] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:07:24.435 [2024-07-12 07:17:58.287437] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:24.435 [2024-07-12 07:17:58.287480] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8703:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:07:24.435 [2024-07-12 07:17:58.287518] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8698:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:07:24.435 passed 00:07:24.435 Test: claim_v2_rwm ...[2024-07-12 07:17:58.287685] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8733:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:07:24.435 [2024-07-12 07:17:58.287753] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:24.435 [2024-07-12 07:17:58.287792] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:24.435 [2024-07-12 07:17:58.287818] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:24.435 [2024-07-12 07:17:58.287836] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:07:24.435 [2024-07-12 07:17:58.287862] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8753:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:07:24.435 passed 00:07:24.435 Test: claim_v2_existing_writer ...[2024-07-12 07:17:58.287902] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8733:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:07:24.435 [2024-07-12 07:17:58.288027] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8698:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:07:24.435 passed 00:07:24.435 Test: claim_v2_existing_v1 ...[2024-07-12 07:17:58.288059] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8698:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:07:24.435 [2024-07-12 07:17:58.288164] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:07:24.435 [2024-07-12 07:17:58.288191] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:07:24.435 passed 00:07:24.435 Test: claim_v1_existing_v2 ...[2024-07-12 07:17:58.288209] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:07:24.435 [2024-07-12 07:17:58.288310] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:07:24.435 passed 00:07:24.435 Test: examine_claimed ...[2024-07-12 07:17:58.288360] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type 
read_many_write_many by module bdev_ut 00:07:24.435 [2024-07-12 07:17:58.288392] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:07:24.435 [2024-07-12 07:17:58.288648] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:07:24.435 passed 00:07:24.435 00:07:24.435 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.435 suites 1 1 n/a 0 0 00:07:24.435 tests 59 59 59 0 0 00:07:24.435 asserts 4599 4599 4599 0 n/a 00:07:24.435 00:07:24.435 Elapsed time = 2.288 seconds 00:07:24.694 07:17:58 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:07:24.694 00:07:24.695 00:07:24.695 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.695 http://cunit.sourceforge.net/ 00:07:24.695 00:07:24.695 00:07:24.695 Suite: nvme 00:07:24.695 Test: test_create_ctrlr ...passed 00:07:24.695 Test: test_reset_ctrlr ...[2024-07-12 07:17:58.356642] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:24.695 passed 00:07:24.695 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:07:24.695 Test: test_failover_ctrlr ...passed 00:07:24.695 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-12 07:17:58.359869] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:24.695 [2024-07-12 07:17:58.360149] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:24.695 [2024-07-12 07:17:58.360436] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:24.695 passed 00:07:24.695 Test: test_pending_reset ...[2024-07-12 07:17:58.362306] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:24.695 [2024-07-12 07:17:58.362581] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:24.695 passed 00:07:24.695 Test: test_attach_ctrlr ...[2024-07-12 07:17:58.363938] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:07:24.695 passed 00:07:24.695 Test: test_aer_cb ...passed 00:07:24.695 Test: test_submit_nvme_cmd ...passed 00:07:24.695 Test: test_add_remove_trid ...passed 00:07:24.695 Test: test_abort ...[2024-07-12 07:17:58.367731] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7453:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:07:24.695 passed 00:07:24.695 Test: test_get_io_qpair ...passed 00:07:24.695 Test: test_bdev_unregister ...passed 00:07:24.695 Test: test_compare_ns ...passed 00:07:24.695 Test: test_init_ana_log_page ...passed 00:07:24.695 Test: test_get_memory_domains ...passed 00:07:24.695 Test: test_reconnect_qpair ...passed 00:07:24.695 Test: test_create_bdev_ctrlr ...[2024-07-12 07:17:58.370752] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:07:24.695 [2024-07-12 07:17:58.371318] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5379:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:07:24.695 passed 00:07:24.695 Test: test_add_multi_ns_to_bdev ...[2024-07-12 07:17:58.372842] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4570:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:07:24.695 passed 00:07:24.695 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:07:24.695 Test: test_admin_path ...passed 00:07:24.695 Test: test_reset_bdev_ctrlr ...passed 00:07:24.695 Test: test_find_io_path ...passed 00:07:24.695 Test: test_retry_io_if_ana_state_is_updating ...passed 00:07:24.695 Test: test_retry_io_for_io_path_error ...passed 00:07:24.695 Test: test_retry_io_count ...passed 00:07:24.695 Test: test_concurrent_read_ana_log_page ...passed 00:07:24.695 Test: test_retry_io_for_ana_error ...passed 00:07:24.695 Test: test_check_io_error_resiliency_params ...passed 00:07:24.695 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:07:24.695 Test: test_reconnect_ctrlr ...[2024-07-12 07:17:58.380538] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6073:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:07:24.695 [2024-07-12 07:17:58.380623] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6077:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:07:24.695 [2024-07-12 07:17:58.380653] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6086:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:07:24.695 [2024-07-12 07:17:58.380688] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6089:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:07:24.695 [2024-07-12 07:17:58.380724] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6101:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:07:24.695 [2024-07-12 07:17:58.380771] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6101:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:07:24.695 [2024-07-12 07:17:58.380807] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6081:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:07:24.695 [2024-07-12 07:17:58.380871] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6096:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:07:24.695 [2024-07-12 07:17:58.380906] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6093:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:07:24.695 [2024-07-12 07:17:58.381840] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:24.695 [2024-07-12 07:17:58.381977] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:07:24.695 passed 00:07:24.695 Test: test_retry_failover_ctrlr ...[2024-07-12 07:17:58.382294] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:24.695 [2024-07-12 07:17:58.382449] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:24.695 [2024-07-12 07:17:58.382645] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:24.695 passed 00:07:24.695 Test: test_fail_path ...[2024-07-12 07:17:58.383008] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:24.695 [2024-07-12 07:17:58.383594] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:24.695 [2024-07-12 07:17:58.383754] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:24.695 [2024-07-12 07:17:58.383873] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:24.695 [2024-07-12 07:17:58.383998] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:24.695 [2024-07-12 07:17:58.384117] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:24.695 passed 00:07:24.695 Test: test_nvme_ns_cmp ...passed 00:07:24.695 Test: test_ana_transition ...passed 00:07:24.695 Test: test_set_preferred_path ...passed 00:07:24.695 Test: test_find_next_io_path ...passed 00:07:24.695 Test: test_find_io_path_min_qd ...passed 00:07:24.695 Test: test_disable_auto_failback ...[2024-07-12 07:17:58.385958] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:24.695 passed 00:07:24.695 Test: test_set_multipath_policy ...passed 00:07:24.695 Test: test_uuid_generation ...passed 00:07:24.695 Test: test_retry_io_to_same_path ...passed 00:07:24.695 Test: test_race_between_reset_and_disconnected ...passed 00:07:24.695 Test: test_ctrlr_op_rpc ...passed 00:07:24.695 Test: test_bdev_ctrlr_op_rpc ...passed 00:07:24.695 Test: test_disable_enable_ctrlr ...[2024-07-12 07:17:58.389828] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:24.695 [2024-07-12 07:17:58.390009] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
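The bdev_nvme_check_io_error_resiliency_params errors listed above under test_check_io_error_resiliency_params spell out every legal combination of the three reconnect knobs. Restated as a standalone predicate (a sketch: the parameter names mirror the log messages, but the function itself is illustrative rather than SPDK code):

#include <stdbool.h>
#include <stdint.h>

static bool
io_error_resiliency_params_valid(int32_t ctrlr_loss_timeout_sec,
                                 uint32_t reconnect_delay_sec,
                                 uint32_t fast_io_fail_timeout_sec)
{
        if (ctrlr_loss_timeout_sec < -1) {
                return false;   /* "can't be less than -1" */
        }
        if (ctrlr_loss_timeout_sec == 0) {
                /* "Both reconnect_delay_sec and fast_io_fail_timeout_sec
                 *  must be 0 if ctrlr_loss_timeout_sec is 0." */
                return reconnect_delay_sec == 0 &&
                       fast_io_fail_timeout_sec == 0;
        }
        if (reconnect_delay_sec == 0) {
                return false;   /* "can't be 0 if ctrlr_loss_timeout_sec
                                 *  is not 0" */
        }
        if (ctrlr_loss_timeout_sec > 0 &&
            reconnect_delay_sec > (uint32_t)ctrlr_loss_timeout_sec) {
                return false;   /* delay can't exceed the loss timeout */
        }
        if (fast_io_fail_timeout_sec != 0) {
                if (ctrlr_loss_timeout_sec > 0 &&
                    fast_io_fail_timeout_sec > (uint32_t)ctrlr_loss_timeout_sec) {
                        return false;   /* fast-fail can't exceed the
                                         *  loss timeout */
                }
                if (reconnect_delay_sec > fast_io_fail_timeout_sec) {
                        return false;   /* delay can't exceed fast-fail */
                }
        }
        return true;
}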
00:07:24.695 passed 00:07:24.695 Test: test_delete_ctrlr_done ...passed 00:07:24.695 Test: test_ns_remove_during_reset ...passed 00:07:24.695 00:07:24.695 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.695 suites 1 1 n/a 0 0 00:07:24.695 tests 48 48 48 0 0 00:07:24.695 asserts 3565 3565 3565 0 n/a 00:07:24.695 00:07:24.695 Elapsed time = 0.036 seconds 00:07:24.695 07:17:58 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:07:24.695 00:07:24.695 00:07:24.695 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.695 http://cunit.sourceforge.net/ 00:07:24.695 00:07:24.695 Test Options 00:07:24.695 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:07:24.695 00:07:24.695 Suite: raid 00:07:24.695 Test: test_create_raid ...passed 00:07:24.695 Test: test_create_raid_superblock ...passed 00:07:24.695 Test: test_delete_raid ...passed 00:07:24.695 Test: test_create_raid_invalid_args ...[2024-07-12 07:17:58.447380] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:07:24.695 [2024-07-12 07:17:58.447979] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:07:24.695 [2024-07-12 07:17:58.448602] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:07:24.695 [2024-07-12 07:17:58.448922] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:07:24.695 [2024-07-12 07:17:58.449082] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:07:24.695 [2024-07-12 07:17:58.449908] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:07:24.695 [2024-07-12 07:17:58.450044] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:07:24.695 passed 00:07:24.695 Test: test_delete_raid_invalid_args ...passed 00:07:24.695 Test: test_io_channel ...passed 00:07:24.695 Test: test_reset_io ...passed 00:07:24.695 Test: test_multi_raid ...passed 00:07:24.695 Test: test_io_type_supported ...passed 00:07:24.695 Test: test_raid_json_dump_info ...passed 00:07:24.695 Test: test_context_size ...passed 00:07:24.695 Test: test_raid_level_conversions ...passed 00:07:24.695 Test: test_raid_io_split ...passed 00:07:24.695 Test: test_raid_process ...passed 00:07:24.695 00:07:24.695 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.695 suites 1 1 n/a 0 0 00:07:24.695 tests 14 14 14 0 0 00:07:24.695 asserts 6183 6183 6183 0 n/a 00:07:24.695 00:07:24.695 Elapsed time = 0.020 seconds 00:07:24.695 07:17:58 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:07:24.695 00:07:24.695 00:07:24.695 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.695 http://cunit.sourceforge.net/ 00:07:24.695 00:07:24.695 00:07:24.695 Suite: raid_sb 00:07:24.695 Test: test_raid_bdev_write_superblock ...passed 00:07:24.695 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:07:24.695 Test: 
test_raid_bdev_parse_superblock ...[2024-07-12 07:17:58.512914] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:07:24.695 passed 00:07:24.695 Suite: raid_sb_md 00:07:24.696 Test: test_raid_bdev_write_superblock ...passed 00:07:24.696 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:07:24.696 Test: test_raid_bdev_parse_superblock ...[2024-07-12 07:17:58.514760] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:07:24.696 passed 00:07:24.696 Suite: raid_sb_md_interleaved 00:07:24.696 Test: test_raid_bdev_write_superblock ...passed 00:07:24.696 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:07:24.696 Test: test_raid_bdev_parse_superblock ...[2024-07-12 07:17:58.516256] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:07:24.696 passed 00:07:24.696 00:07:24.696 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.696 suites 3 3 n/a 0 0 00:07:24.696 tests 9 9 9 0 0 00:07:24.696 asserts 139 139 139 0 n/a 00:07:24.696 00:07:24.696 Elapsed time = 0.004 seconds 00:07:24.696 07:17:58 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:07:24.696 00:07:24.696 00:07:24.696 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.696 http://cunit.sourceforge.net/ 00:07:24.696 00:07:24.696 00:07:24.696 Suite: concat 00:07:24.696 Test: test_concat_start ...passed 00:07:24.696 Test: test_concat_rw ...passed 00:07:24.696 Test: test_concat_null_payload ...passed 00:07:24.696 00:07:24.696 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.696 suites 1 1 n/a 0 0 00:07:24.696 tests 3 3 3 0 0 00:07:24.696 asserts 8460 8460 8460 0 n/a 00:07:24.696 00:07:24.696 Elapsed time = 0.006 seconds 00:07:24.955 07:17:58 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:07:24.955 00:07:24.955 00:07:24.955 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.955 http://cunit.sourceforge.net/ 00:07:24.955 00:07:24.955 00:07:24.955 Suite: raid0 00:07:24.955 Test: test_write_io ...passed 00:07:24.955 Test: test_read_io ...passed 00:07:24.955 Test: test_unmap_io ...passed 00:07:24.955 Test: test_io_failure ...passed 00:07:24.955 Suite: raid0_dif 00:07:24.955 Test: test_write_io ...passed 00:07:24.955 Test: test_read_io ...passed 00:07:24.955 Test: test_unmap_io ...passed 00:07:24.955 Test: test_io_failure ...passed 00:07:24.955 00:07:24.955 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.955 suites 2 2 n/a 0 0 00:07:24.955 tests 8 8 8 0 0 00:07:24.955 asserts 368291 368291 368291 0 n/a 00:07:24.955 00:07:24.955 Elapsed time = 0.143 seconds 00:07:24.955 07:17:58 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:07:24.955 00:07:24.955 00:07:24.955 CUnit - A unit testing framework for C - Version 2.1-3 00:07:24.955 http://cunit.sourceforge.net/ 00:07:24.955 00:07:24.955 00:07:24.955 Suite: raid1 00:07:24.955 Test: test_raid1_start ...passed 00:07:24.955 Test: test_raid1_read_balancing ...passed 00:07:24.955 Test: test_raid1_write_error ...passed 00:07:24.955 Test: test_raid1_read_error ...passed 
00:07:24.955 00:07:24.955 Run Summary: Type Total Ran Passed Failed Inactive 00:07:24.955 suites 1 1 n/a 0 0 00:07:24.955 tests 4 4 4 0 0 00:07:24.955 asserts 4374 4374 4374 0 n/a 00:07:24.955 00:07:24.955 Elapsed time = 0.004 seconds 00:07:25.215 07:17:58 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:07:25.215 00:07:25.215 00:07:25.215 CUnit - A unit testing framework for C - Version 2.1-3 00:07:25.215 http://cunit.sourceforge.net/ 00:07:25.215 00:07:25.215 00:07:25.215 Suite: zone 00:07:25.215 Test: test_zone_get_operation ...passed 00:07:25.215 Test: test_bdev_zone_get_info ...passed 00:07:25.215 Test: test_bdev_zone_management ...passed 00:07:25.215 Test: test_bdev_zone_append ...passed 00:07:25.215 Test: test_bdev_zone_append_with_md ...passed 00:07:25.215 Test: test_bdev_zone_appendv ...passed 00:07:25.215 Test: test_bdev_zone_appendv_with_md ...passed 00:07:25.215 Test: test_bdev_io_get_append_location ...passed 00:07:25.215 00:07:25.215 Run Summary: Type Total Ran Passed Failed Inactive 00:07:25.215 suites 1 1 n/a 0 0 00:07:25.215 tests 8 8 8 0 0 00:07:25.215 asserts 94 94 94 0 n/a 00:07:25.215 00:07:25.215 Elapsed time = 0.000 seconds 00:07:25.215 07:17:58 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:07:25.215 00:07:25.215 00:07:25.215 CUnit - A unit testing framework for C - Version 2.1-3 00:07:25.215 http://cunit.sourceforge.net/ 00:07:25.215 00:07:25.215 00:07:25.215 Suite: gpt_parse 00:07:25.215 Test: test_parse_mbr_and_primary ...[2024-07-12 07:17:58.904494] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:25.215 [2024-07-12 07:17:58.905137] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:25.215 [2024-07-12 07:17:58.905350] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:07:25.215 [2024-07-12 07:17:58.905604] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:07:25.215 [2024-07-12 07:17:58.905779] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:07:25.215 [2024-07-12 07:17:58.906006] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:07:25.215 passed 00:07:25.215 Test: test_parse_secondary ...[2024-07-12 07:17:58.907028] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:07:25.215 [2024-07-12 07:17:58.907261] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:07:25.215 [2024-07-12 07:17:58.907495] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:07:25.215 [2024-07-12 07:17:58.907741] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:07:25.215 passed 00:07:25.215 Test: test_check_mbr ...[2024-07-12 07:17:58.909066] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:25.215 [2024-07-12 07:17:58.909464] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:07:25.215 passed 00:07:25.215 Test: test_read_header ...[2024-07-12 07:17:58.909841] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:07:25.215 [2024-07-12 07:17:58.910107] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:07:25.215 [2024-07-12 07:17:58.910331] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:07:25.215 [2024-07-12 07:17:58.910517] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:07:25.215 [2024-07-12 07:17:58.910678] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:07:25.215 [2024-07-12 07:17:58.910852] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:07:25.215 passed 00:07:25.215 Test: test_read_partitions ...[2024-07-12 07:17:58.911158] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:07:25.215 [2024-07-12 07:17:58.911337] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:07:25.215 [2024-07-12 07:17:58.911528] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:07:25.215 [2024-07-12 07:17:58.911623] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:07:25.215 [2024-07-12 07:17:58.912125] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:07:25.215 passed 00:07:25.215 00:07:25.215 Run Summary: Type Total Ran Passed Failed Inactive 00:07:25.215 suites 1 1 n/a 0 0 00:07:25.215 tests 5 5 5 0 0 00:07:25.215 asserts 33 33 33 0 n/a 00:07:25.215 00:07:25.215 Elapsed time = 0.006 seconds 00:07:25.215 07:17:58 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:07:25.215 00:07:25.215 00:07:25.215 CUnit - A unit testing framework for C - Version 2.1-3 00:07:25.215 http://cunit.sourceforge.net/ 00:07:25.215 00:07:25.215 00:07:25.215 Suite: bdev_part 00:07:25.215 Test: part_test ...[2024-07-12 07:17:58.960770] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4580:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:07:25.215 passed 00:07:25.215 Test: part_free_test ...passed 00:07:25.215 Test: part_get_io_channel_test ...passed 00:07:25.215 Test: part_construct_ext ...passed 00:07:25.215 00:07:25.215 Run Summary: Type Total Ran Passed Failed Inactive 00:07:25.215 suites 1 1 n/a 0 0 00:07:25.215 tests 4 4 4 0 0 00:07:25.215 asserts 48 48 48 0 n/a 00:07:25.215 00:07:25.215 Elapsed time = 0.054 seconds 00:07:25.215 07:17:59 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:07:25.215 00:07:25.215 00:07:25.215 CUnit - A unit testing framework for C - Version 2.1-3 00:07:25.215 http://cunit.sourceforge.net/ 00:07:25.215 00:07:25.215 00:07:25.215 Suite: scsi_nvme_suite 00:07:25.215 Test: 
scsi_nvme_translate_test ...passed 00:07:25.215 00:07:25.215 Run Summary: Type Total Ran Passed Failed Inactive 00:07:25.215 suites 1 1 n/a 0 0 00:07:25.215 tests 1 1 1 0 0 00:07:25.215 asserts 104 104 104 0 n/a 00:07:25.215 00:07:25.215 Elapsed time = 0.000 seconds 00:07:25.215 07:17:59 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:07:25.475 00:07:25.475 00:07:25.475 CUnit - A unit testing framework for C - Version 2.1-3 00:07:25.475 http://cunit.sourceforge.net/ 00:07:25.475 00:07:25.475 00:07:25.475 Suite: lvol 00:07:25.475 Test: ut_lvs_init ...[2024-07-12 07:17:59.101516] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:07:25.475 [2024-07-12 07:17:59.102097] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:07:25.475 passed 00:07:25.475 Test: ut_lvol_init ...passed 00:07:25.475 Test: ut_lvol_snapshot ...passed 00:07:25.475 Test: ut_lvol_clone ...passed 00:07:25.475 Test: ut_lvs_destroy ...passed 00:07:25.475 Test: ut_lvs_unload ...passed 00:07:25.475 Test: ut_lvol_resize ...[2024-07-12 07:17:59.104559] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:07:25.475 passed 00:07:25.475 Test: ut_lvol_set_read_only ...passed 00:07:25.475 Test: ut_lvol_hotremove ...passed 00:07:25.475 Test: ut_vbdev_lvol_get_io_channel ...passed 00:07:25.475 Test: ut_vbdev_lvol_io_type_supported ...passed 00:07:25.475 Test: ut_lvol_read_write ...passed 00:07:25.475 Test: ut_vbdev_lvol_submit_request ...passed 00:07:25.475 Test: ut_lvol_examine_config ...passed 00:07:25.475 Test: ut_lvol_examine_disk ...[2024-07-12 07:17:59.105914] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:07:25.475 passed 00:07:25.475 Test: ut_lvol_rename ...[2024-07-12 07:17:59.106995] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:07:25.475 [2024-07-12 07:17:59.107175] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:07:25.475 passed 00:07:25.475 Test: ut_bdev_finish ...passed 00:07:25.475 Test: ut_lvs_rename ...passed 00:07:25.475 Test: ut_lvol_seek ...passed 00:07:25.475 Test: ut_esnap_dev_create ...[2024-07-12 07:17:59.108416] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:07:25.475 [2024-07-12 07:17:59.108609] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:07:25.475 [2024-07-12 07:17:59.108677] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:07:25.475 [2024-07-12 07:17:59.108913] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1911:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:07:25.475 passed 00:07:25.475 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-12 07:17:59.109176] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:07:25.475 
[2024-07-12 07:17:59.109348] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:07:25.475 passed 00:07:25.475 Test: ut_lvol_shallow_copy ...[2024-07-12 07:17:59.109723] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:07:25.475 [2024-07-12 07:17:59.109891] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:07:25.475 passed 00:07:25.476 Test: ut_lvol_set_external_parent ...[2024-07-12 07:17:59.110166] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:07:25.476 passed 00:07:25.476 00:07:25.476 Run Summary: Type Total Ran Passed Failed Inactive 00:07:25.476 suites 1 1 n/a 0 0 00:07:25.476 tests 23 23 23 0 0 00:07:25.476 asserts 798 798 798 0 n/a 00:07:25.476 00:07:25.476 Elapsed time = 0.006 seconds 00:07:25.476 07:17:59 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:07:25.476 00:07:25.476 00:07:25.476 CUnit - A unit testing framework for C - Version 2.1-3 00:07:25.476 http://cunit.sourceforge.net/ 00:07:25.476 00:07:25.476 00:07:25.476 Suite: zone_block 00:07:25.476 Test: test_zone_block_create ...passed 00:07:25.476 Test: test_zone_block_create_invalid ...[2024-07-12 07:17:59.187379] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:07:25.476 [2024-07-12 07:17:59.187987] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-12 07:17:59.188373] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:07:25.476 [2024-07-12 07:17:59.188570] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-12 07:17:59.188926] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:07:25.476 [2024-07-12 07:17:59.189079] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-12 07:17:59.189345] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:07:25.476 [2024-07-12 07:17:59.189536] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:07:25.476 Test: test_get_zone_info ...[2024-07-12 07:17:59.190564] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 [2024-07-12 07:17:59.190765] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:07:25.476 [2024-07-12 07:17:59.190958] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 passed 00:07:25.476 Test: test_supported_io_types ...passed 00:07:25.476 Test: test_reset_zone ...[2024-07-12 07:17:59.192612] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 [2024-07-12 07:17:59.192800] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 passed 00:07:25.476 Test: test_open_zone ...[2024-07-12 07:17:59.193721] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 [2024-07-12 07:17:59.194648] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 [2024-07-12 07:17:59.194864] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 passed 00:07:25.476 Test: test_zone_write ...[2024-07-12 07:17:59.195840] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:07:25.476 [2024-07-12 07:17:59.196026] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 [2024-07-12 07:17:59.196256] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:07:25.476 [2024-07-12 07:17:59.196433] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 [2024-07-12 07:17:59.204793] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:07:25.476 [2024-07-12 07:17:59.205005] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 [2024-07-12 07:17:59.205252] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:07:25.476 [2024-07-12 07:17:59.205408] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 [2024-07-12 07:17:59.213657] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:07:25.476 [2024-07-12 07:17:59.213891] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:07:25.476 passed 00:07:25.476 Test: test_zone_read ...[2024-07-12 07:17:59.214818] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:07:25.476 [2024-07-12 07:17:59.214977] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 [2024-07-12 07:17:59.215157] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:07:25.476 [2024-07-12 07:17:59.215343] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 [2024-07-12 07:17:59.216099] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:07:25.476 [2024-07-12 07:17:59.216267] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 passed 00:07:25.476 Test: test_close_zone ...[2024-07-12 07:17:59.216893] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 [2024-07-12 07:17:59.216994] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 [2024-07-12 07:17:59.217347] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 [2024-07-12 07:17:59.217557] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 passed 00:07:25.476 Test: test_finish_zone ...[2024-07-12 07:17:59.218680] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 [2024-07-12 07:17:59.218867] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 passed 00:07:25.476 Test: test_append_zone ...[2024-07-12 07:17:59.219682] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:07:25.476 [2024-07-12 07:17:59.219854] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 [2024-07-12 07:17:59.220057] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:07:25.476 [2024-07-12 07:17:59.220196] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:07:25.476 [2024-07-12 07:17:59.235984] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:07:25.476 [2024-07-12 07:17:59.236233] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
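The zone_block write failures above encode the two invariants the vbdev enforces: a write must start exactly at the zone's current write pointer ("invalid address (lba 0x407, wp 0x405)"), and it must not run past the zone capacity ("Write exceeds zone capacity"). A standalone restatement of those checks (struct and field names invented for illustration; these are not the vbdev's own types):

#include <stdbool.h>
#include <stdint.h>

struct zone_info {
        uint64_t start_lba;      /* first LBA of the zone */
        uint64_t capacity;       /* writable blocks in the zone */
        uint64_t write_pointer;  /* next LBA that may be written */
};

static bool
zone_write_ok(const struct zone_info *z, uint64_t lba, uint64_t num_blocks)
{
        if (lba != z->write_pointer) {
                /* e.g. lba 0x407 against wp 0x405 in the log above */
                return false;
        }
        if (lba + num_blocks > z->start_lba + z->capacity) {
                /* the "Write exceeds zone capacity" case above */
                return false;
        }
        return true;
}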
00:07:25.476 passed 00:07:25.476 00:07:25.476 Run Summary: Type Total Ran Passed Failed Inactive 00:07:25.476 suites 1 1 n/a 0 0 00:07:25.476 tests 11 11 11 0 0 00:07:25.476 asserts 3437 3437 3437 0 n/a 00:07:25.476 00:07:25.476 Elapsed time = 0.046 seconds 00:07:25.476 07:17:59 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:07:25.735 00:07:25.735 00:07:25.735 CUnit - A unit testing framework for C - Version 2.1-3 00:07:25.735 http://cunit.sourceforge.net/ 00:07:25.735 00:07:25.735 00:07:25.735 Suite: bdev 00:07:25.735 Test: basic ...[2024-07-12 07:17:59.430722] thread.c:2369:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x5647f005b8a1): Operation not permitted (rc=-1) 00:07:25.735 [2024-07-12 07:17:59.431710] thread.c:2369:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x5647f005b860): Operation not permitted (rc=-1) 00:07:25.735 [2024-07-12 07:17:59.431991] thread.c:2369:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x5647f005b8a1): Operation not permitted (rc=-1) 00:07:25.735 passed 00:07:25.735 Test: unregister_and_close ...passed 00:07:25.994 Test: unregister_and_close_different_threads ...passed 00:07:25.994 Test: basic_qos ...passed 00:07:25.994 Test: put_channel_during_reset ...passed 00:07:25.994 Test: aborted_reset ...passed 00:07:26.253 Test: aborted_reset_no_outstanding_io ...passed 00:07:26.253 Test: io_during_reset ...passed 00:07:26.253 Test: reset_completions ...passed 00:07:26.253 Test: io_during_qos_queue ...passed 00:07:26.511 Test: io_during_qos_reset ...passed 00:07:26.511 Test: enomem ...passed 00:07:26.511 Test: enomem_multi_bdev ...passed 00:07:26.511 Test: enomem_multi_bdev_unregister ...passed 00:07:26.769 Test: enomem_multi_io_target ...passed 00:07:26.769 Test: qos_dynamic_enable ...passed 00:07:26.769 Test: bdev_histograms_mt ...passed 00:07:26.769 Test: bdev_set_io_timeout_mt ...[2024-07-12 07:18:00.638756] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:07:26.769 passed 00:07:27.027 Test: lock_lba_range_then_submit_io ...[2024-07-12 07:18:00.667515] thread.c:2173:spdk_io_device_register: *ERROR*: io_device 0x5647f005b820 already registered (old:0x6130000003c0 new:0x613000000c80) 00:07:27.027 passed 00:07:27.027 Test: unregister_during_reset ...passed 00:07:27.027 Test: event_notify_and_close ...passed 00:07:27.027 Test: unregister_and_qos_poller ...passed 00:07:27.027 Suite: bdev_wrong_thread 00:07:27.027 Test: spdk_bdev_register_wt ...[2024-07-12 07:18:00.891545] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8459:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x618000001480 (0x618000001480) 00:07:27.027 passed 00:07:27.027 Test: spdk_bdev_examine_wt ...[2024-07-12 07:18:00.891981] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 810:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480) 00:07:27.027 passed 00:07:27.027 00:07:27.027 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.027 suites 2 2 n/a 0 0 00:07:27.027 tests 24 24 24 0 0 00:07:27.027 asserts 621 621 621 0 n/a 00:07:27.027 00:07:27.027 Elapsed time = 1.505 seconds 00:07:27.286 00:07:27.286 real 0m4.953s 00:07:27.286 user 0m2.077s 00:07:27.286 sys 0m2.846s 00:07:27.286 07:18:00 unittest.unittest_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.286 07:18:00 
unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:27.286 ************************************ 00:07:27.286 END TEST unittest_bdev 00:07:27.286 ************************************ 00:07:27.286 07:18:00 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:27.286 07:18:00 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:27.286 07:18:00 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:27.286 07:18:00 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:27.286 07:18:00 unittest -- unit/unittest.sh@230 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:07:27.286 07:18:00 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:27.286 07:18:00 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:27.286 07:18:00 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:27.286 ************************************ 00:07:27.286 START TEST unittest_bdev_raid5f 00:07:27.286 ************************************ 00:07:27.286 07:18:01 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:07:27.286 00:07:27.286 00:07:27.286 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.286 http://cunit.sourceforge.net/ 00:07:27.286 00:07:27.286 00:07:27.286 Suite: raid5f 00:07:27.286 Test: test_raid5f_start ...passed 00:07:28.222 Test: test_raid5f_submit_read_request ...passed 00:07:28.222 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:07:33.492 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:08:00.040 Test: test_raid5f_chunk_write_error ...passed 00:08:08.156 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:08:12.343 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:08:51.054 Test: test_raid5f_submit_read_request_degraded ...passed 00:08:51.054 00:08:51.054 Run Summary: Type Total Ran Passed Failed Inactive 00:08:51.054 suites 1 1 n/a 0 0 00:08:51.054 tests 8 8 8 0 0 00:08:51.054 asserts 518158 518158 518158 0 n/a 00:08:51.054 00:08:51.054 Elapsed time = 80.160 seconds 00:08:51.054 ************************************ 00:08:51.054 END TEST unittest_bdev_raid5f 00:08:51.054 ************************************ 00:08:51.054 00:08:51.054 real 1m20.284s 00:08:51.054 user 1m15.054s 00:08:51.054 sys 0m5.216s 00:08:51.054 07:19:21 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:51.054 07:19:21 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:08:51.054 07:19:21 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob 00:08:51.054 07:19:21 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:51.054 07:19:21 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:51.054 07:19:21 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:51.055 ************************************ 00:08:51.055 START TEST unittest_blob_blobfs 00:08:51.055 ************************************ 00:08:51.055 07:19:21 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1121 -- # unittest_blob 00:08:51.055 
07:19:21 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:08:51.055 07:19:21 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:08:51.055 00:08:51.055 00:08:51.055 CUnit - A unit testing framework for C - Version 2.1-3 00:08:51.055 http://cunit.sourceforge.net/ 00:08:51.055 00:08:51.055 00:08:51.055 Suite: blob_nocopy_noextent 00:08:51.055 Test: blob_init ...[2024-07-12 07:19:21.401173] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:51.055 passed 00:08:51.055 Test: blob_thin_provision ...passed 00:08:51.055 Test: blob_read_only ...passed 00:08:51.055 Test: bs_load ...[2024-07-12 07:19:21.551856] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:51.055 passed 00:08:51.055 Test: bs_load_custom_cluster_size ...passed 00:08:51.055 Test: bs_load_after_failed_grow ...passed 00:08:51.055 Test: bs_cluster_sz ...[2024-07-12 07:19:21.600978] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:51.055 [2024-07-12 07:19:21.601712] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:08:51.055 [2024-07-12 07:19:21.602085] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:51.055 passed 00:08:51.055 Test: bs_resize_md ...passed 00:08:51.055 Test: bs_destroy ...passed 00:08:51.055 Test: bs_type ...passed 00:08:51.055 Test: bs_super_block ...passed 00:08:51.055 Test: bs_test_recover_cluster_count ...passed 00:08:51.055 Test: bs_grow_live ...passed 00:08:51.055 Test: bs_grow_live_no_space ...passed 00:08:51.055 Test: bs_test_grow ...passed 00:08:51.055 Test: blob_serialize_test ...passed 00:08:51.055 Test: super_block_crc ...passed 00:08:51.055 Test: blob_thin_prov_write_count_io ...passed 00:08:51.055 Test: blob_thin_prov_unmap_cluster ...passed 00:08:51.055 Test: bs_load_iter_test ...passed 00:08:51.055 Test: blob_relations ...[2024-07-12 07:19:21.906017] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:51.055 [2024-07-12 07:19:21.906570] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:51.055 [2024-07-12 07:19:21.907730] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:51.055 [2024-07-12 07:19:21.907982] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:51.055 passed 00:08:51.055 Test: blob_relations2 ...[2024-07-12 07:19:21.929742] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:51.055 [2024-07-12 07:19:21.930109] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:51.055 [2024-07-12 07:19:21.930264] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with 
more than one clone 00:08:51.055 [2024-07-12 07:19:21.930544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:51.055 [2024-07-12 07:19:21.932099] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:51.055 [2024-07-12 07:19:21.932350] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:51.055 [2024-07-12 07:19:21.932934] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:51.055 [2024-07-12 07:19:21.933158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:51.055 passed 00:08:51.055 Test: blob_relations3 ...passed 00:08:51.055 Test: blobstore_clean_power_failure ...passed 00:08:51.055 Test: blob_delete_snapshot_power_failure ...[2024-07-12 07:19:22.198680] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:51.055 [2024-07-12 07:19:22.219354] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:51.055 [2024-07-12 07:19:22.219902] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:51.055 [2024-07-12 07:19:22.220168] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:51.055 [2024-07-12 07:19:22.240318] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:51.055 [2024-07-12 07:19:22.240706] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:51.055 [2024-07-12 07:19:22.240977] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:51.055 [2024-07-12 07:19:22.241239] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:51.055 [2024-07-12 07:19:22.261324] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:51.055 [2024-07-12 07:19:22.261763] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:51.055 [2024-07-12 07:19:22.282020] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:51.055 [2024-07-12 07:19:22.282477] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:51.055 [2024-07-12 07:19:22.303125] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:51.055 [2024-07-12 07:19:22.303728] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:51.055 passed 00:08:51.055 Test: blob_create_snapshot_power_failure ...[2024-07-12 07:19:22.364512] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:51.055 [2024-07-12 07:19:22.404435] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:51.055 [2024-07-12 07:19:22.425123] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:51.055 passed 00:08:51.055 Test: blob_io_unit ...passed 00:08:51.055 Test: blob_io_unit_compatibility ...passed 00:08:51.055 Test: blob_ext_md_pages ...passed 00:08:51.055 Test: blob_esnap_io_4096_4096 ...passed 00:08:51.055 Test: blob_esnap_io_512_512 ...passed 00:08:51.055 Test: blob_esnap_io_4096_512 ...passed 00:08:51.055 Test: blob_esnap_io_512_4096 ...passed 00:08:51.055 Test: blob_esnap_clone_resize ...passed 00:08:51.055 Suite: blob_bs_nocopy_noextent 00:08:51.055 Test: blob_open ...passed 00:08:51.055 Test: blob_create ...[2024-07-12 07:19:22.871277] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:51.055 passed 00:08:51.055 Test: blob_create_loop ...passed 00:08:51.055 Test: blob_create_fail ...[2024-07-12 07:19:23.009922] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:51.055 passed 00:08:51.055 Test: blob_create_internal ...passed 00:08:51.055 Test: blob_create_zero_extent ...passed 00:08:51.055 Test: blob_snapshot ...passed 00:08:51.055 Test: blob_clone ...passed 00:08:51.055 Test: blob_inflate ...[2024-07-12 07:19:23.323277] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:51.055 passed 00:08:51.055 Test: blob_delete ...passed 00:08:51.055 Test: blob_resize_test ...[2024-07-12 07:19:23.435245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:51.055 passed 00:08:51.055 Test: blob_resize_thin_test ...passed 00:08:51.055 Test: channel_ops ...passed 00:08:51.055 Test: blob_super ...passed 00:08:51.055 Test: blob_rw_verify_iov ...passed 00:08:51.055 Test: blob_unmap ...passed 00:08:51.055 Test: blob_iter ...passed 00:08:51.055 Test: blob_parse_md ...passed 00:08:51.055 Test: bs_load_pending_removal ...passed 00:08:51.055 Test: bs_unload ...[2024-07-12 07:19:23.948324] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:51.055 passed 00:08:51.055 Test: bs_usable_clusters ...passed 00:08:51.055 Test: blob_crc ...[2024-07-12 07:19:24.062481] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:51.055 [2024-07-12 07:19:24.063129] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:51.055 passed 00:08:51.055 Test: blob_flags ...passed 00:08:51.055 Test: bs_version ...passed 00:08:51.055 Test: blob_set_xattrs_test ...[2024-07-12 07:19:24.234405] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:51.055 [2024-07-12 07:19:24.235035] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:51.055 passed 00:08:51.055 Test: blob_thin_prov_alloc ...passed 
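The bs_cluster_sz and blob_init errors recorded above are expected negative-path output: the tests hand spdk_bs_init() deliberately invalid options (a cluster_sz of 4095 against the 4096-byte metadata page size, a 500-byte dev block length) and assert that initialization fails cleanly instead of touching the device. A minimal sketch of one such case against the public blobstore API — assuming the two-argument spdk_bs_opts_init() of recent SPDK releases and a caller-supplied spdk_bs_dev such as the tests' RAM-backed device; the -EINVAL expectation mirrors the log output rather than a documented guarantee:

#include "spdk/blob.h"

/* Completion callback: when the options are rejected, no blobstore is
 * created, so bs is NULL and bserrno carries the failure (expected
 * -EINVAL for "Cluster size 4095 is smaller than page size 4096"). */
static void
bs_init_cpl(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
	(void)bs;
	*(int *)cb_arg = bserrno;
}

static void
try_bad_cluster_size(struct spdk_bs_dev *dev)
{
	struct spdk_bs_opts opts;
	int bserrno = 0;

	spdk_bs_opts_init(&opts, sizeof(opts));
	opts.cluster_sz = 4095;	/* one byte below the 4096-byte md page */
	spdk_bs_init(dev, &opts, bs_init_cpl, &bserrno);
	/* spdk_bs_init() completes asynchronously; the unit tests poll
	 * the SPDK thread until the callback has recorded bserrno. */
}

The same poll-until-callback pattern underlies the bs_load, bs_unload, and power-failure cases later in this run, which replay the metadata I/O with injected read errors (-5/-EIO in the log) rather than invalid options.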
00:08:51.055 Test: blob_insert_cluster_msg_test ...passed 00:08:51.055 Test: blob_thin_prov_rw ...passed 00:08:51.055 Test: blob_thin_prov_rle ...passed 00:08:51.055 Test: blob_thin_prov_rw_iov ...passed 00:08:51.055 Test: blob_snapshot_rw ...passed 00:08:51.055 Test: blob_snapshot_rw_iov ...passed 00:08:51.315 Test: blob_inflate_rw ...passed 00:08:51.315 Test: blob_snapshot_freeze_io ...passed 00:08:51.575 Test: blob_operation_split_rw ...passed 00:08:51.575 Test: blob_operation_split_rw_iov ...passed 00:08:51.575 Test: blob_simultaneous_operations ...[2024-07-12 07:19:25.428406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:51.575 [2024-07-12 07:19:25.428726] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:51.575 [2024-07-12 07:19:25.430212] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:51.575 [2024-07-12 07:19:25.430393] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:51.575 [2024-07-12 07:19:25.445437] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:51.575 [2024-07-12 07:19:25.445690] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:51.575 [2024-07-12 07:19:25.445864] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:51.575 [2024-07-12 07:19:25.446076] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:51.835 passed 00:08:51.835 Test: blob_persist_test ...passed 00:08:51.835 Test: blob_decouple_snapshot ...passed 00:08:51.835 Test: blob_seek_io_unit ...passed 00:08:52.094 Test: blob_nested_freezes ...passed 00:08:52.094 Test: blob_clone_resize ...passed 00:08:52.094 Test: blob_shallow_copy ...[2024-07-12 07:19:25.883049] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:08:52.094 [2024-07-12 07:19:25.883707] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:08:52.094 [2024-07-12 07:19:25.884101] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:08:52.094 passed 00:08:52.094 Suite: blob_blob_nocopy_noextent 00:08:52.094 Test: blob_write ...passed 00:08:52.352 Test: blob_read ...passed 00:08:52.352 Test: blob_rw_verify ...passed 00:08:52.352 Test: blob_rw_verify_iov_nomem ...passed 00:08:52.352 Test: blob_rw_iov_read_only ...passed 00:08:52.612 Test: blob_xattr ...passed 00:08:52.612 Test: blob_dirty_shutdown ...passed 00:08:52.612 Test: blob_is_degraded ...passed 00:08:52.612 Suite: blob_esnap_bs_nocopy_noextent 00:08:52.612 Test: blob_esnap_create ...passed 00:08:52.871 Test: blob_esnap_thread_add_remove ...passed 00:08:52.871 Test: blob_esnap_clone_snapshot ...passed 00:08:52.871 Test: blob_esnap_clone_inflate ...passed 00:08:52.871 Test: blob_esnap_clone_decouple ...passed 00:08:52.871 Test: blob_esnap_clone_reload 
...passed 00:08:53.131 Test: blob_esnap_hotplug ...passed 00:08:53.131 Test: blob_set_parent ...[2024-07-12 07:19:26.829440] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:08:53.131 [2024-07-12 07:19:26.829844] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:08:53.131 [2024-07-12 07:19:26.830212] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:08:53.131 [2024-07-12 07:19:26.830361] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:08:53.131 [2024-07-12 07:19:26.831006] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:53.131 passed 00:08:53.131 Test: blob_set_external_parent ...[2024-07-12 07:19:26.888036] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:08:53.131 [2024-07-12 07:19:26.888341] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:08:53.131 [2024-07-12 07:19:26.888498] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:08:53.131 [2024-07-12 07:19:26.888974] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:53.131 passed 00:08:53.131 Suite: blob_nocopy_extent 00:08:53.131 Test: blob_init ...[2024-07-12 07:19:26.908429] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:53.131 passed 00:08:53.131 Test: blob_thin_provision ...passed 00:08:53.131 Test: blob_read_only ...passed 00:08:53.131 Test: bs_load ...[2024-07-12 07:19:26.985744] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:53.131 passed 00:08:53.131 Test: bs_load_custom_cluster_size ...passed 00:08:53.391 Test: bs_load_after_failed_grow ...passed 00:08:53.391 Test: bs_cluster_sz ...[2024-07-12 07:19:27.028206] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:53.391 [2024-07-12 07:19:27.028565] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:08:53.391 [2024-07-12 07:19:27.028823] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:53.391 passed 00:08:53.391 Test: bs_resize_md ...passed 00:08:53.391 Test: bs_destroy ...passed 00:08:53.391 Test: bs_type ...passed 00:08:53.391 Test: bs_super_block ...passed 00:08:53.391 Test: bs_test_recover_cluster_count ...passed 00:08:53.391 Test: bs_grow_live ...passed 00:08:53.391 Test: bs_grow_live_no_space ...passed 00:08:53.391 Test: bs_test_grow ...passed 00:08:53.391 Test: blob_serialize_test ...passed 00:08:53.391 Test: super_block_crc ...passed 00:08:53.391 Test: blob_thin_prov_write_count_io ...passed 00:08:53.650 Test: blob_thin_prov_unmap_cluster ...passed 00:08:53.650 Test: bs_load_iter_test ...passed 00:08:53.650 Test: blob_relations ...[2024-07-12 07:19:27.316244] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:53.650 [2024-07-12 07:19:27.316585] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:53.650 [2024-07-12 07:19:27.317552] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:53.650 [2024-07-12 07:19:27.317707] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:53.650 passed 00:08:53.650 Test: blob_relations2 ...[2024-07-12 07:19:27.339228] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:53.650 [2024-07-12 07:19:27.339497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:53.650 [2024-07-12 07:19:27.339565] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:53.650 [2024-07-12 07:19:27.339666] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:53.650 [2024-07-12 07:19:27.341015] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:53.650 [2024-07-12 07:19:27.341186] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:53.650 [2024-07-12 07:19:27.341636] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:53.650 [2024-07-12 07:19:27.341786] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:53.650 passed 00:08:53.650 Test: blob_relations3 ...passed 00:08:53.910 Test: blobstore_clean_power_failure ...passed 00:08:53.910 Test: blob_delete_snapshot_power_failure ...[2024-07-12 07:19:27.611400] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:53.910 [2024-07-12 07:19:27.631766] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:53.910 [2024-07-12 07:19:27.652106] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:53.910 [2024-07-12 07:19:27.652410] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:53.910 [2024-07-12 07:19:27.652489] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:53.910 [2024-07-12 07:19:27.672724] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:53.910 [2024-07-12 07:19:27.673018] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:53.910 [2024-07-12 07:19:27.673084] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:53.910 [2024-07-12 07:19:27.673195] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:53.910 [2024-07-12 07:19:27.693297] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:53.910 [2024-07-12 07:19:27.693586] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:53.910 [2024-07-12 07:19:27.693650] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:53.910 [2024-07-12 07:19:27.693769] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:53.910 [2024-07-12 07:19:27.714144] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:53.910 [2024-07-12 07:19:27.714424] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:53.910 [2024-07-12 07:19:27.734794] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:53.910 [2024-07-12 07:19:27.735065] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:53.910 [2024-07-12 07:19:27.755575] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:53.910 [2024-07-12 07:19:27.755809] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:54.169 passed 00:08:54.169 Test: blob_create_snapshot_power_failure ...[2024-07-12 07:19:27.816180] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:54.169 [2024-07-12 07:19:27.836095] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:54.169 [2024-07-12 07:19:27.876209] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:54.169 [2024-07-12 07:19:27.896531] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:54.169 passed 00:08:54.169 Test: blob_io_unit ...passed 00:08:54.169 Test: blob_io_unit_compatibility ...passed 00:08:54.169 Test: blob_ext_md_pages ...passed 00:08:54.428 Test: blob_esnap_io_4096_4096 ...passed 00:08:54.428 Test: blob_esnap_io_512_512 ...passed 00:08:54.428 Test: blob_esnap_io_4096_512 ...passed 00:08:54.428 Test: 
blob_esnap_io_512_4096 ...passed 00:08:54.428 Test: blob_esnap_clone_resize ...passed 00:08:54.428 Suite: blob_bs_nocopy_extent 00:08:54.428 Test: blob_open ...passed 00:08:54.687 Test: blob_create ...[2024-07-12 07:19:28.327629] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:54.687 passed 00:08:54.687 Test: blob_create_loop ...passed 00:08:54.687 Test: blob_create_fail ...[2024-07-12 07:19:28.470958] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:54.687 passed 00:08:54.687 Test: blob_create_internal ...passed 00:08:54.946 Test: blob_create_zero_extent ...passed 00:08:54.946 Test: blob_snapshot ...passed 00:08:54.946 Test: blob_clone ...passed 00:08:54.946 Test: blob_inflate ...[2024-07-12 07:19:28.780326] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:54.946 passed 00:08:55.205 Test: blob_delete ...passed 00:08:55.205 Test: blob_resize_test ...[2024-07-12 07:19:28.894064] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:55.205 passed 00:08:55.205 Test: blob_resize_thin_test ...passed 00:08:55.205 Test: channel_ops ...passed 00:08:55.205 Test: blob_super ...passed 00:08:55.464 Test: blob_rw_verify_iov ...passed 00:08:55.464 Test: blob_unmap ...passed 00:08:55.464 Test: blob_iter ...passed 00:08:55.464 Test: blob_parse_md ...passed 00:08:55.723 Test: bs_load_pending_removal ...passed 00:08:55.723 Test: bs_unload ...[2024-07-12 07:19:29.412181] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:55.723 passed 00:08:55.723 Test: bs_usable_clusters ...passed 00:08:55.723 Test: blob_crc ...[2024-07-12 07:19:29.525930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:55.723 [2024-07-12 07:19:29.526322] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:55.723 passed 00:08:55.723 Test: blob_flags ...passed 00:08:55.982 Test: bs_version ...passed 00:08:55.982 Test: blob_set_xattrs_test ...[2024-07-12 07:19:29.697015] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:55.982 [2024-07-12 07:19:29.697281] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:55.982 passed 00:08:55.982 Test: blob_thin_prov_alloc ...passed 00:08:56.241 Test: blob_insert_cluster_msg_test ...passed 00:08:56.241 Test: blob_thin_prov_rw ...passed 00:08:56.241 Test: blob_thin_prov_rle ...passed 00:08:56.241 Test: blob_thin_prov_rw_iov ...passed 00:08:56.500 Test: blob_snapshot_rw ...passed 00:08:56.500 Test: blob_snapshot_rw_iov ...passed 00:08:56.758 Test: blob_inflate_rw ...passed 00:08:56.758 Test: blob_snapshot_freeze_io ...passed 00:08:57.017 Test: blob_operation_split_rw ...passed 00:08:57.017 Test: blob_operation_split_rw_iov ...passed 00:08:57.276 Test: blob_simultaneous_operations ...[2024-07-12 07:19:30.906032] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:57.276 [2024-07-12 07:19:30.906335] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:57.276 [2024-07-12 07:19:30.907858] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:57.276 [2024-07-12 07:19:30.908042] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:57.276 [2024-07-12 07:19:30.922495] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:57.276 [2024-07-12 07:19:30.922704] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:57.276 [2024-07-12 07:19:30.922870] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:57.276 [2024-07-12 07:19:30.922979] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:57.276 passed 00:08:57.276 Test: blob_persist_test ...passed 00:08:57.276 Test: blob_decouple_snapshot ...passed 00:08:57.535 Test: blob_seek_io_unit ...passed 00:08:57.535 Test: blob_nested_freezes ...passed 00:08:57.535 Test: blob_clone_resize ...passed 00:08:57.535 Test: blob_shallow_copy ...[2024-07-12 07:19:31.361698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:08:57.535 [2024-07-12 07:19:31.362307] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:08:57.535 [2024-07-12 07:19:31.362668] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:08:57.535 passed 00:08:57.535 Suite: blob_blob_nocopy_extent 00:08:57.794 Test: blob_write ...passed 00:08:57.794 Test: blob_read ...passed 00:08:57.794 Test: blob_rw_verify ...passed 00:08:57.794 Test: blob_rw_verify_iov_nomem ...passed 00:08:57.794 Test: blob_rw_iov_read_only ...passed 00:08:58.053 Test: blob_xattr ...passed 00:08:58.053 Test: blob_dirty_shutdown ...passed 00:08:58.053 Test: blob_is_degraded ...passed 00:08:58.053 Suite: blob_esnap_bs_nocopy_extent 00:08:58.053 Test: blob_esnap_create ...passed 00:08:58.312 Test: blob_esnap_thread_add_remove ...passed 00:08:58.312 Test: blob_esnap_clone_snapshot ...passed 00:08:58.312 Test: blob_esnap_clone_inflate ...passed 00:08:58.312 Test: blob_esnap_clone_decouple ...passed 00:08:58.571 Test: blob_esnap_clone_reload ...passed 00:08:58.571 Test: blob_esnap_hotplug ...passed 00:08:58.571 Test: blob_set_parent ...[2024-07-12 07:19:32.293786] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:08:58.571 [2024-07-12 07:19:32.294189] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:08:58.571 [2024-07-12 07:19:32.294444] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:08:58.571 
[2024-07-12 07:19:32.294575] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:08:58.571 [2024-07-12 07:19:32.295070] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:58.571 passed 00:08:58.571 Test: blob_set_external_parent ...[2024-07-12 07:19:32.352757] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:08:58.571 [2024-07-12 07:19:32.353060] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:08:58.571 [2024-07-12 07:19:32.353229] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:08:58.571 [2024-07-12 07:19:32.353660] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:58.571 passed 00:08:58.571 Suite: blob_copy_noextent 00:08:58.571 Test: blob_init ...[2024-07-12 07:19:32.372999] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:58.571 passed 00:08:58.571 Test: blob_thin_provision ...passed 00:08:58.571 Test: blob_read_only ...passed 00:08:58.571 Test: bs_load ...[2024-07-12 07:19:32.449865] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:58.571 passed 00:08:58.830 Test: bs_load_custom_cluster_size ...passed 00:08:58.830 Test: bs_load_after_failed_grow ...passed 00:08:58.830 Test: bs_cluster_sz ...[2024-07-12 07:19:32.490531] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:58.830 [2024-07-12 07:19:32.490786] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:08:58.830 [2024-07-12 07:19:32.491039] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:58.830 passed 00:08:58.830 Test: bs_resize_md ...passed 00:08:58.830 Test: bs_destroy ...passed 00:08:58.830 Test: bs_type ...passed 00:08:58.830 Test: bs_super_block ...passed 00:08:58.830 Test: bs_test_recover_cluster_count ...passed 00:08:58.830 Test: bs_grow_live ...passed 00:08:58.830 Test: bs_grow_live_no_space ...passed 00:08:58.830 Test: bs_test_grow ...passed 00:08:58.830 Test: blob_serialize_test ...passed 00:08:58.830 Test: super_block_crc ...passed 00:08:58.830 Test: blob_thin_prov_write_count_io ...passed 00:08:59.089 Test: blob_thin_prov_unmap_cluster ...passed 00:08:59.089 Test: bs_load_iter_test ...passed 00:08:59.090 Test: blob_relations ...[2024-07-12 07:19:32.788166] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:59.090 [2024-07-12 07:19:32.788460] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:59.090 [2024-07-12 07:19:32.789051] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:59.090 [2024-07-12 07:19:32.789175] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:59.090 passed 00:08:59.090 Test: blob_relations2 ...[2024-07-12 07:19:32.809828] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:59.090 [2024-07-12 07:19:32.810041] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:59.090 [2024-07-12 07:19:32.810111] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:59.090 [2024-07-12 07:19:32.810183] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:59.090 [2024-07-12 07:19:32.811172] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:59.090 [2024-07-12 07:19:32.811319] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:59.090 [2024-07-12 07:19:32.811632] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:59.090 [2024-07-12 07:19:32.811736] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:59.090 passed 00:08:59.090 Test: blob_relations3 ...passed 00:08:59.359 Test: blobstore_clean_power_failure ...passed 00:08:59.359 Test: blob_delete_snapshot_power_failure ...[2024-07-12 07:19:33.094026] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:59.359 [2024-07-12 07:19:33.113965] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:59.359 [2024-07-12 07:19:33.114275] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:59.359 [2024-07-12 07:19:33.114340] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:59.359 [2024-07-12 07:19:33.133966] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:59.359 [2024-07-12 07:19:33.134263] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:59.359 [2024-07-12 07:19:33.134329] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:59.359 [2024-07-12 07:19:33.134451] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:59.359 [2024-07-12 07:19:33.154107] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:59.359 [2024-07-12 07:19:33.154477] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:59.359 [2024-07-12 07:19:33.174273] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:59.359 [2024-07-12 07:19:33.174673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:59.359 [2024-07-12 07:19:33.194618] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:59.359 [2024-07-12 07:19:33.194977] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:59.359 passed 00:08:59.650 Test: blob_create_snapshot_power_failure ...[2024-07-12 07:19:33.254667] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:59.650 [2024-07-12 07:19:33.294015] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:59.650 [2024-07-12 07:19:33.314169] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:59.650 passed 00:08:59.650 Test: blob_io_unit ...passed 00:08:59.650 Test: blob_io_unit_compatibility ...passed 00:08:59.650 Test: blob_ext_md_pages ...passed 00:08:59.650 Test: blob_esnap_io_4096_4096 ...passed 00:08:59.909 Test: blob_esnap_io_512_512 ...passed 00:08:59.909 Test: blob_esnap_io_4096_512 ...passed 00:08:59.909 Test: blob_esnap_io_512_4096 ...passed 00:08:59.909 Test: blob_esnap_clone_resize ...passed 00:08:59.909 Suite: blob_bs_copy_noextent 00:08:59.909 Test: blob_open ...passed 00:08:59.909 Test: blob_create ...[2024-07-12 07:19:33.743443] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:59.909 passed 00:09:00.167 Test: blob_create_loop ...passed 00:09:00.167 Test: blob_create_fail ...[2024-07-12 07:19:33.879717] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:00.167 passed 00:09:00.167 Test: blob_create_internal ...passed 00:09:00.167 Test: blob_create_zero_extent ...passed 00:09:00.425 Test: blob_snapshot ...passed 00:09:00.425 Test: blob_clone ...passed 00:09:00.426 Test: blob_inflate 
...[2024-07-12 07:19:34.171383] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:09:00.426 passed 00:09:00.426 Test: blob_delete ...passed 00:09:00.426 Test: blob_resize_test ...[2024-07-12 07:19:34.283018] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:09:00.426 passed 00:09:00.751 Test: blob_resize_thin_test ...passed 00:09:00.751 Test: channel_ops ...passed 00:09:00.751 Test: blob_super ...passed 00:09:00.751 Test: blob_rw_verify_iov ...passed 00:09:01.033 Test: blob_unmap ...passed 00:09:01.033 Test: blob_iter ...passed 00:09:01.033 Test: blob_parse_md ...passed 00:09:01.033 Test: bs_load_pending_removal ...passed 00:09:01.033 Test: bs_unload ...[2024-07-12 07:19:34.797995] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:09:01.033 passed 00:09:01.033 Test: bs_usable_clusters ...passed 00:09:01.033 Test: blob_crc ...[2024-07-12 07:19:34.911627] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:01.033 [2024-07-12 07:19:34.911955] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:01.333 passed 00:09:01.333 Test: blob_flags ...passed 00:09:01.333 Test: bs_version ...passed 00:09:01.333 Test: blob_set_xattrs_test ...[2024-07-12 07:19:35.084888] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:01.333 [2024-07-12 07:19:35.085196] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:01.333 passed 00:09:01.614 Test: blob_thin_prov_alloc ...passed 00:09:01.614 Test: blob_insert_cluster_msg_test ...passed 00:09:01.614 Test: blob_thin_prov_rw ...passed 00:09:01.614 Test: blob_thin_prov_rle ...passed 00:09:01.973 Test: blob_thin_prov_rw_iov ...passed 00:09:01.973 Test: blob_snapshot_rw ...passed 00:09:01.973 Test: blob_snapshot_rw_iov ...passed 00:09:02.246 Test: blob_inflate_rw ...passed 00:09:02.246 Test: blob_snapshot_freeze_io ...passed 00:09:02.246 Test: blob_operation_split_rw ...passed 00:09:02.505 Test: blob_operation_split_rw_iov ...passed 00:09:02.505 Test: blob_simultaneous_operations ...[2024-07-12 07:19:36.288466] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:02.505 [2024-07-12 07:19:36.288770] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:02.505 [2024-07-12 07:19:36.289353] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:02.505 [2024-07-12 07:19:36.289515] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:02.505 [2024-07-12 07:19:36.292908] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:02.505 [2024-07-12 07:19:36.293062] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:02.505 [2024-07-12 07:19:36.293199] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:02.505 [2024-07-12 07:19:36.293376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:02.505 passed 00:09:02.505 Test: blob_persist_test ...passed 00:09:02.764 Test: blob_decouple_snapshot ...passed 00:09:02.764 Test: blob_seek_io_unit ...passed 00:09:02.764 Test: blob_nested_freezes ...passed 00:09:02.764 Test: blob_clone_resize ...passed 00:09:03.022 Test: blob_shallow_copy ...[2024-07-12 07:19:36.680300] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:09:03.022 [2024-07-12 07:19:36.680930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:09:03.022 [2024-07-12 07:19:36.681776] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:09:03.022 passed 00:09:03.022 Suite: blob_blob_copy_noextent 00:09:03.022 Test: blob_write ...passed 00:09:03.022 Test: blob_read ...passed 00:09:03.022 Test: blob_rw_verify ...passed 00:09:03.281 Test: blob_rw_verify_iov_nomem ...passed 00:09:03.281 Test: blob_rw_iov_read_only ...passed 00:09:03.281 Test: blob_xattr ...passed 00:09:03.281 Test: blob_dirty_shutdown ...passed 00:09:03.539 Test: blob_is_degraded ...passed 00:09:03.539 Suite: blob_esnap_bs_copy_noextent 00:09:03.539 Test: blob_esnap_create ...passed 00:09:03.539 Test: blob_esnap_thread_add_remove ...passed 00:09:03.539 Test: blob_esnap_clone_snapshot ...passed 00:09:03.539 Test: blob_esnap_clone_inflate ...passed 00:09:03.798 Test: blob_esnap_clone_decouple ...passed 00:09:03.798 Test: blob_esnap_clone_reload ...passed 00:09:03.798 Test: blob_esnap_hotplug ...passed 00:09:03.798 Test: blob_set_parent ...[2024-07-12 07:19:37.607467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:09:03.798 [2024-07-12 07:19:37.607786] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:09:03.798 [2024-07-12 07:19:37.608073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:09:03.798 [2024-07-12 07:19:37.608221] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:09:03.798 [2024-07-12 07:19:37.608696] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:09:03.798 passed 00:09:03.798 Test: blob_set_external_parent ...[2024-07-12 07:19:37.666037] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:09:03.798 [2024-07-12 07:19:37.666380] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:09:03.798 [2024-07-12 07:19:37.666573] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: 
external snapshot is already the parent of blob 00:09:03.798 [2024-07-12 07:19:37.666937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:09:04.056 passed 00:09:04.056 Suite: blob_copy_extent 00:09:04.056 Test: blob_init ...[2024-07-12 07:19:37.686422] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:09:04.056 passed 00:09:04.056 Test: blob_thin_provision ...passed 00:09:04.056 Test: blob_read_only ...passed 00:09:04.056 Test: bs_load ...[2024-07-12 07:19:37.763379] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:09:04.056 passed 00:09:04.056 Test: bs_load_custom_cluster_size ...passed 00:09:04.056 Test: bs_load_after_failed_grow ...passed 00:09:04.056 Test: bs_cluster_sz ...[2024-07-12 07:19:37.804098] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:09:04.056 [2024-07-12 07:19:37.804336] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:09:04.056 [2024-07-12 07:19:37.804464] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:09:04.056 passed 00:09:04.056 Test: bs_resize_md ...passed 00:09:04.056 Test: bs_destroy ...passed 00:09:04.056 Test: bs_type ...passed 00:09:04.056 Test: bs_super_block ...passed 00:09:04.056 Test: bs_test_recover_cluster_count ...passed 00:09:04.056 Test: bs_grow_live ...passed 00:09:04.056 Test: bs_grow_live_no_space ...passed 00:09:04.314 Test: bs_test_grow ...passed 00:09:04.314 Test: blob_serialize_test ...passed 00:09:04.314 Test: super_block_crc ...passed 00:09:04.314 Test: blob_thin_prov_write_count_io ...passed 00:09:04.314 Test: blob_thin_prov_unmap_cluster ...passed 00:09:04.314 Test: bs_load_iter_test ...passed 00:09:04.314 Test: blob_relations ...[2024-07-12 07:19:38.095360] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:04.314 [2024-07-12 07:19:38.095643] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:04.314 [2024-07-12 07:19:38.096291] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:04.314 [2024-07-12 07:19:38.096433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:04.314 passed 00:09:04.314 Test: blob_relations2 ...[2024-07-12 07:19:38.117604] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:04.314 [2024-07-12 07:19:38.117852] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:04.314 [2024-07-12 07:19:38.117930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:04.314 [2024-07-12 07:19:38.118026] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:04.314 [2024-07-12 
07:19:38.119022] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:04.314 [2024-07-12 07:19:38.119166] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:04.314 [2024-07-12 07:19:38.119546] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:09:04.314 [2024-07-12 07:19:38.119681] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:04.314 passed 00:09:04.314 Test: blob_relations3 ...passed 00:09:04.573 Test: blobstore_clean_power_failure ...passed 00:09:04.573 Test: blob_delete_snapshot_power_failure ...[2024-07-12 07:19:38.384451] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:04.573 [2024-07-12 07:19:38.404409] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:04.573 [2024-07-12 07:19:38.424438] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:04.573 [2024-07-12 07:19:38.424803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:04.573 [2024-07-12 07:19:38.424869] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:04.573 [2024-07-12 07:19:38.444823] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:04.573 [2024-07-12 07:19:38.445203] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:04.573 [2024-07-12 07:19:38.445264] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:04.573 [2024-07-12 07:19:38.445376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:04.832 [2024-07-12 07:19:38.465141] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:04.832 [2024-07-12 07:19:38.469354] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:09:04.832 [2024-07-12 07:19:38.469510] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:09:04.832 [2024-07-12 07:19:38.469639] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:04.832 [2024-07-12 07:19:38.489427] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:09:04.832 [2024-07-12 07:19:38.489731] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:04.832 [2024-07-12 07:19:38.509592] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:09:04.832 [2024-07-12 07:19:38.509936] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:04.832 [2024-07-12 07:19:38.529773] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:09:04.832 [2024-07-12 07:19:38.530111] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:04.832 passed 00:09:04.832 Test: blob_create_snapshot_power_failure ...[2024-07-12 07:19:38.589589] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:09:04.832 [2024-07-12 07:19:38.609203] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:09:04.832 [2024-07-12 07:19:38.648044] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:09:04.832 [2024-07-12 07:19:38.668085] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:09:05.090 passed 00:09:05.090 Test: blob_io_unit ...passed 00:09:05.090 Test: blob_io_unit_compatibility ...passed 00:09:05.090 Test: blob_ext_md_pages ...passed 00:09:05.090 Test: blob_esnap_io_4096_4096 ...passed 00:09:05.090 Test: blob_esnap_io_512_512 ...passed 00:09:05.090 Test: blob_esnap_io_4096_512 ...passed 00:09:05.090 Test: blob_esnap_io_512_4096 ...passed 00:09:05.348 Test: blob_esnap_clone_resize ...passed 00:09:05.348 Suite: blob_bs_copy_extent 00:09:05.348 Test: blob_open ...passed 00:09:05.348 Test: blob_create ...[2024-07-12 07:19:39.109171] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:09:05.348 passed 00:09:05.348 Test: blob_create_loop ...passed 00:09:05.606 Test: blob_create_fail ...[2024-07-12 07:19:39.249237] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:05.606 passed 00:09:05.606 Test: blob_create_internal ...passed 00:09:05.606 Test: blob_create_zero_extent ...passed 00:09:05.606 Test: blob_snapshot ...passed 00:09:05.864 Test: blob_clone ...passed 00:09:05.864 Test: blob_inflate ...[2024-07-12 07:19:39.540574] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
00:09:05.864 passed 00:09:05.864 Test: blob_delete ...passed 00:09:05.864 Test: blob_resize_test ...[2024-07-12 07:19:39.651247] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:09:05.864 passed 00:09:05.864 Test: blob_resize_thin_test ...passed 00:09:06.122 Test: channel_ops ...passed 00:09:06.122 Test: blob_super ...passed 00:09:06.122 Test: blob_rw_verify_iov ...passed 00:09:06.122 Test: blob_unmap ...passed 00:09:06.399 Test: blob_iter ...passed 00:09:06.399 Test: blob_parse_md ...passed 00:09:06.399 Test: bs_load_pending_removal ...passed 00:09:06.399 Test: bs_unload ...[2024-07-12 07:19:40.163304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:09:06.399 passed 00:09:06.399 Test: bs_usable_clusters ...passed 00:09:06.399 Test: blob_crc ...[2024-07-12 07:19:40.276067] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:06.399 [2024-07-12 07:19:40.276464] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:09:06.659 passed 00:09:06.659 Test: blob_flags ...passed 00:09:06.659 Test: bs_version ...passed 00:09:06.659 Test: blob_set_xattrs_test ...[2024-07-12 07:19:40.447195] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:06.659 [2024-07-12 07:19:40.447517] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:09:06.659 passed 00:09:06.917 Test: blob_thin_prov_alloc ...passed 00:09:06.917 Test: blob_insert_cluster_msg_test ...passed 00:09:06.917 Test: blob_thin_prov_rw ...passed 00:09:06.917 Test: blob_thin_prov_rle ...passed 00:09:07.175 Test: blob_thin_prov_rw_iov ...passed 00:09:07.175 Test: blob_snapshot_rw ...passed 00:09:07.175 Test: blob_snapshot_rw_iov ...passed 00:09:07.434 Test: blob_inflate_rw ...passed 00:09:07.434 Test: blob_snapshot_freeze_io ...passed 00:09:07.693 Test: blob_operation_split_rw ...passed 00:09:07.951 Test: blob_operation_split_rw_iov ...passed 00:09:07.951 Test: blob_simultaneous_operations ...[2024-07-12 07:19:41.628534] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:07.951 [2024-07-12 07:19:41.628874] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:07.951 [2024-07-12 07:19:41.629513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:07.951 [2024-07-12 07:19:41.629680] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:07.951 [2024-07-12 07:19:41.632946] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:07.951 [2024-07-12 07:19:41.633108] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:07.951 [2024-07-12 07:19:41.633252] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:09:07.951 [2024-07-12 07:19:41.633463] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:09:07.951 passed 00:09:07.951 Test: blob_persist_test ...passed 00:09:07.951 Test: blob_decouple_snapshot ...passed 00:09:08.209 Test: blob_seek_io_unit ...passed 00:09:08.209 Test: blob_nested_freezes ...passed 00:09:08.209 Test: blob_clone_resize ...passed 00:09:08.209 Test: blob_shallow_copy ...[2024-07-12 07:19:42.029230] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:09:08.209 [2024-07-12 07:19:42.029828] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:09:08.209 [2024-07-12 07:19:42.030207] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:09:08.209 passed 00:09:08.209 Suite: blob_blob_copy_extent 00:09:08.467 Test: blob_write ...passed 00:09:08.467 Test: blob_read ...passed 00:09:08.467 Test: blob_rw_verify ...passed 00:09:08.467 Test: blob_rw_verify_iov_nomem ...passed 00:09:08.467 Test: blob_rw_iov_read_only ...passed 00:09:08.724 Test: blob_xattr ...passed 00:09:08.724 Test: blob_dirty_shutdown ...passed 00:09:08.724 Test: blob_is_degraded ...passed 00:09:08.724 Suite: blob_esnap_bs_copy_extent 00:09:08.724 Test: blob_esnap_create ...passed 00:09:08.983 Test: blob_esnap_thread_add_remove ...passed 00:09:08.983 Test: blob_esnap_clone_snapshot ...passed 00:09:08.983 Test: blob_esnap_clone_inflate ...passed 00:09:08.983 Test: blob_esnap_clone_decouple ...passed 00:09:09.240 Test: blob_esnap_clone_reload ...passed 00:09:09.240 Test: blob_esnap_hotplug ...passed 00:09:09.240 Test: blob_set_parent ...[2024-07-12 07:19:42.963471] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:09:09.240 [2024-07-12 07:19:42.963813] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:09:09.240 [2024-07-12 07:19:42.964057] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:09:09.240 [2024-07-12 07:19:42.964188] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:09:09.240 [2024-07-12 07:19:42.964814] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:09:09.240 passed 00:09:09.240 Test: blob_set_external_parent ...[2024-07-12 07:19:43.022505] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:09:09.240 [2024-07-12 07:19:43.022858] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:09:09.240 [2024-07-12 07:19:43.022979] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:09:09.240 [2024-07-12 07:19:43.023479] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:09:09.240 passed 00:09:09.240 00:09:09.240 Run Summary: Type Total Ran Passed Failed Inactive 00:09:09.240 suites 16 16 n/a 0 0 00:09:09.240 tests 376 376 376 0 0 00:09:09.240 asserts 143965 143965 143965 0 n/a 00:09:09.240 00:09:09.240 Elapsed time = 21.505 seconds 00:09:09.498 07:19:43 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:09:09.498 00:09:09.498 00:09:09.498 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.498 http://cunit.sourceforge.net/ 00:09:09.498 00:09:09.498 00:09:09.498 Suite: blob_bdev 00:09:09.498 Test: create_bs_dev ...passed 00:09:09.498 Test: create_bs_dev_ro ...[2024-07-12 07:19:43.175126] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:09:09.498 passed 00:09:09.498 Test: create_bs_dev_rw ...passed 00:09:09.498 Test: claim_bs_dev ...[2024-07-12 07:19:43.176158] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:09:09.498 passed 00:09:09.498 Test: claim_bs_dev_ro ...passed 00:09:09.498 Test: deferred_destroy_refs ...passed 00:09:09.498 Test: deferred_destroy_channels ...passed 00:09:09.498 Test: deferred_destroy_threads ...passed 00:09:09.498 00:09:09.498 Run Summary: Type Total Ran Passed Failed Inactive 00:09:09.498 suites 1 1 n/a 0 0 00:09:09.498 tests 8 8 8 0 0 00:09:09.498 asserts 119 119 119 0 n/a 00:09:09.498 00:09:09.498 Elapsed time = 0.002 seconds 00:09:09.498 07:19:43 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:09:09.498 00:09:09.498 00:09:09.498 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.498 http://cunit.sourceforge.net/ 00:09:09.498 00:09:09.498 00:09:09.498 Suite: tree 00:09:09.498 Test: blobfs_tree_op_test ...passed 00:09:09.498 00:09:09.498 Run Summary: Type Total Ran Passed Failed Inactive 00:09:09.498 suites 1 1 n/a 0 0 00:09:09.498 tests 1 1 1 0 0 00:09:09.498 asserts 27 27 27 0 n/a 00:09:09.498 00:09:09.498 Elapsed time = 0.000 seconds 00:09:09.498 07:19:43 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:09:09.498 00:09:09.498 00:09:09.498 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.498 http://cunit.sourceforge.net/ 00:09:09.498 00:09:09.498 00:09:09.498 Suite: blobfs_async_ut 00:09:09.498 Test: fs_init ...passed 00:09:09.498 Test: fs_open ...passed 00:09:09.498 Test: fs_create ...passed 00:09:09.755 Test: fs_truncate ...passed 00:09:09.755 Test: fs_rename ...[2024-07-12 07:19:43.430861] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:09:09.755 passed 00:09:09.755 Test: fs_rw_async ...passed 00:09:09.755 Test: fs_writev_readv_async ...passed 00:09:09.755 Test: tree_find_buffer_ut ...passed 00:09:09.755 Test: channel_ops ...passed 00:09:09.755 Test: channel_ops_sync ...passed 00:09:09.755 00:09:09.755 Run Summary: Type Total Ran Passed Failed Inactive 00:09:09.755 suites 1 1 n/a 0 0 00:09:09.755 tests 10 10 10 0 0 00:09:09.755 asserts 292 292 292 0 n/a 00:09:09.755 00:09:09.755 Elapsed time = 0.240 seconds 00:09:09.755 07:19:43 unittest.unittest_blob_blobfs -- 
unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:09:09.755 00:09:09.755 00:09:09.755 CUnit - A unit testing framework for C - Version 2.1-3 00:09:09.755 http://cunit.sourceforge.net/ 00:09:09.755 00:09:09.755 00:09:09.755 Suite: blobfs_sync_ut 00:09:10.013 Test: cache_read_after_write ...[2024-07-12 07:19:43.684918] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:09:10.013 passed 00:09:10.013 Test: file_length ...passed 00:09:10.013 Test: append_write_to_extend_blob ...passed 00:09:10.013 Test: partial_buffer ...passed 00:09:10.013 Test: cache_write_null_buffer ...passed 00:09:10.013 Test: fs_create_sync ...passed 00:09:10.013 Test: fs_rename_sync ...passed 00:09:10.013 Test: cache_append_no_cache ...passed 00:09:10.013 Test: fs_delete_file_without_close ...passed 00:09:10.013 00:09:10.013 Run Summary: Type Total Ran Passed Failed Inactive 00:09:10.013 suites 1 1 n/a 0 0 00:09:10.013 tests 9 9 9 0 0 00:09:10.013 asserts 345 345 345 0 n/a 00:09:10.013 00:09:10.013 Elapsed time = 0.532 seconds 00:09:10.271 07:19:43 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:09:10.271 00:09:10.271 00:09:10.271 CUnit - A unit testing framework for C - Version 2.1-3 00:09:10.271 http://cunit.sourceforge.net/ 00:09:10.271 00:09:10.271 00:09:10.271 Suite: blobfs_bdev_ut 00:09:10.271 Test: spdk_blobfs_bdev_detect_test ...[2024-07-12 07:19:43.953424] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:09:10.271 passed 00:09:10.271 Test: spdk_blobfs_bdev_create_test ...[2024-07-12 07:19:43.954209] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:09:10.271 passed 00:09:10.271 Test: spdk_blobfs_bdev_mount_test ...passed 00:09:10.271 00:09:10.271 Run Summary: Type Total Ran Passed Failed Inactive 00:09:10.271 suites 1 1 n/a 0 0 00:09:10.271 tests 3 3 3 0 0 00:09:10.271 asserts 9 9 9 0 n/a 00:09:10.271 00:09:10.271 Elapsed time = 0.001 seconds 00:09:10.271 00:09:10.271 real 0m22.620s 00:09:10.271 user 0m21.682s 00:09:10.271 sys 0m1.080s 00:09:10.271 07:19:43 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:10.271 07:19:43 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:09:10.271 ************************************ 00:09:10.271 END TEST unittest_blob_blobfs 00:09:10.271 ************************************ 00:09:10.271 07:19:44 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:09:10.271 07:19:44 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:10.271 07:19:44 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:10.271 07:19:44 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:10.271 ************************************ 00:09:10.271 START TEST unittest_event 00:09:10.271 ************************************ 00:09:10.271 07:19:44 unittest.unittest_event -- common/autotest_common.sh@1121 -- # unittest_event 00:09:10.271 07:19:44 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:09:10.271 00:09:10.271 00:09:10.271 CUnit - A unit testing framework for C - Version 2.1-3 
00:09:10.271 http://cunit.sourceforge.net/ 00:09:10.271 00:09:10.271 00:09:10.271 Suite: app_suite 00:09:10.271 Test: test_spdk_app_parse_args ...app_ut [options] 00:09:10.271 app_ut: invalid option -- 'z' 00:09:10.271 00:09:10.271 CPU options: 00:09:10.271 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:10.271 (like [0,1,10]) 00:09:10.271 --lcores lcore to CPU mapping list. The list is in the format: 00:09:10.271 [<,lcores[@CPUs]>...] 00:09:10.271 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:10.271 Within the group, '-' is used for range separator, 00:09:10.271 ',' is used for single number separator. 00:09:10.271 '( )' can be omitted for single element group, 00:09:10.271 '@' can be omitted if cpus and lcores have the same value 00:09:10.271 --disable-cpumask-locks Disable CPU core lock files. 00:09:10.271 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:10.271 pollers in the app support interrupt mode) 00:09:10.271 -p, --main-core main (primary) core for DPDK 00:09:10.271 00:09:10.271 Configuration options: 00:09:10.271 -c, --config, --json JSON config file 00:09:10.271 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:10.271 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:09:10.271 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:10.271 --rpcs-allowed comma-separated list of permitted RPCS 00:09:10.271 --json-ignore-init-errors don't exit on invalid config entry 00:09:10.271 00:09:10.271 Memory options: 00:09:10.271 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:10.271 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:10.271 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:10.271 -R, --huge-unlink unlink huge files after initialization 00:09:10.271 -n, --mem-channels number of memory channels used for DPDK 00:09:10.271 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:10.271 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:10.271 --no-huge run without using hugepages 00:09:10.271 -i, --shm-id shared memory ID (optional) 00:09:10.271 -g, --single-file-segments force creating just one hugetlbfs file 00:09:10.271 00:09:10.271 PCI options: 00:09:10.271 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:10.271 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:10.271 -u, --no-pci disable PCI access 00:09:10.271 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:10.271 00:09:10.271 Log options: 00:09:10.271 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:09:10.271 --silence-noticelog disable notice level logging to stderr 00:09:10.271 00:09:10.271 Trace options: 00:09:10.271 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:10.271 setting 0 to disable trace (default 32768) 00:09:10.271 Tracepoints vary in size and can use more than one trace entry. 00:09:10.271 -e, --tpoint-group [:] 00:09:10.271 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:09:10.271 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:10.271 a tracepoint group. First tpoint inside a group can be enabled by 00:09:10.271 setting tpoint_mask to 1 (e.g. bdev:0x1). 
Groups and masks can be 00:09:10.271 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:09:10.271 in /include/spdk_internal/trace_defs.h 00:09:10.271 00:09:10.271 Other options: 00:09:10.271 -h, --help show this usage 00:09:10.271 -v, --version print SPDK version 00:09:10.271 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:10.271 --env-context Opaque context for use of the env implementation 
00:09:10.271 app_ut: unrecognized option '--test-long-opt' 00:09:10.271 app_ut [options] [full usage text repeated, identical to the dump above] 
00:09:10.272 app_ut [options] [full usage text repeated, identical to the dump above] 
00:09:10.272 passed 00:09:10.272 00:09:10.272 [2024-07-12 07:19:44.073853] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1192:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
00:09:10.272 [2024-07-12 07:19:44.074352] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1373:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:09:10.272 [2024-07-12 07:19:44.074708] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1278:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:09:10.272 Run Summary: Type Total Ran Passed Failed Inactive 00:09:10.272 suites 1 1 n/a 0 0 00:09:10.272 tests 1 1 1 0 0 00:09:10.272 asserts 8 8 8 0 n/a 00:09:10.272 00:09:10.272 Elapsed time = 0.002 seconds 00:09:10.272 07:19:44 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:09:10.272 00:09:10.272 00:09:10.272 CUnit - A unit testing framework for C - Version 2.1-3 00:09:10.272 http://cunit.sourceforge.net/ 00:09:10.272 00:09:10.272 00:09:10.272 Suite: app_suite 00:09:10.272 Test: test_create_reactor ...passed 00:09:10.272 Test: test_init_reactors ...passed 00:09:10.272 Test: test_event_call ...passed 00:09:10.272 Test: test_schedule_thread ...passed 00:09:10.272 Test: test_reschedule_thread ...passed 00:09:10.272 Test: test_bind_thread ...passed 00:09:10.272 Test: test_for_each_reactor ...passed 00:09:10.272 Test: test_reactor_stats ...passed 00:09:10.272 Test: test_scheduler ...passed 00:09:10.272 Test: test_governor ...passed 00:09:10.272 00:09:10.272 Run Summary: Type Total Ran Passed Failed Inactive 00:09:10.272 suites 1 1 n/a 0 0 00:09:10.272 tests 10 10 10 0 0 00:09:10.272 asserts 344 344 344 0 n/a 00:09:10.272 00:09:10.272 Elapsed time = 0.015 seconds 00:09:10.272 00:09:10.272 real 0m0.113s 00:09:10.272 user 0m0.059s 00:09:10.530 sys 0m0.044s 00:09:10.530 07:19:44 unittest.unittest_event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:10.530 07:19:44 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:09:10.530 ************************************ 00:09:10.530 END TEST unittest_event 00:09:10.530 ************************************ 00:09:10.530 07:19:44 unittest -- unit/unittest.sh@235 -- # uname -s 00:09:10.530 07:19:44 unittest -- unit/unittest.sh@235 -- # '[' Linux = Linux ']' 00:09:10.530 07:19:44 unittest -- unit/unittest.sh@236 -- # run_test unittest_ftl unittest_ftl 00:09:10.530 07:19:44 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:10.530 07:19:44 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:10.530 07:19:44 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:10.530 ************************************ 00:09:10.530 START TEST unittest_ftl 00:09:10.530 ************************************ 00:09:10.530 07:19:44 unittest.unittest_ftl -- common/autotest_common.sh@1121 -- # unittest_ftl 00:09:10.530 07:19:44 unittest.unittest_ftl -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:09:10.530 00:09:10.530 00:09:10.530 CUnit - A unit testing framework for C - Version 2.1-3 00:09:10.530 http://cunit.sourceforge.net/ 00:09:10.530 00:09:10.530 00:09:10.530 Suite: ftl_band_suite 00:09:10.530 Test: test_band_block_offset_from_addr_base ...passed 00:09:10.530 Test: test_band_block_offset_from_addr_offset ...passed 00:09:10.530 Test: test_band_addr_from_block_offset ...passed 00:09:10.530 Test: test_band_set_addr ...passed 00:09:10.530 Test: test_invalidate_addr ...passed 00:09:10.787 Test: test_next_xfer_addr ...passed 00:09:10.787 00:09:10.787 Run Summary: Type Total Ran Passed Failed Inactive 00:09:10.787 suites 1 1 n/a 0 0 00:09:10.787 tests 6 6 6 0 
0 00:09:10.787 asserts 30356 30356 30356 0 n/a 00:09:10.787 00:09:10.787 Elapsed time = 0.190 seconds 00:09:10.787 07:19:44 unittest.unittest_ftl -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:09:10.787 00:09:10.787 00:09:10.787 CUnit - A unit testing framework for C - Version 2.1-3 00:09:10.787 http://cunit.sourceforge.net/ 00:09:10.787 00:09:10.787 00:09:10.787 Suite: ftl_bitmap 00:09:10.787 Test: test_ftl_bitmap_create ...[2024-07-12 07:19:44.537908] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:09:10.787 [2024-07-12 07:19:44.538525] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:09:10.787 passed 00:09:10.787 Test: test_ftl_bitmap_get ...passed 00:09:10.787 Test: test_ftl_bitmap_set ...passed 00:09:10.787 Test: test_ftl_bitmap_clear ...passed 00:09:10.787 Test: test_ftl_bitmap_find_first_set ...passed 00:09:10.787 Test: test_ftl_bitmap_find_first_clear ...passed 00:09:10.787 Test: test_ftl_bitmap_count_set ...passed 00:09:10.787 00:09:10.787 Run Summary: Type Total Ran Passed Failed Inactive 00:09:10.787 suites 1 1 n/a 0 0 00:09:10.787 tests 7 7 7 0 0 00:09:10.787 asserts 137 137 137 0 n/a 00:09:10.787 00:09:10.787 Elapsed time = 0.001 seconds 00:09:10.787 07:19:44 unittest.unittest_ftl -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:09:10.787 00:09:10.787 00:09:10.787 CUnit - A unit testing framework for C - Version 2.1-3 00:09:10.787 http://cunit.sourceforge.net/ 00:09:10.787 00:09:10.787 00:09:10.787 Suite: ftl_io_suite 00:09:10.787 Test: test_completion ...passed 00:09:10.787 Test: test_multiple_ios ...passed 00:09:10.787 00:09:10.787 Run Summary: Type Total Ran Passed Failed Inactive 00:09:10.787 suites 1 1 n/a 0 0 00:09:10.787 tests 2 2 2 0 0 00:09:10.787 asserts 47 47 47 0 n/a 00:09:10.787 00:09:10.787 Elapsed time = 0.002 seconds 00:09:10.787 07:19:44 unittest.unittest_ftl -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:09:10.787 00:09:10.787 00:09:10.787 CUnit - A unit testing framework for C - Version 2.1-3 00:09:10.787 http://cunit.sourceforge.net/ 00:09:10.787 00:09:10.787 00:09:10.787 Suite: ftl_mngt 00:09:10.787 Test: test_next_step ...passed 00:09:10.787 Test: test_continue_step ...passed 00:09:10.787 Test: test_get_func_and_step_cntx_alloc ...passed 00:09:10.787 Test: test_fail_step ...passed 00:09:10.787 Test: test_mngt_call_and_call_rollback ...passed 00:09:10.787 Test: test_nested_process_failure ...passed 00:09:10.787 Test: test_call_init_success ...passed 00:09:10.787 Test: test_call_init_failure ...passed 00:09:10.787 00:09:10.787 Run Summary: Type Total Ran Passed Failed Inactive 00:09:10.788 suites 1 1 n/a 0 0 00:09:10.788 tests 8 8 8 0 0 00:09:10.788 asserts 196 196 196 0 n/a 00:09:10.788 00:09:10.788 Elapsed time = 0.002 seconds 00:09:10.788 07:19:44 unittest.unittest_ftl -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:09:10.788 00:09:10.788 00:09:10.788 CUnit - A unit testing framework for C - Version 2.1-3 00:09:10.788 http://cunit.sourceforge.net/ 00:09:10.788 00:09:10.788 00:09:10.788 Suite: ftl_mempool 00:09:10.788 Test: test_ftl_mempool_create ...passed 00:09:10.788 Test: test_ftl_mempool_get_put ...passed 00:09:10.788 00:09:10.788 Run Summary: Type Total 
Ran Passed Failed Inactive 00:09:10.788 suites 1 1 n/a 0 0 00:09:10.788 tests 2 2 2 0 0 00:09:10.788 asserts 36 36 36 0 n/a 00:09:10.788 00:09:10.788 Elapsed time = 0.000 seconds 00:09:10.788 07:19:44 unittest.unittest_ftl -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:09:11.046 00:09:11.046 00:09:11.046 CUnit - A unit testing framework for C - Version 2.1-3 00:09:11.046 http://cunit.sourceforge.net/ 00:09:11.046 00:09:11.046 00:09:11.046 Suite: ftl_addr64_suite 00:09:11.046 Test: test_addr_cached ...passed 00:09:11.046 00:09:11.046 Run Summary: Type Total Ran Passed Failed Inactive 00:09:11.046 suites 1 1 n/a 0 0 00:09:11.046 tests 1 1 1 0 0 00:09:11.046 asserts 1536 1536 1536 0 n/a 00:09:11.046 00:09:11.046 Elapsed time = 0.000 seconds 00:09:11.046 07:19:44 unittest.unittest_ftl -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:09:11.046 00:09:11.046 00:09:11.046 CUnit - A unit testing framework for C - Version 2.1-3 00:09:11.046 http://cunit.sourceforge.net/ 00:09:11.046 00:09:11.046 00:09:11.046 Suite: ftl_sb 00:09:11.046 Test: test_sb_crc_v2 ...passed 00:09:11.046 Test: test_sb_crc_v3 ...passed 00:09:11.046 Test: test_sb_v3_md_layout ...[2024-07-12 07:19:44.721232] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:09:11.046 [2024-07-12 07:19:44.721690] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:09:11.046 [2024-07-12 07:19:44.721832] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:09:11.046 [2024-07-12 07:19:44.721956] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:09:11.046 [2024-07-12 07:19:44.722089] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:09:11.046 [2024-07-12 07:19:44.722233] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:09:11.046 [2024-07-12 07:19:44.722346] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:09:11.046 [2024-07-12 07:19:44.722447] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:09:11.046 [2024-07-12 07:19:44.722579] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:09:11.046 [2024-07-12 07:19:44.722781] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:09:11.046 [2024-07-12 07:19:44.722902] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:09:11.046 passed 00:09:11.046 Test: test_sb_v5_md_layout ...passed 00:09:11.046 00:09:11.046 Run Summary: Type Total Ran Passed Failed Inactive 00:09:11.046 suites 1 1 n/a 0 0 00:09:11.046 tests 4 4 4 0 0 00:09:11.046 asserts 160 160 
160 0 n/a 00:09:11.046 00:09:11.046 Elapsed time = 0.003 seconds 00:09:11.046 07:19:44 unittest.unittest_ftl -- unit/unittest.sh@63 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:09:11.046 00:09:11.046 00:09:11.046 CUnit - A unit testing framework for C - Version 2.1-3 00:09:11.046 http://cunit.sourceforge.net/ 00:09:11.046 00:09:11.046 00:09:11.046 Suite: ftl_layout_upgrade 00:09:11.046 Test: test_l2p_upgrade ...passed 00:09:11.046 00:09:11.046 Run Summary: Type Total Ran Passed Failed Inactive 00:09:11.046 suites 1 1 n/a 0 0 00:09:11.046 tests 1 1 1 0 0 00:09:11.046 asserts 152 152 152 0 n/a 00:09:11.046 00:09:11.046 Elapsed time = 0.001 seconds 00:09:11.046 07:19:44 unittest.unittest_ftl -- unit/unittest.sh@64 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut 00:09:11.046 00:09:11.046 00:09:11.047 CUnit - A unit testing framework for C - Version 2.1-3 00:09:11.047 http://cunit.sourceforge.net/ 00:09:11.047 00:09:11.047 00:09:11.047 Suite: ftl_p2l_suite 00:09:11.047 Test: test_p2l_num_pages ...passed 00:09:11.614 Test: test_ckpt_issue ...passed 00:09:12.550 Test: test_persist_band_p2l ...passed 00:09:12.813 Test: test_clean_restore_p2l ...passed 00:09:14.717 Test: test_dirty_restore_p2l ...passed 00:09:14.717 00:09:14.717 Run Summary: Type Total Ran Passed Failed Inactive 00:09:14.717 suites 1 1 n/a 0 0 00:09:14.717 tests 5 5 5 0 0 00:09:14.717 asserts 10020 10020 10020 0 n/a 00:09:14.717 00:09:14.717 Elapsed time = 3.369 seconds 00:09:14.717 ************************************ 00:09:14.717 END TEST unittest_ftl 00:09:14.717 ************************************ 00:09:14.717 00:09:14.717 real 0m3.993s 00:09:14.717 user 0m1.288s 00:09:14.717 sys 0m2.691s 00:09:14.717 07:19:48 unittest.unittest_ftl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:14.717 07:19:48 unittest.unittest_ftl -- common/autotest_common.sh@10 -- # set +x 00:09:14.717 07:19:48 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:09:14.717 07:19:48 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:14.717 07:19:48 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:14.717 07:19:48 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:14.717 ************************************ 00:09:14.717 START TEST unittest_accel 00:09:14.717 ************************************ 00:09:14.717 07:19:48 unittest.unittest_accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:09:14.717 00:09:14.717 00:09:14.717 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.717 http://cunit.sourceforge.net/ 00:09:14.717 00:09:14.717 00:09:14.717 Suite: accel_sequence 00:09:14.717 Test: test_sequence_fill_copy ...passed 00:09:14.717 Test: test_sequence_abort ...passed 00:09:14.717 Test: test_sequence_append_error ...passed 00:09:14.717 Test: test_sequence_completion_error ...[2024-07-12 07:19:48.311211] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1931:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f94e4d427c0 00:09:14.717 [2024-07-12 07:19:48.311850] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1931:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f94e4d427c0 00:09:14.717 [2024-07-12 07:19:48.312119] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1841:accel_process_sequence: *ERROR*: Failed to submit fill 
operation, sequence: 0x7f94e4d427c0 00:09:14.717 [2024-07-12 07:19:48.312297] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1841:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f94e4d427c0 00:09:14.717 passed 00:09:14.717 Test: test_sequence_decompress ...passed 00:09:14.717 Test: test_sequence_reverse ...passed 00:09:14.717 Test: test_sequence_copy_elision ...passed 00:09:14.717 Test: test_sequence_accel_buffers ...passed 00:09:14.717 Test: test_sequence_memory_domain ...[2024-07-12 07:19:48.328437] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1733:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:09:14.717 [2024-07-12 07:19:48.328806] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1772:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:09:14.717 passed 00:09:14.717 Test: test_sequence_module_memory_domain ...passed 00:09:14.717 Test: test_sequence_crypto ...passed 00:09:14.717 Test: test_sequence_driver ...[2024-07-12 07:19:48.338423] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1880:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f94e3e2f7c0 using driver: ut 00:09:14.717 [2024-07-12 07:19:48.338710] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1944:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f94e3e2f7c0 through driver: ut 00:09:14.717 passed 00:09:14.717 Test: test_sequence_same_iovs ...passed 00:09:14.717 Test: test_sequence_crc32 ...passed 00:09:14.717 Suite: accel 00:09:14.717 Test: test_spdk_accel_task_complete ...passed 00:09:14.717 Test: test_get_task ...passed 00:09:14.717 Test: test_spdk_accel_submit_copy ...passed 00:09:14.717 Test: test_spdk_accel_submit_dualcast ...[2024-07-12 07:19:48.346361] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 416:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:09:14.717 [2024-07-12 07:19:48.346529] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 416:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:09:14.717 passed 00:09:14.717 Test: test_spdk_accel_submit_compare ...passed 00:09:14.717 Test: test_spdk_accel_submit_fill ...passed 00:09:14.717 Test: test_spdk_accel_submit_crc32c ...passed 00:09:14.717 Test: test_spdk_accel_submit_crc32cv ...passed 00:09:14.717 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:09:14.717 Test: test_spdk_accel_submit_xor ...passed 00:09:14.717 Test: test_spdk_accel_module_find_by_name ...passed 00:09:14.717 Test: test_spdk_accel_module_register ...passed 00:09:14.717 00:09:14.717 Run Summary: Type Total Ran Passed Failed Inactive 00:09:14.717 suites 2 2 n/a 0 0 00:09:14.717 tests 26 26 26 0 0 00:09:14.717 asserts 830 830 830 0 n/a 00:09:14.717 00:09:14.717 Elapsed time = 0.047 seconds 00:09:14.717 00:09:14.717 real 0m0.109s 00:09:14.717 user 0m0.050s 00:09:14.717 sys 0m0.054s 00:09:14.717 07:19:48 unittest.unittest_accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:14.717 07:19:48 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:09:14.717 ************************************ 00:09:14.717 END TEST unittest_accel 00:09:14.717 ************************************ 00:09:14.717 07:19:48 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:09:14.717 07:19:48 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:14.717 07:19:48 unittest -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:09:14.717 07:19:48 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:14.717 ************************************ 00:09:14.717 START TEST unittest_ioat 00:09:14.717 ************************************ 00:09:14.717 07:19:48 unittest.unittest_ioat -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:09:14.717 00:09:14.717 00:09:14.717 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.717 http://cunit.sourceforge.net/ 00:09:14.717 00:09:14.717 00:09:14.717 Suite: ioat 00:09:14.717 Test: ioat_state_check ...passed 00:09:14.717 00:09:14.717 Run Summary: Type Total Ran Passed Failed Inactive 00:09:14.717 suites 1 1 n/a 0 0 00:09:14.717 tests 1 1 1 0 0 00:09:14.718 asserts 32 32 32 0 n/a 00:09:14.718 00:09:14.718 Elapsed time = 0.000 seconds 00:09:14.718 00:09:14.718 real 0m0.039s 00:09:14.718 user 0m0.017s 00:09:14.718 sys 0m0.021s 00:09:14.718 07:19:48 unittest.unittest_ioat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:14.718 07:19:48 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:09:14.718 ************************************ 00:09:14.718 END TEST unittest_ioat 00:09:14.718 ************************************ 00:09:14.718 07:19:48 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:14.718 07:19:48 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:09:14.718 07:19:48 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:14.718 07:19:48 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:14.718 07:19:48 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:14.718 ************************************ 00:09:14.718 START TEST unittest_idxd_user 00:09:14.718 ************************************ 00:09:14.718 07:19:48 unittest.unittest_idxd_user -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:09:14.718 00:09:14.718 00:09:14.718 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.718 http://cunit.sourceforge.net/ 00:09:14.718 00:09:14.718 00:09:14.718 Suite: idxd_user 00:09:14.718 Test: test_idxd_wait_cmd ...[2024-07-12 07:19:48.566528] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:09:14.718 [2024-07-12 07:19:48.567589] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:09:14.718 passed 00:09:14.718 Test: test_idxd_reset_dev ...[2024-07-12 07:19:48.568237] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:09:14.718 [2024-07-12 07:19:48.568496] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:09:14.718 passed 00:09:14.718 Test: test_idxd_group_config ...passed 00:09:14.718 Test: test_idxd_wq_config ...passed 00:09:14.718 00:09:14.718 Run Summary: Type Total Ran Passed Failed Inactive 00:09:14.718 suites 1 1 n/a 0 0 00:09:14.718 tests 4 4 4 0 0 00:09:14.718 asserts 20 20 20 0 n/a 00:09:14.718 00:09:14.718 Elapsed time = 0.001 seconds 00:09:14.718 00:09:14.718 real 0m0.039s 00:09:14.718 user 0m0.014s 00:09:14.718 sys 0m0.023s 00:09:14.718 07:19:48 unittest.unittest_idxd_user -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:09:14.718 07:19:48 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:09:14.718 ************************************ 00:09:14.718 END TEST unittest_idxd_user 00:09:14.718 ************************************ 00:09:14.977 07:19:48 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:09:14.977 07:19:48 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:14.977 07:19:48 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:14.977 07:19:48 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:14.977 ************************************ 00:09:14.977 START TEST unittest_iscsi 00:09:14.977 ************************************ 00:09:14.977 07:19:48 unittest.unittest_iscsi -- common/autotest_common.sh@1121 -- # unittest_iscsi 00:09:14.977 07:19:48 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:09:14.977 00:09:14.977 00:09:14.977 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.977 http://cunit.sourceforge.net/ 00:09:14.977 00:09:14.977 00:09:14.977 Suite: conn_suite 00:09:14.977 Test: read_task_split_in_order_case ...passed 00:09:14.977 Test: read_task_split_reverse_order_case ...passed 00:09:14.977 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:09:14.977 Test: process_non_read_task_completion_test ...passed 00:09:14.977 Test: free_tasks_on_connection ...passed 00:09:14.977 Test: free_tasks_with_queued_datain ...passed 00:09:14.977 Test: abort_queued_datain_task_test ...passed 00:09:14.977 Test: abort_queued_datain_tasks_test ...passed 00:09:14.977 00:09:14.977 Run Summary: Type Total Ran Passed Failed Inactive 00:09:14.977 suites 1 1 n/a 0 0 00:09:14.977 tests 8 8 8 0 0 00:09:14.977 asserts 230 230 230 0 n/a 00:09:14.977 00:09:14.977 Elapsed time = 0.001 seconds 00:09:14.977 07:19:48 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:09:14.977 00:09:14.977 00:09:14.977 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.977 http://cunit.sourceforge.net/ 00:09:14.977 00:09:14.977 00:09:14.977 Suite: iscsi_suite 00:09:14.977 Test: param_negotiation_test ...passed 00:09:14.977 Test: list_negotiation_test ...passed 00:09:14.977 Test: parse_valid_test ...passed 00:09:14.977 Test: parse_invalid_test ...[2024-07-12 07:19:48.734749] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:09:14.978 [2024-07-12 07:19:48.735260] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:09:14.978 [2024-07-12 07:19:48.735460] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:09:14.978 [2024-07-12 07:19:48.735674] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:09:14.978 [2024-07-12 07:19:48.735983] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:09:14.978 [2024-07-12 07:19:48.736188] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:09:14.978 [2024-07-12 07:19:48.736487] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:09:14.978 passed 00:09:14.978 00:09:14.978 Run Summary: Type Total Ran Passed Failed Inactive 00:09:14.978 suites 1 1 n/a 0 0 
00:09:14.978 tests 4 4 4 0 0 00:09:14.978 asserts 161 161 161 0 n/a 00:09:14.978 00:09:14.978 Elapsed time = 0.007 seconds 00:09:14.978 07:19:48 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:09:14.978 00:09:14.978 00:09:14.978 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.978 http://cunit.sourceforge.net/ 00:09:14.978 00:09:14.978 00:09:14.978 Suite: iscsi_target_node_suite 00:09:14.978 Test: add_lun_test_cases ...[2024-07-12 07:19:48.778418] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1252:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:09:14.978 [2024-07-12 07:19:48.778908] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:09:14.978 [2024-07-12 07:19:48.779144] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:09:14.978 [2024-07-12 07:19:48.779283] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:09:14.978 [2024-07-12 07:19:48.779360] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:09:14.978 passed 00:09:14.978 Test: allow_any_allowed ...passed 00:09:14.978 Test: allow_ipv6_allowed ...passed 00:09:14.978 Test: allow_ipv6_denied ...passed 00:09:14.978 Test: allow_ipv6_invalid ...passed 00:09:14.978 Test: allow_ipv4_allowed ...passed 00:09:14.978 Test: allow_ipv4_denied ...passed 00:09:14.978 Test: allow_ipv4_invalid ...passed 00:09:14.978 Test: node_access_allowed ...passed 00:09:14.978 Test: node_access_denied_by_empty_netmask ...passed 00:09:14.978 Test: node_access_multi_initiator_groups_cases ...passed 00:09:14.978 Test: allow_iscsi_name_multi_maps_case ...passed 00:09:14.978 Test: chap_param_test_cases ...[2024-07-12 07:19:48.781539] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:09:14.978 [2024-07-12 07:19:48.781673] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:09:14.978 [2024-07-12 07:19:48.781796] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:09:14.978 [2024-07-12 07:19:48.781902] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:09:14.978 [2024-07-12 07:19:48.782040] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:09:14.978 passed 00:09:14.978 00:09:14.978 Run Summary: Type Total Ran Passed Failed Inactive 00:09:14.978 suites 1 1 n/a 0 0 00:09:14.978 tests 13 13 13 0 0 00:09:14.978 asserts 50 50 50 0 n/a 00:09:14.978 00:09:14.978 Elapsed time = 0.002 seconds 00:09:14.978 07:19:48 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:09:14.978 00:09:14.978 00:09:14.978 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.978 http://cunit.sourceforge.net/ 00:09:14.978 00:09:14.978 00:09:14.978 Suite: iscsi_suite 00:09:14.978 Test: op_login_check_target_test ...[2024-07-12 07:19:48.834593] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:09:14.978 passed 00:09:14.978 Test: op_login_session_normal_test ...[2024-07-12 07:19:48.835296] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:09:14.978 [2024-07-12 07:19:48.835478] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:09:14.978 [2024-07-12 07:19:48.835675] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:09:14.978 [2024-07-12 07:19:48.835895] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:09:14.978 [2024-07-12 07:19:48.836129] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:09:14.978 [2024-07-12 07:19:48.836380] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:09:14.978 [2024-07-12 07:19:48.836578] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:09:14.978 passed 00:09:14.978 Test: maxburstlength_test ...[2024-07-12 07:19:48.837197] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:09:14.978 [2024-07-12 07:19:48.837630] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4554:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:09:14.978 passed 00:09:14.978 Test: underflow_for_read_transfer_test ...passed 00:09:14.978 Test: underflow_for_zero_read_transfer_test ...passed 00:09:14.978 Test: underflow_for_request_sense_test ...passed 00:09:14.978 Test: underflow_for_check_condition_test ...passed 00:09:14.978 Test: add_transfer_task_test ...passed 00:09:14.978 Test: get_transfer_task_test ...passed 00:09:14.978 Test: del_transfer_task_test ...passed 00:09:14.978 Test: clear_all_transfer_tasks_test ...passed 00:09:14.978 Test: build_iovs_test ...passed 00:09:14.978 Test: build_iovs_with_md_test ...passed 00:09:14.978 Test: pdu_hdr_op_login_test ...[2024-07-12 07:19:48.845030] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:09:14.978 [2024-07-12 07:19:48.845535] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:09:14.978 [2024-07-12 07:19:48.845958] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:09:14.978 passed 00:09:14.978 Test: pdu_hdr_op_text_test ...[2024-07-12 07:19:48.846732] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2246:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:09:14.978 [2024-07-12 07:19:48.847145] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2278:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:09:14.978 [2024-07-12 07:19:48.847496] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2291:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 
00:09:14.978 passed 00:09:14.978 Test: pdu_hdr_op_logout_test ...[2024-07-12 07:19:48.848230] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2521:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:09:14.978 passed 00:09:14.978 Test: pdu_hdr_op_scsi_test ...[2024-07-12 07:19:48.849106] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3342:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:09:14.978 [2024-07-12 07:19:48.849474] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3342:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:09:14.978 [2024-07-12 07:19:48.849793] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3370:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:09:14.978 [2024-07-12 07:19:48.849935] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3403:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:09:14.978 [2024-07-12 07:19:48.850076] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3410:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:09:14.978 [2024-07-12 07:19:48.850308] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3434:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:09:14.978 passed 00:09:14.978 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-12 07:19:48.850566] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3611:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:09:14.978 [2024-07-12 07:19:48.850677] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3700:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:09:14.978 passed 00:09:14.978 Test: pdu_hdr_op_nopout_test ...[2024-07-12 07:19:48.850955] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3719:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:09:14.978 [2024-07-12 07:19:48.851088] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3741:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:09:14.978 [2024-07-12 07:19:48.851178] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3741:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:09:14.978 [2024-07-12 07:19:48.851228] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3749:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:09:14.978 passed 00:09:14.978 Test: pdu_hdr_op_data_test ...[2024-07-12 07:19:48.851471] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4192:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:09:14.978 [2024-07-12 07:19:48.851577] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4209:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:09:14.978 [2024-07-12 07:19:48.851658] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:09:14.978 [2024-07-12 07:19:48.851763] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:09:14.978 [2024-07-12 07:19:48.851905] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4228:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:09:14.978 [2024-07-12 07:19:48.852049] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4239:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:09:14.978 
[2024-07-12 07:19:48.852110] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4249:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:09:14.978 passed 00:09:14.978 Test: empty_text_with_cbit_test ...passed 00:09:14.978 Test: pdu_payload_read_test ...[2024-07-12 07:19:48.853943] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4637:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:09:14.978 passed 00:09:14.978 Test: data_out_pdu_sequence_test ...passed 00:09:15.237 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:09:15.237 00:09:15.237 Run Summary: Type Total Ran Passed Failed Inactive 00:09:15.237 suites 1 1 n/a 0 0 00:09:15.237 tests 24 24 24 0 0 00:09:15.237 asserts 150253 150253 150253 0 n/a 00:09:15.237 00:09:15.237 Elapsed time = 0.018 seconds 00:09:15.237 07:19:48 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:09:15.237 00:09:15.237 00:09:15.237 CUnit - A unit testing framework for C - Version 2.1-3 00:09:15.237 http://cunit.sourceforge.net/ 00:09:15.237 00:09:15.237 00:09:15.237 Suite: init_grp_suite 00:09:15.237 Test: create_initiator_group_success_case ...passed 00:09:15.237 Test: find_initiator_group_success_case ...passed 00:09:15.237 Test: register_initiator_group_twice_case ...passed 00:09:15.237 Test: add_initiator_name_success_case ...passed 00:09:15.237 Test: add_initiator_name_fail_case ...[2024-07-12 07:19:48.903618] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:09:15.237 passed 00:09:15.237 Test: delete_all_initiator_names_success_case ...passed 00:09:15.237 Test: add_netmask_success_case ...passed 00:09:15.237 Test: add_netmask_fail_case ...[2024-07-12 07:19:48.905029] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:09:15.237 passed 00:09:15.237 Test: delete_all_netmasks_success_case ...passed 00:09:15.237 Test: initiator_name_overwrite_all_to_any_case ...passed 00:09:15.237 Test: netmask_overwrite_all_to_any_case ...passed 00:09:15.237 Test: add_delete_initiator_names_case ...passed 00:09:15.237 Test: add_duplicated_initiator_names_case ...passed 00:09:15.237 Test: delete_nonexisting_initiator_names_case ...passed 00:09:15.237 Test: add_delete_netmasks_case ...passed 00:09:15.237 Test: add_duplicated_netmasks_case ...passed 00:09:15.237 Test: delete_nonexisting_netmasks_case ...passed 00:09:15.237 00:09:15.237 Run Summary: Type Total Ran Passed Failed Inactive 00:09:15.237 suites 1 1 n/a 0 0 00:09:15.237 tests 17 17 17 0 0 00:09:15.237 asserts 108 108 108 0 n/a 00:09:15.237 00:09:15.237 Elapsed time = 0.002 seconds 00:09:15.237 07:19:48 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:09:15.237 00:09:15.237 00:09:15.237 CUnit - A unit testing framework for C - Version 2.1-3 00:09:15.237 http://cunit.sourceforge.net/ 00:09:15.237 00:09:15.237 00:09:15.237 Suite: portal_grp_suite 00:09:15.237 Test: portal_create_ipv4_normal_case ...passed 00:09:15.237 Test: portal_create_ipv6_normal_case ...passed 00:09:15.237 Test: portal_create_ipv4_wildcard_case ...passed 00:09:15.237 Test: portal_create_ipv6_wildcard_case ...passed 00:09:15.237 Test: portal_create_twice_case ...[2024-07-12 07:19:48.949900] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal 
(192.168.2.0, 3260) already exists 00:09:15.237 passed 00:09:15.237 Test: portal_grp_register_unregister_case ...passed 00:09:15.238 Test: portal_grp_register_twice_case ...passed 00:09:15.238 Test: portal_grp_add_delete_case ...passed 00:09:15.238 Test: portal_grp_add_delete_twice_case ...passed 00:09:15.238 00:09:15.238 Run Summary: Type Total Ran Passed Failed Inactive 00:09:15.238 suites 1 1 n/a 0 0 00:09:15.238 tests 9 9 9 0 0 00:09:15.238 asserts 44 44 44 0 n/a 00:09:15.238 00:09:15.238 Elapsed time = 0.004 seconds 00:09:15.238 00:09:15.238 real 0m0.323s 00:09:15.238 user 0m0.125s 00:09:15.238 sys 0m0.179s 00:09:15.238 07:19:48 unittest.unittest_iscsi -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:15.238 07:19:48 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:09:15.238 ************************************ 00:09:15.238 END TEST unittest_iscsi 00:09:15.238 ************************************ 00:09:15.238 07:19:49 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:09:15.238 07:19:49 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:15.238 07:19:49 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:15.238 07:19:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:15.238 ************************************ 00:09:15.238 START TEST unittest_json 00:09:15.238 ************************************ 00:09:15.238 07:19:49 unittest.unittest_json -- common/autotest_common.sh@1121 -- # unittest_json 00:09:15.238 07:19:49 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:09:15.238 00:09:15.238 00:09:15.238 CUnit - A unit testing framework for C - Version 2.1-3 00:09:15.238 http://cunit.sourceforge.net/ 00:09:15.238 00:09:15.238 00:09:15.238 Suite: json 00:09:15.238 Test: test_parse_literal ...passed 00:09:15.238 Test: test_parse_string_simple ...passed 00:09:15.238 Test: test_parse_string_control_chars ...passed 00:09:15.238 Test: test_parse_string_utf8 ...passed 00:09:15.238 Test: test_parse_string_escapes_twochar ...passed 00:09:15.238 Test: test_parse_string_escapes_unicode ...passed 00:09:15.238 Test: test_parse_number ...passed 00:09:15.238 Test: test_parse_array ...passed 00:09:15.238 Test: test_parse_object ...passed 00:09:15.238 Test: test_parse_nesting ...passed 00:09:15.238 Test: test_parse_comment ...passed 00:09:15.238 00:09:15.238 Run Summary: Type Total Ran Passed Failed Inactive 00:09:15.238 suites 1 1 n/a 0 0 00:09:15.238 tests 11 11 11 0 0 00:09:15.238 asserts 1516 1516 1516 0 n/a 00:09:15.238 00:09:15.238 Elapsed time = 0.002 seconds 00:09:15.238 07:19:49 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:09:15.238 00:09:15.238 00:09:15.238 CUnit - A unit testing framework for C - Version 2.1-3 00:09:15.238 http://cunit.sourceforge.net/ 00:09:15.238 00:09:15.238 00:09:15.238 Suite: json 00:09:15.238 Test: test_strequal ...passed 00:09:15.238 Test: test_num_to_uint16 ...passed 00:09:15.238 Test: test_num_to_int32 ...passed 00:09:15.238 Test: test_num_to_uint64 ...passed 00:09:15.238 Test: test_decode_object ...passed 00:09:15.238 Test: test_decode_array ...passed 00:09:15.238 Test: test_decode_bool ...passed 00:09:15.238 Test: test_decode_uint16 ...passed 00:09:15.238 Test: test_decode_int32 ...passed 00:09:15.238 Test: test_decode_uint32 ...passed 00:09:15.238 Test: test_decode_uint64 ...passed 00:09:15.238 Test: 
test_decode_string ...passed 00:09:15.238 Test: test_decode_uuid ...passed 00:09:15.238 Test: test_find ...passed 00:09:15.238 Test: test_find_array ...passed 00:09:15.238 Test: test_iterating ...passed 00:09:15.238 Test: test_free_object ...passed 00:09:15.238 00:09:15.238 Run Summary: Type Total Ran Passed Failed Inactive 00:09:15.238 suites 1 1 n/a 0 0 00:09:15.238 tests 17 17 17 0 0 00:09:15.238 asserts 236 236 236 0 n/a 00:09:15.238 00:09:15.238 Elapsed time = 0.001 seconds 00:09:15.496 07:19:49 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:09:15.496 00:09:15.496 00:09:15.496 CUnit - A unit testing framework for C - Version 2.1-3 00:09:15.496 http://cunit.sourceforge.net/ 00:09:15.496 00:09:15.496 00:09:15.496 Suite: json 00:09:15.496 Test: test_write_literal ...passed 00:09:15.496 Test: test_write_string_simple ...passed 00:09:15.497 Test: test_write_string_escapes ...passed 00:09:15.497 Test: test_write_string_utf16le ...passed 00:09:15.497 Test: test_write_number_int32 ...passed 00:09:15.497 Test: test_write_number_uint32 ...passed 00:09:15.497 Test: test_write_number_uint128 ...passed 00:09:15.497 Test: test_write_string_number_uint128 ...passed 00:09:15.497 Test: test_write_number_int64 ...passed 00:09:15.497 Test: test_write_number_uint64 ...passed 00:09:15.497 Test: test_write_number_double ...passed 00:09:15.497 Test: test_write_uuid ...passed 00:09:15.497 Test: test_write_array ...passed 00:09:15.497 Test: test_write_object ...passed 00:09:15.497 Test: test_write_nesting ...passed 00:09:15.497 Test: test_write_val ...passed 00:09:15.497 00:09:15.497 Run Summary: Type Total Ran Passed Failed Inactive 00:09:15.497 suites 1 1 n/a 0 0 00:09:15.497 tests 16 16 16 0 0 00:09:15.497 asserts 918 918 918 0 n/a 00:09:15.497 00:09:15.497 Elapsed time = 0.006 seconds 00:09:15.497 07:19:49 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:09:15.497 00:09:15.497 00:09:15.497 CUnit - A unit testing framework for C - Version 2.1-3 00:09:15.497 http://cunit.sourceforge.net/ 00:09:15.497 00:09:15.497 00:09:15.497 Suite: jsonrpc 00:09:15.497 Test: test_parse_request ...passed 00:09:15.497 Test: test_parse_request_streaming ...passed 00:09:15.497 00:09:15.497 Run Summary: Type Total Ran Passed Failed Inactive 00:09:15.497 suites 1 1 n/a 0 0 00:09:15.497 tests 2 2 2 0 0 00:09:15.497 asserts 289 289 289 0 n/a 00:09:15.497 00:09:15.497 Elapsed time = 0.004 seconds 00:09:15.497 00:09:15.497 real 0m0.164s 00:09:15.497 user 0m0.072s 00:09:15.497 sys 0m0.086s 00:09:15.497 07:19:49 unittest.unittest_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:15.497 07:19:49 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:09:15.497 ************************************ 00:09:15.497 END TEST unittest_json 00:09:15.497 ************************************ 00:09:15.497 07:19:49 unittest -- unit/unittest.sh@246 -- # run_test unittest_rpc unittest_rpc 00:09:15.497 07:19:49 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:15.497 07:19:49 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:15.497 07:19:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:15.497 ************************************ 00:09:15.497 START TEST unittest_rpc 00:09:15.497 ************************************ 00:09:15.497 07:19:49 unittest.unittest_rpc -- common/autotest_common.sh@1121 -- # 
unittest_rpc 00:09:15.497 07:19:49 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:09:15.497 00:09:15.497 00:09:15.497 CUnit - A unit testing framework for C - Version 2.1-3 00:09:15.497 http://cunit.sourceforge.net/ 00:09:15.497 00:09:15.497 00:09:15.497 Suite: rpc 00:09:15.497 Test: test_jsonrpc_handler ...passed 00:09:15.497 Test: test_spdk_rpc_is_method_allowed ...passed 00:09:15.497 Test: test_rpc_get_methods ...[2024-07-12 07:19:49.290536] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:09:15.497 passed 00:09:15.497 Test: test_rpc_spdk_get_version ...passed 00:09:15.497 Test: test_spdk_rpc_listen_close ...passed 00:09:15.497 Test: test_rpc_run_multiple_servers ...passed 00:09:15.497 00:09:15.497 Run Summary: Type Total Ran Passed Failed Inactive 00:09:15.497 suites 1 1 n/a 0 0 00:09:15.497 tests 6 6 6 0 0 00:09:15.497 asserts 23 23 23 0 n/a 00:09:15.497 00:09:15.497 Elapsed time = 0.001 seconds 00:09:15.497 00:09:15.497 real 0m0.042s 00:09:15.497 user 0m0.013s 00:09:15.497 sys 0m0.028s 00:09:15.497 07:19:49 unittest.unittest_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:15.497 07:19:49 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.497 ************************************ 00:09:15.497 END TEST unittest_rpc 00:09:15.497 ************************************ 00:09:15.497 07:19:49 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:09:15.497 07:19:49 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:15.497 07:19:49 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:15.497 07:19:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:15.497 ************************************ 00:09:15.497 START TEST unittest_notify 00:09:15.497 ************************************ 00:09:15.497 07:19:49 unittest.unittest_notify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:09:15.756 00:09:15.756 00:09:15.756 CUnit - A unit testing framework for C - Version 2.1-3 00:09:15.756 http://cunit.sourceforge.net/ 00:09:15.756 00:09:15.756 00:09:15.756 Suite: app_suite 00:09:15.756 Test: notify ...passed 00:09:15.756 00:09:15.756 Run Summary: Type Total Ran Passed Failed Inactive 00:09:15.756 suites 1 1 n/a 0 0 00:09:15.756 tests 1 1 1 0 0 00:09:15.756 asserts 13 13 13 0 n/a 00:09:15.756 00:09:15.756 Elapsed time = 0.000 seconds 00:09:15.756 00:09:15.756 real 0m0.036s 00:09:15.756 user 0m0.018s 00:09:15.756 sys 0m0.018s 00:09:15.756 07:19:49 unittest.unittest_notify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:15.756 07:19:49 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:09:15.756 ************************************ 00:09:15.756 END TEST unittest_notify 00:09:15.756 ************************************ 00:09:15.756 07:19:49 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:09:15.756 07:19:49 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:15.756 07:19:49 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:15.756 07:19:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:15.756 ************************************ 00:09:15.756 START TEST unittest_nvme 00:09:15.756 ************************************ 00:09:15.756 07:19:49 unittest.unittest_nvme 
-- common/autotest_common.sh@1121 -- # unittest_nvme 00:09:15.756 07:19:49 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:09:15.756 00:09:15.756 00:09:15.756 CUnit - A unit testing framework for C - Version 2.1-3 00:09:15.756 http://cunit.sourceforge.net/ 00:09:15.756 00:09:15.756 00:09:15.756 Suite: nvme 00:09:15.756 Test: test_opc_data_transfer ...passed 00:09:15.756 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:09:15.756 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:09:15.756 Test: test_trid_parse_and_compare ...[2024-07-12 07:19:49.493131] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1176:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:09:15.756 [2024-07-12 07:19:49.493684] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:09:15.756 [2024-07-12 07:19:49.493934] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1188:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:09:15.756 [2024-07-12 07:19:49.494078] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:09:15.756 [2024-07-12 07:19:49.494172] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without value 00:09:15.756 [2024-07-12 07:19:49.494325] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:09:15.756 passed 00:09:15.756 Test: test_trid_trtype_str ...passed 00:09:15.756 Test: test_trid_adrfam_str ...passed 00:09:15.756 Test: test_nvme_ctrlr_probe ...[2024-07-12 07:19:49.495279] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:09:15.756 passed 00:09:15.756 Test: test_spdk_nvme_probe ...[2024-07-12 07:19:49.495708] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:09:15.756 [2024-07-12 07:19:49.495867] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:09:15.756 [2024-07-12 07:19:49.496120] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:09:15.756 [2024-07-12 07:19:49.496290] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:09:15.756 passed 00:09:15.756 Test: test_spdk_nvme_connect ...[2024-07-12 07:19:49.496684] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 994:spdk_nvme_connect: *ERROR*: No transport ID specified 00:09:15.756 [2024-07-12 07:19:49.497238] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:09:15.756 [2024-07-12 07:19:49.497575] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1005:spdk_nvme_connect: *ERROR*: Create probe context failed 00:09:15.756 passed 00:09:15.756 Test: test_nvme_ctrlr_probe_internal ...[2024-07-12 07:19:49.498036] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:09:15.756 [2024-07-12 07:19:49.498257] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:09:15.756 passed 00:09:15.756 Test: test_nvme_init_controllers ...[2024-07-12 07:19:49.498692] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: 
*ERROR*: Failed to initialize SSD: 00:09:15.756 passed 00:09:15.756 Test: test_nvme_driver_init ...[2024-07-12 07:19:49.499090] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:09:15.756 [2024-07-12 07:19:49.499242] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:09:15.756 [2024-07-12 07:19:49.608425] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:09:15.756 [2024-07-12 07:19:49.609000] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:09:15.756 passed 00:09:15.756 Test: test_spdk_nvme_detach ...passed 00:09:15.756 Test: test_nvme_completion_poll_cb ...passed 00:09:15.756 Test: test_nvme_user_copy_cmd_complete ...passed 00:09:15.756 Test: test_nvme_allocate_request_null ...passed 00:09:15.756 Test: test_nvme_allocate_request ...passed 00:09:15.756 Test: test_nvme_free_request ...passed 00:09:15.756 Test: test_nvme_allocate_request_user_copy ...passed 00:09:15.756 Test: test_nvme_robust_mutex_init_shared ...passed 00:09:15.757 Test: test_nvme_request_check_timeout ...passed 00:09:15.757 Test: test_nvme_wait_for_completion ...passed 00:09:15.757 Test: test_spdk_nvme_parse_func ...passed 00:09:15.757 Test: test_spdk_nvme_detach_async ...passed 00:09:15.757 Test: test_nvme_parse_addr ...[2024-07-12 07:19:49.613562] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1586:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:09:15.757 passed 00:09:15.757 00:09:15.757 Run Summary: Type Total Ran Passed Failed Inactive 00:09:15.757 suites 1 1 n/a 0 0 00:09:15.757 tests 25 25 25 0 0 00:09:15.757 asserts 326 326 326 0 n/a 00:09:15.757 00:09:15.757 Elapsed time = 0.008 seconds 00:09:16.016 07:19:49 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:09:16.016 00:09:16.016 00:09:16.016 CUnit - A unit testing framework for C - Version 2.1-3 00:09:16.016 http://cunit.sourceforge.net/ 00:09:16.016 00:09:16.016 00:09:16.016 Suite: nvme_ctrlr 00:09:16.016 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-12 07:19:49.665416] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.016 passed 00:09:16.016 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-12 07:19:49.667725] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.016 passed 00:09:16.016 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-12 07:19:49.669213] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.016 passed 00:09:16.016 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-12 07:19:49.670758] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.016 passed 00:09:16.016 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-12 07:19:49.672302] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 
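Note: the "admin_queue_size 0 is less than minimum defined by NVMe spec, use min value" messages repeated throughout the nvme_ctrlr suite are warnings from a clamp, not failures: a requested size below the spec minimum is raised to that minimum and construction proceeds, which is why these tests still report passed. A minimal C sketch of that behavior, with a hypothetical helper and an assumed minimum of 2 entries (the constant and names are illustrative, not SPDK's actual code):

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative spec minimum: an NVMe queue needs at least 2 entries. */
    #define SPEC_MIN_ADMIN_QUEUE_SIZE 2u

    /* Hypothetical clamp: warn and substitute the minimum, as the log shows. */
    static uint32_t clamp_admin_queue_size(uint32_t requested)
    {
        if (requested < SPEC_MIN_ADMIN_QUEUE_SIZE) {
            fprintf(stderr, "admin_queue_size %u is less than minimum "
                    "defined by NVMe spec, use min value\n", (unsigned)requested);
            return SPEC_MIN_ADMIN_QUEUE_SIZE;
        }
        return requested;
    }

    int main(void)
    {
        printf("%u\n", (unsigned)clamp_admin_queue_size(0));   /* warns, yields 2 */
        printf("%u\n", (unsigned)clamp_admin_queue_size(128)); /* 128, no warning */
        return 0;
    }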
00:09:16.016 [2024-07-12 07:19:49.673587] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3948:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-12 07:19:49.674914] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3948:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-12 07:19:49.676180] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3948:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:09:16.016 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-12 07:19:49.678865] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.016 [2024-07-12 07:19:49.681214] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3948:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-12 07:19:49.682566] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3948:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:09:16.016 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-12 07:19:49.685354] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.016 [2024-07-12 07:19:49.686603] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3948:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-12 07:19:49.689071] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3948:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:09:16.016 Test: test_nvme_ctrlr_init_delay ...[2024-07-12 07:19:49.691988] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.016 passed 00:09:16.016 Test: test_alloc_io_qpair_rr_1 ...[2024-07-12 07:19:49.693739] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.016 [2024-07-12 07:19:49.694091] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:09:16.016 [2024-07-12 07:19:49.694580] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:09:16.016 [2024-07-12 07:19:49.694847] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:09:16.016 [2024-07-12 07:19:49.695090] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:09:16.016 passed 00:09:16.016 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:09:16.016 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:09:16.016 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-12 07:19:49.696179] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.016 passed 00:09:16.016 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-12 07:19:49.696866] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less 
than minimum defined by NVMe spec, use min value 00:09:16.016 [2024-07-12 07:19:49.697179] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:09:16.016 passed 00:09:16.016 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-12 07:19:49.698009] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4870:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:09:16.016 [2024-07-12 07:19:49.698389] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4907:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:09:16.016 [2024-07-12 07:19:49.698711] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4947:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:09:16.016 [2024-07-12 07:19:49.699001] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4907:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:09:16.016 passed 00:09:16.016 Test: test_nvme_ctrlr_fail ...[2024-07-12 07:19:49.699396] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:09:16.016 passed 00:09:16.016 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:09:16.016 Test: test_nvme_ctrlr_set_supported_features ...passed 00:09:16.016 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:09:16.016 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-12 07:19:49.700272] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.275 passed 00:09:16.275 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:09:16.275 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:09:16.275 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:09:16.275 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-12 07:19:50.018545] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.275 passed 00:09:16.275 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-12 07:19:50.025713] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.275 passed 00:09:16.275 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-12 07:19:50.027052] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.275 [2024-07-12 07:19:50.027164] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2884:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:09:16.275 passed 00:09:16.275 Test: test_alloc_io_qpair_fail ...[2024-07-12 07:19:50.028563] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.275 [2024-07-12 07:19:50.028727] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 511:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:09:16.275 passed 00:09:16.275 Test: test_nvme_ctrlr_add_remove_process ...passed 00:09:16.275 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:09:16.275 Test: 
test_nvme_ctrlr_set_state ...passed[2024-07-12 07:19:50.029171] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1479:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 00:09:16.275 00:09:16.275 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-12 07:19:50.029350] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.275 passed 00:09:16.275 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-12 07:19:50.049299] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.275 passed 00:09:16.275 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-12 07:19:50.085498] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.275 passed 00:09:16.275 Test: test_nvme_ctrlr_reset ...[2024-07-12 07:19:50.087214] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.275 passed 00:09:16.275 Test: test_nvme_ctrlr_aer_callback ...[2024-07-12 07:19:50.087657] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.275 passed 00:09:16.275 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-12 07:19:50.089217] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.275 passed 00:09:16.275 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:09:16.275 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:09:16.275 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-12 07:19:50.091385] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.275 passed 00:09:16.275 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:09:16.276 Test: test_nvme_ctrlr_ana_resize ...[2024-07-12 07:19:50.093003] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.276 passed 00:09:16.276 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:09:16.276 Test: test_nvme_transport_ctrlr_ready ...[2024-07-12 07:19:50.094937] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4030:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:09:16.276 [2024-07-12 07:19:50.095025] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4081:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:09:16.276 passed 00:09:16.276 Test: test_nvme_ctrlr_disable ...[2024-07-12 07:19:50.095249] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:09:16.276 passed 00:09:16.276 00:09:16.276 Run Summary: Type Total Ran Passed Failed Inactive 00:09:16.276 suites 1 1 n/a 0 0 00:09:16.276 tests 43 43 43 0 0 00:09:16.276 asserts 10418 10418 10418 0 n/a 
00:09:16.276 00:09:16.276 Elapsed time = 0.381 seconds 00:09:16.276 07:19:50 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:09:16.276 00:09:16.276 00:09:16.276 CUnit - A unit testing framework for C - Version 2.1-3 00:09:16.276 http://cunit.sourceforge.net/ 00:09:16.276 00:09:16.276 00:09:16.276 Suite: nvme_ctrlr_cmd 00:09:16.276 Test: test_get_log_pages ...passed 00:09:16.276 Test: test_set_feature_cmd ...passed 00:09:16.276 Test: test_set_feature_ns_cmd ...passed 00:09:16.276 Test: test_get_feature_cmd ...passed 00:09:16.276 Test: test_get_feature_ns_cmd ...passed 00:09:16.276 Test: test_abort_cmd ...passed 00:09:16.276 Test: test_set_host_id_cmds ...[2024-07-12 07:19:50.151063] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:09:16.276 passed 00:09:16.276 Test: test_io_cmd_raw_no_payload_build ...passed 00:09:16.276 Test: test_io_raw_cmd ...passed 00:09:16.276 Test: test_io_raw_cmd_with_md ...passed 00:09:16.276 Test: test_namespace_attach ...passed 00:09:16.276 Test: test_namespace_detach ...passed 00:09:16.276 Test: test_namespace_create ...passed 00:09:16.276 Test: test_namespace_delete ...passed 00:09:16.276 Test: test_doorbell_buffer_config ...passed 00:09:16.276 Test: test_format_nvme ...passed 00:09:16.276 Test: test_fw_commit ...passed 00:09:16.276 Test: test_fw_image_download ...passed 00:09:16.276 Test: test_sanitize ...passed 00:09:16.276 Test: test_directive ...passed 00:09:16.276 Test: test_nvme_request_add_abort ...passed 00:09:16.276 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:09:16.276 Test: test_nvme_ctrlr_cmd_identify ...passed 00:09:16.276 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:09:16.276 00:09:16.276 Run Summary: Type Total Ran Passed Failed Inactive 00:09:16.276 suites 1 1 n/a 0 0 00:09:16.276 tests 24 24 24 0 0 00:09:16.276 asserts 198 198 198 0 n/a 00:09:16.276 00:09:16.276 Elapsed time = 0.001 seconds 00:09:16.535 07:19:50 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:09:16.535 00:09:16.535 00:09:16.535 CUnit - A unit testing framework for C - Version 2.1-3 00:09:16.535 http://cunit.sourceforge.net/ 00:09:16.535 00:09:16.535 00:09:16.535 Suite: nvme_ctrlr_cmd 00:09:16.535 Test: test_geometry_cmd ...passed 00:09:16.535 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:09:16.535 00:09:16.535 Run Summary: Type Total Ran Passed Failed Inactive 00:09:16.535 suites 1 1 n/a 0 0 00:09:16.535 tests 2 2 2 0 0 00:09:16.535 asserts 7 7 7 0 n/a 00:09:16.535 00:09:16.535 Elapsed time = 0.000 seconds 00:09:16.535 07:19:50 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:09:16.535 00:09:16.535 00:09:16.535 CUnit - A unit testing framework for C - Version 2.1-3 00:09:16.535 http://cunit.sourceforge.net/ 00:09:16.535 00:09:16.535 00:09:16.535 Suite: nvme 00:09:16.535 Test: test_nvme_ns_construct ...passed 00:09:16.535 Test: test_nvme_ns_uuid ...passed 00:09:16.535 Test: test_nvme_ns_csi ...passed 00:09:16.535 Test: test_nvme_ns_data ...passed 00:09:16.535 Test: test_nvme_ns_set_identify_data ...passed 00:09:16.535 Test: test_spdk_nvme_ns_get_values ...passed 00:09:16.535 Test: test_spdk_nvme_ns_is_active ...passed 00:09:16.535 Test: spdk_nvme_ns_supports ...passed 00:09:16.535 Test: 
test_nvme_ns_has_supported_iocs_specific_data ...passed 00:09:16.535 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:09:16.535 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:09:16.535 Test: test_nvme_ns_find_id_desc ...passed 00:09:16.535 00:09:16.535 Run Summary: Type Total Ran Passed Failed Inactive 00:09:16.535 suites 1 1 n/a 0 0 00:09:16.535 tests 12 12 12 0 0 00:09:16.535 asserts 83 83 83 0 n/a 00:09:16.535 00:09:16.535 Elapsed time = 0.001 seconds 00:09:16.535 07:19:50 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:09:16.535 00:09:16.535 00:09:16.535 CUnit - A unit testing framework for C - Version 2.1-3 00:09:16.535 http://cunit.sourceforge.net/ 00:09:16.535 00:09:16.535 00:09:16.535 Suite: nvme_ns_cmd 00:09:16.535 Test: split_test ...passed 00:09:16.535 Test: split_test2 ...passed 00:09:16.535 Test: split_test3 ...passed 00:09:16.535 Test: split_test4 ...passed 00:09:16.535 Test: test_nvme_ns_cmd_flush ...passed 00:09:16.535 Test: test_nvme_ns_cmd_dataset_management ...passed 00:09:16.535 Test: test_nvme_ns_cmd_copy ...passed 00:09:16.535 Test: test_io_flags ...[2024-07-12 07:19:50.276457] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:09:16.535 passed 00:09:16.535 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:09:16.535 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:09:16.535 Test: test_nvme_ns_cmd_reservation_register ...passed 00:09:16.535 Test: test_nvme_ns_cmd_reservation_release ...passed 00:09:16.535 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:09:16.535 Test: test_nvme_ns_cmd_reservation_report ...passed 00:09:16.535 Test: test_cmd_child_request ...passed 00:09:16.535 Test: test_nvme_ns_cmd_readv ...passed 00:09:16.535 Test: test_nvme_ns_cmd_read_with_md ...passed 00:09:16.535 Test: test_nvme_ns_cmd_writev ...[2024-07-12 07:19:50.278415] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:09:16.535 passed 00:09:16.535 Test: test_nvme_ns_cmd_write_with_md ...passed 00:09:16.535 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:09:16.535 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:09:16.535 Test: test_nvme_ns_cmd_comparev ...passed 00:09:16.535 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:09:16.535 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:09:16.536 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:09:16.536 Test: test_nvme_ns_cmd_setup_request ...passed 00:09:16.536 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:09:16.536 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed[2024-07-12 07:19:50.280747] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:09:16.536 00:09:16.536 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-07-12 07:19:50.280944] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:09:16.536 passed 00:09:16.536 Test: test_nvme_ns_cmd_verify ...passed 00:09:16.536 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:09:16.536 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:09:16.536 00:09:16.536 Run Summary: Type Total Ran Passed Failed Inactive 00:09:16.536 suites 1 1 n/a 0 0 00:09:16.536 tests 32 32 32 0 0 00:09:16.536 asserts 550 550 550 0 n/a 00:09:16.536 00:09:16.536 Elapsed time = 0.004 seconds 
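Note: the nvme_ns_cmd suite that just closed rejected io_flags values 0xfffc and 0xffff000f, and a split child request of 200 bytes against a 512-byte LBA. Both checks reduce to simple arithmetic; a hedged sketch follows, where VALID_IO_FLAGS_MASK is an illustrative value rather than the mask the library actually uses:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative mask of permitted flag bits; any other set bit is invalid. */
    #define VALID_IO_FLAGS_MASK 0xffff0003u

    static bool io_flags_valid(uint32_t io_flags)
    {
        return (io_flags & ~VALID_IO_FLAGS_MASK) == 0;
    }

    /* A split child request must cover a whole number of LBAs. */
    static bool child_length_valid(uint32_t child_length, uint32_t lba_size)
    {
        return child_length % lba_size == 0;
    }

    int main(void)
    {
        printf("0xfffc valid?     %d\n", io_flags_valid(0xfffcu));      /* 0 */
        printf("0xffff000f valid? %d\n", io_flags_valid(0xffff000fu));  /* 0 */
        printf("200 vs 512 ok?    %d\n", child_length_valid(200, 512)); /* 0 */
        return 0;
    }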
00:09:16.536 07:19:50 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:09:16.536 00:09:16.536 00:09:16.536 CUnit - A unit testing framework for C - Version 2.1-3 00:09:16.536 http://cunit.sourceforge.net/ 00:09:16.536 00:09:16.536 00:09:16.536 Suite: nvme_ns_cmd 00:09:16.536 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:09:16.536 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:09:16.536 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:09:16.536 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:09:16.536 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:09:16.536 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:09:16.536 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:09:16.536 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:09:16.536 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:09:16.536 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:09:16.536 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:09:16.536 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:09:16.536 00:09:16.536 Run Summary: Type Total Ran Passed Failed Inactive 00:09:16.536 suites 1 1 n/a 0 0 00:09:16.536 tests 12 12 12 0 0 00:09:16.536 asserts 123 123 123 0 n/a 00:09:16.536 00:09:16.536 Elapsed time = 0.001 seconds 00:09:16.536 07:19:50 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:09:16.536 00:09:16.536 00:09:16.536 CUnit - A unit testing framework for C - Version 2.1-3 00:09:16.536 http://cunit.sourceforge.net/ 00:09:16.536 00:09:16.536 00:09:16.536 Suite: nvme_qpair 00:09:16.536 Test: test3 ...passed 00:09:16.536 Test: test_ctrlr_failed ...passed 00:09:16.536 Test: struct_packing ...passed 00:09:16.536 Test: test_nvme_qpair_process_completions ...[2024-07-12 07:19:50.374390] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:09:16.536 [2024-07-12 07:19:50.374942] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:09:16.536 [2024-07-12 07:19:50.375132] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:09:16.536 [2024-07-12 07:19:50.375391] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:09:16.536 passed 00:09:16.536 Test: test_nvme_completion_is_retry ...passed 00:09:16.536 Test: test_get_status_string ...passed 00:09:16.536 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:09:16.536 Test: test_nvme_qpair_submit_request ...passed 00:09:16.536 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:09:16.536 Test: test_nvme_qpair_manual_complete_request ...passed 00:09:16.536 Test: test_nvme_qpair_init_deinit ...[2024-07-12 07:19:50.377029] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:09:16.536 passed 00:09:16.536 Test: test_nvme_get_sgl_print_info ...passed 00:09:16.536 00:09:16.536 Run Summary: Type Total Ran Passed Failed Inactive 00:09:16.536 suites 1 1 n/a 0 0 00:09:16.536 tests 12 12 12 0 0 
00:09:16.536 asserts 154 154 154 0 n/a 00:09:16.536 00:09:16.536 Elapsed time = 0.002 seconds 00:09:16.536 07:19:50 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:09:16.796 00:09:16.796 00:09:16.796 CUnit - A unit testing framework for C - Version 2.1-3 00:09:16.796 http://cunit.sourceforge.net/ 00:09:16.796 00:09:16.796 00:09:16.796 Suite: nvme_pcie 00:09:16.796 Test: test_prp_list_append ...[2024-07-12 07:19:50.420962] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:09:16.796 [2024-07-12 07:19:50.421537] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1234:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:09:16.796 [2024-07-12 07:19:50.421711] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1224:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:09:16.796 [2024-07-12 07:19:50.422052] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:09:16.796 [2024-07-12 07:19:50.422267] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:09:16.796 passed 00:09:16.796 Test: test_nvme_pcie_hotplug_monitor ...passed 00:09:16.796 Test: test_shadow_doorbell_update ...passed 00:09:16.796 Test: test_build_contig_hw_sgl_request ...passed 00:09:16.796 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:09:16.796 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:09:16.796 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:09:16.796 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-07-12 07:19:50.423374] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:09:16.796 passed 00:09:16.796 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:09:16.796 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:09:16.796 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-07-12 07:19:50.424051] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:09:16.796 passed 00:09:16.796 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-07-12 07:19:50.424365] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:09:16.796 passed 00:09:16.796 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-07-12 07:19:50.424580] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:09:16.796 passed 00:09:16.796 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-07-12 07:19:50.424789] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:09:16.796 passed 00:09:16.796 00:09:16.796 Run Summary: Type Total Ran Passed Failed Inactive 00:09:16.796 suites 1 1 n/a 0 0 00:09:16.796 tests 14 14 14 0 0 00:09:16.796 asserts 235 235 235 0 n/a 00:09:16.796 00:09:16.796 Elapsed time = 0.002 seconds 00:09:16.796 07:19:50 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:09:16.796 00:09:16.796 00:09:16.796 CUnit - A unit testing framework for C - Version 2.1-3 00:09:16.796 http://cunit.sourceforge.net/ 00:09:16.796 00:09:16.796 00:09:16.796 Suite: nvme_ns_cmd 00:09:16.796 Test: nvme_poll_group_create_test ...passed 00:09:16.796 Test: nvme_poll_group_add_remove_test ...passed 00:09:16.796 Test: nvme_poll_group_process_completions ...passed 00:09:16.796 Test: nvme_poll_group_destroy_test ...passed 00:09:16.796 Test: nvme_poll_group_get_free_stats ...passed 00:09:16.796 00:09:16.796 Run Summary: Type Total Ran Passed Failed Inactive 00:09:16.796 suites 1 1 n/a 0 0 00:09:16.796 tests 5 5 5 0 0 00:09:16.796 asserts 75 75 75 0 n/a 00:09:16.796 00:09:16.796 Elapsed time = 0.001 seconds 00:09:16.796 07:19:50 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:09:16.796 00:09:16.796 00:09:16.796 CUnit - A unit testing framework for C - Version 2.1-3 00:09:16.796 http://cunit.sourceforge.net/ 00:09:16.797 00:09:16.797 00:09:16.797 Suite: nvme_quirks 00:09:16.797 Test: test_nvme_quirks_striping ...passed 00:09:16.797 00:09:16.797 Run Summary: Type Total Ran Passed Failed Inactive 00:09:16.797 suites 1 1 n/a 0 0 00:09:16.797 tests 1 1 1 0 0 00:09:16.797 asserts 5 5 5 0 n/a 00:09:16.797 00:09:16.797 Elapsed time = 0.000 seconds 00:09:16.797 07:19:50 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:09:16.797 00:09:16.797 00:09:16.797 CUnit - A unit testing framework for C - Version 2.1-3 00:09:16.797 http://cunit.sourceforge.net/ 00:09:16.797 00:09:16.797 00:09:16.797 Suite: nvme_tcp 00:09:16.797 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:09:16.797 Test: test_nvme_tcp_build_iovs ...passed 00:09:16.797 Test: test_nvme_tcp_build_sgl_request ...[2024-07-12 07:19:50.552164] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 825:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffd4675f850, and the iovcnt=16, remaining_size=28672 00:09:16.797 passed 00:09:16.797 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:09:16.797 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:09:16.797 Test: test_nvme_tcp_req_complete_safe ...passed 00:09:16.797 Test: test_nvme_tcp_req_get ...passed 00:09:16.797 Test: test_nvme_tcp_req_init ...passed 00:09:16.797 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:09:16.797 
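Note: at the top of the nvme_tcp suite, nvme_tcp_build_sgl_request fails with iovcnt=16 and remaining_size=28672: the iovecs handed in describe less buffer than the payload requires, and the leftover byte count is what gets reported. A self-contained sketch of that accounting with a hypothetical build_sgl (the function and the numbers below only reproduce the logged arithmetic):

    #include <stdio.h>
    #include <sys/uio.h>

    /* Hypothetical: walk the iovecs, consuming payload; anything left over
     * means the SGL cannot be built for this request. */
    static int build_sgl(const struct iovec *iov, int iovcnt, size_t payload_len)
    {
        size_t remaining = payload_len;
        for (int i = 0; i < iovcnt && remaining > 0; i++) {
            size_t take = iov[i].iov_len < remaining ? iov[i].iov_len : remaining;
            remaining -= take;
        }
        if (remaining > 0) {
            fprintf(stderr, "iovcnt=%d, remaining_size=%zu\n", iovcnt, remaining);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        static char buf[16][1024];
        struct iovec iov[16];
        for (int i = 0; i < 16; i++) {
            iov[i].iov_base = buf[i];
            iov[i].iov_len = sizeof buf[i];
        }
        /* 16 KiB of iovecs against a 45056-byte payload leaves 28672 bytes. */
        return build_sgl(iov, 16, 45056) == 0 ? 0 : 1;
    }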
Test: test_nvme_tcp_qpair_write_pdu ...passed 00:09:16.797 Test: test_nvme_tcp_qpair_set_recv_state ...[2024-07-12 07:19:50.554072] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd46761570 is same with the state(6) to be set 00:09:16.797 passed 00:09:16.797 Test: test_nvme_tcp_alloc_reqs ...passed 00:09:16.797 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-07-12 07:19:50.554808] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd46760720 is same with the state(5) to be set 00:09:16.797 passed 00:09:16.797 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-12 07:19:50.555106] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffd467612b0 00:09:16.797 [2024-07-12 07:19:50.555235] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1226:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:09:16.797 [2024-07-12 07:19:50.555423] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd46760be0 is same with the state(5) to be set 00:09:16.797 [2024-07-12 07:19:50.555622] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1177:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:09:16.797 [2024-07-12 07:19:50.555863] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd46760be0 is same with the state(5) to be set 00:09:16.797 [2024-07-12 07:19:50.556034] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:09:16.797 [2024-07-12 07:19:50.556179] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd46760be0 is same with the state(5) to be set 00:09:16.797 [2024-07-12 07:19:50.556340] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd46760be0 is same with the state(5) to be set 00:09:16.797 [2024-07-12 07:19:50.556484] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd46760be0 is same with the state(5) to be set 00:09:16.797 [2024-07-12 07:19:50.556669] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd46760be0 is same with the state(5) to be set 00:09:16.797 [2024-07-12 07:19:50.556814] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd46760be0 is same with the state(5) to be set 00:09:16.797 [2024-07-12 07:19:50.556975] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd46760be0 is same with the state(5) to be set 00:09:16.797 passed 00:09:16.797 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-12 07:19:50.557303] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2324:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:09:16.797 [2024-07-12 07:19:50.557616] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2336:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:09:16.797 [2024-07-12 07:19:50.558035] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2336:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:09:16.797 passed 00:09:16.797 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:09:16.797 Test: test_nvme_tcp_c2h_payload_handle ...[2024-07-12 07:19:50.558613] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1341:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffd46760df0): PDU Sequence Error 00:09:16.797 passed 00:09:16.797 Test: test_nvme_tcp_icresp_handle ...[2024-07-12 07:19:50.558961] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1567:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:09:16.797 [2024-07-12 07:19:50.559061] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1574:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:09:16.797 [2024-07-12 07:19:50.559287] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd46760730 is same with the state(5) to be set 00:09:16.797 [2024-07-12 07:19:50.559470] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1583:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:09:16.797 [2024-07-12 07:19:50.559642] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd46760730 is same with the state(5) to be set 00:09:16.797 [2024-07-12 07:19:50.559819] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd46760730 is same with the state(0) to be set 00:09:16.797 passed 00:09:16.797 Test: test_nvme_tcp_pdu_payload_handle ...[2024-07-12 07:19:50.560053] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1341:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffd467612b0): PDU Sequence Error 00:09:16.797 passed 00:09:16.797 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-07-12 07:19:50.560430] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1644:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffd4675f9f0 00:09:16.797 passed 00:09:16.797 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:09:16.797 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-12 07:19:50.561018] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 354:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffd4675f070, errno=0, rc=0 00:09:16.797 [2024-07-12 07:19:50.561185] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4675f070 is same with the state(5) to be set 00:09:16.797 [2024-07-12 07:19:50.561372] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd4675f070 is same with the state(5) to be set 00:09:16.797 [2024-07-12 07:19:50.561527] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffd4675f070 (0): Success 00:09:16.797 [2024-07-12 07:19:50.561673] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffd4675f070 (0): Success 00:09:16.797 passed 00:09:17.057 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-12 07:19:50.704810] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2507:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
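Note: the qpair rejection just above (size 0) and its sibling immediately below (size 1) come from a plain lower-bound guard: with fewer than two entries a ring's head and tail could never differ. A minimal sketch of that check, with create_qpair and NVME_MIN_QUEUE_SIZE as hypothetical names:

    #include <stdint.h>
    #include <stdio.h>

    #define NVME_MIN_QUEUE_SIZE 2u

    static int create_qpair(uint32_t size)
    {
        if (size < NVME_MIN_QUEUE_SIZE) {
            fprintf(stderr, "Failed to create qpair with size %u. "
                    "Minimum queue size is %u.\n",
                    (unsigned)size, (unsigned)NVME_MIN_QUEUE_SIZE);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        create_qpair(0);                       /* rejected, as in the log */
        create_qpair(1);                       /* rejected, as in the log */
        return create_qpair(128) == 0 ? 0 : 1; /* accepted */
    }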
00:09:17.057 [2024-07-12 07:19:50.705169] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2507:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:09:17.057 passed 00:09:17.057 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:09:17.057 Test: test_nvme_tcp_poll_group_get_stats ...[2024-07-12 07:19:50.705782] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2955:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:17.057 [2024-07-12 07:19:50.706010] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2955:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:17.057 passed 00:09:17.057 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-12 07:19:50.706386] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2507:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:09:17.057 [2024-07-12 07:19:50.706590] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:09:17.057 [2024-07-12 07:19:50.706923] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2324:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:09:17.057 [2024-07-12 07:19:50.707107] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:09:17.057 [2024-07-12 07:19:50.707374] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000007d80 with addr=192.168.1.78, port=23 00:09:17.057 [2024-07-12 07:19:50.707616] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:09:17.057 passed 00:09:17.057 Test: test_nvme_tcp_qpair_submit_request ...[2024-07-12 07:19:50.708048] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 825:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000000c80, and the iovcnt=1, remaining_size=1024 00:09:17.057 [2024-07-12 07:19:50.708197] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1018:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:09:17.057 passed 00:09:17.057 00:09:17.057 Run Summary: Type Total Ran Passed Failed Inactive 00:09:17.057 suites 1 1 n/a 0 0 00:09:17.057 tests 27 27 27 0 0 00:09:17.057 asserts 624 624 624 0 n/a 00:09:17.057 00:09:17.057 Elapsed time = 0.150 seconds 00:09:17.057 07:19:50 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:09:17.057 00:09:17.057 00:09:17.057 CUnit - A unit testing framework for C - Version 2.1-3 00:09:17.057 http://cunit.sourceforge.net/ 00:09:17.057 00:09:17.057 00:09:17.057 Suite: nvme_transport 00:09:17.057 Test: test_nvme_get_transport ...passed 00:09:17.057 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:09:17.057 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:09:17.057 Test: test_nvme_transport_poll_group_add_remove ...passed 00:09:17.057 Test: test_ctrlr_get_memory_domains ...passed 00:09:17.057 00:09:17.057 Run Summary: Type Total Ran Passed Failed Inactive 00:09:17.057 suites 1 1 n/a 0 0 00:09:17.057 tests 5 5 5 0 0 00:09:17.057 asserts 28 28 28 0 n/a 00:09:17.057 00:09:17.057 Elapsed time = 0.000 seconds 00:09:17.057 07:19:50 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:09:17.057 00:09:17.057 
00:09:17.057 CUnit - A unit testing framework for C - Version 2.1-3 00:09:17.057 http://cunit.sourceforge.net/ 00:09:17.057 00:09:17.057 00:09:17.057 Suite: nvme_io_msg 00:09:17.057 Test: test_nvme_io_msg_send ...passed 00:09:17.057 Test: test_nvme_io_msg_process ...passed 00:09:17.057 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:09:17.057 00:09:17.057 Run Summary: Type Total Ran Passed Failed Inactive 00:09:17.057 suites 1 1 n/a 0 0 00:09:17.057 tests 3 3 3 0 0 00:09:17.057 asserts 56 56 56 0 n/a 00:09:17.057 00:09:17.057 Elapsed time = 0.000 seconds 00:09:17.057 07:19:50 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:09:17.057 00:09:17.057 00:09:17.057 CUnit - A unit testing framework for C - Version 2.1-3 00:09:17.057 http://cunit.sourceforge.net/ 00:09:17.057 00:09:17.057 00:09:17.057 Suite: nvme_pcie_common 00:09:17.057 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-12 07:19:50.847829] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:09:17.057 passed 00:09:17.057 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:09:17.057 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:09:17.057 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-07-12 07:19:50.849390] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 504:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:09:17.057 [2024-07-12 07:19:50.849665] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 457:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:09:17.057 [2024-07-12 07:19:50.849811] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 551:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:09:17.057 passed 00:09:17.057 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:09:17.057 Test: test_nvme_pcie_poll_group_get_stats ...[2024-07-12 07:19:50.850754] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:17.057 [2024-07-12 07:19:50.850911] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:17.057 passed 00:09:17.057 00:09:17.057 Run Summary: Type Total Ran Passed Failed Inactive 00:09:17.057 suites 1 1 n/a 0 0 00:09:17.057 tests 6 6 6 0 0 00:09:17.057 asserts 148 148 148 0 n/a 00:09:17.057 00:09:17.057 Elapsed time = 0.002 seconds 00:09:17.057 07:19:50 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:09:17.057 00:09:17.057 00:09:17.057 CUnit - A unit testing framework for C - Version 2.1-3 00:09:17.057 http://cunit.sourceforge.net/ 00:09:17.057 00:09:17.057 00:09:17.057 Suite: nvme_fabric 00:09:17.057 Test: test_nvme_fabric_prop_set_cmd ...passed 00:09:17.057 Test: test_nvme_fabric_prop_get_cmd ...passed 00:09:17.057 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:09:17.057 Test: test_nvme_fabric_discover_probe ...passed 00:09:17.057 Test: test_nvme_fabric_qpair_connect ...[2024-07-12 07:19:50.893198] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:09:17.057 passed 
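Note: within the nvme_pcie_common suite above, test_nvme_pcie_ctrlr_alloc_cmb trips "Tried to allocate past valid CMB range!": the controller memory buffer is a fixed window, and an allocation must fit entirely inside it. A hedged sketch of such a bounds-checked bump allocator (struct and function names are illustrative; production code would also have to consider alignment, which this sketch omits):

    #include <stdint.h>
    #include <stdio.h>

    struct cmb {
        uint64_t offset; /* next free byte */
        uint64_t size;   /* total window size */
    };

    /* Return the start offset of the allocation, or -1 past the end. */
    static int64_t cmb_alloc(struct cmb *c, uint64_t len)
    {
        if (len > c->size - c->offset) {
            fprintf(stderr, "Tried to allocate past valid CMB range!\n");
            return -1;
        }
        int64_t off = (int64_t)c->offset;
        c->offset += len;
        return off;
    }

    int main(void)
    {
        struct cmb c = { .offset = 0, .size = 4096 };
        printf("%lld\n", (long long)cmb_alloc(&c, 4096)); /* 0: fits exactly */
        printf("%lld\n", (long long)cmb_alloc(&c, 1));    /* -1: past the end */
        return 0;
    }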
00:09:17.057 00:09:17.057 Run Summary: Type Total Ran Passed Failed Inactive 00:09:17.057 suites 1 1 n/a 0 0 00:09:17.057 tests 5 5 5 0 0 00:09:17.057 asserts 60 60 60 0 n/a 00:09:17.057 00:09:17.057 Elapsed time = 0.001 seconds 00:09:17.057 07:19:50 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:09:17.057 00:09:17.057 00:09:17.057 CUnit - A unit testing framework for C - Version 2.1-3 00:09:17.057 http://cunit.sourceforge.net/ 00:09:17.057 00:09:17.057 00:09:17.057 Suite: nvme_opal 00:09:17.057 Test: test_opal_nvme_security_recv_send_done ...passed 00:09:17.057 Test: test_opal_add_short_atom_header ...[2024-07-12 07:19:50.931832] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:09:17.057 passed 00:09:17.057 00:09:17.057 Run Summary: Type Total Ran Passed Failed Inactive 00:09:17.058 suites 1 1 n/a 0 0 00:09:17.058 tests 2 2 2 0 0 00:09:17.058 asserts 22 22 22 0 n/a 00:09:17.058 00:09:17.058 Elapsed time = 0.000 seconds 00:09:17.316 00:09:17.316 real 0m1.482s 00:09:17.316 user 0m0.753s 00:09:17.316 sys 0m0.538s 00:09:17.316 07:19:50 unittest.unittest_nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:17.316 07:19:50 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:17.316 ************************************ 00:09:17.316 END TEST unittest_nvme 00:09:17.316 ************************************ 00:09:17.316 07:19:51 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:09:17.316 07:19:51 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:17.316 07:19:51 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:17.316 07:19:51 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:17.316 ************************************ 00:09:17.316 START TEST unittest_log 00:09:17.316 ************************************ 00:09:17.316 07:19:51 unittest.unittest_log -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:09:17.316 00:09:17.316 00:09:17.316 CUnit - A unit testing framework for C - Version 2.1-3 00:09:17.316 http://cunit.sourceforge.net/ 00:09:17.316 00:09:17.316 00:09:17.316 Suite: log 00:09:17.316 Test: log_test ...[2024-07-12 07:19:51.048646] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:09:17.316 [2024-07-12 07:19:51.049136] log_ut.c: 57:log_test: *DEBUG*: log test 00:09:17.316 log dump test: 00:09:17.316 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:09:17.316 spdk dump test: 00:09:17.316 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:09:17.316 spdk dump test: 00:09:17.316 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:09:17.316 00000010 65 20 63 68 61 72 73 e chars 00:09:17.316 passed 00:09:18.253 Test: deprecation ...passed 00:09:18.253 00:09:18.253 Run Summary: Type Total Ran Passed Failed Inactive 00:09:18.253 suites 1 1 n/a 0 0 00:09:18.253 tests 2 2 2 0 0 00:09:18.253 asserts 73 73 73 0 n/a 00:09:18.253 00:09:18.253 Elapsed time = 0.001 seconds 00:09:18.253 00:09:18.253 real 0m1.049s 00:09:18.253 user 0m0.008s 00:09:18.253 sys 0m0.039s 00:09:18.253 07:19:52 unittest.unittest_log -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:18.253 07:19:52 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:09:18.253 ************************************ 00:09:18.253 END TEST 
unittest_log 00:09:18.253 ************************************ 00:09:18.253 07:19:52 unittest -- unit/unittest.sh@250 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:09:18.253 07:19:52 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:18.253 07:19:52 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:18.253 07:19:52 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:18.513 ************************************ 00:09:18.513 START TEST unittest_lvol 00:09:18.513 ************************************ 00:09:18.513 07:19:52 unittest.unittest_lvol -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:09:18.513 00:09:18.513 00:09:18.513 CUnit - A unit testing framework for C - Version 2.1-3 00:09:18.513 http://cunit.sourceforge.net/ 00:09:18.513 00:09:18.513 00:09:18.513 Suite: lvol 00:09:18.513 Test: lvs_init_unload_success ...[2024-07-12 07:19:52.172627] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:09:18.513 passed 00:09:18.513 Test: lvs_init_destroy_success ...[2024-07-12 07:19:52.174467] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:09:18.513 passed 00:09:18.513 Test: lvs_init_opts_success ...passed 00:09:18.513 Test: lvs_unload_lvs_is_null_fail ...[2024-07-12 07:19:52.175628] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:09:18.513 passed 00:09:18.513 Test: lvs_names ...[2024-07-12 07:19:52.176148] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:09:18.513 [2024-07-12 07:19:52.176397] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 
00:09:18.513 [2024-07-12 07:19:52.176896] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:09:18.513 passed 00:09:18.513 Test: lvol_create_destroy_success ...passed 00:09:18.513 Test: lvol_create_fail ...[2024-07-12 07:19:52.178786] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:09:18.513 [2024-07-12 07:19:52.179150] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:09:18.513 passed 00:09:18.513 Test: lvol_destroy_fail ...[2024-07-12 07:19:52.180158] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:09:18.513 passed 00:09:18.513 Test: lvol_close ...[2024-07-12 07:19:52.180678] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:09:18.513 [2024-07-12 07:19:52.180836] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:09:18.513 passed 00:09:18.513 Test: lvol_resize ...passed 00:09:18.513 Test: lvol_set_read_only ...passed 00:09:18.513 Test: test_lvs_load ...[2024-07-12 07:19:52.182432] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:09:18.513 [2024-07-12 07:19:52.182578] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:09:18.513 passed 00:09:18.513 Test: lvols_load ...[2024-07-12 07:19:52.183102] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:09:18.513 [2024-07-12 07:19:52.183356] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:09:18.513 passed 00:09:18.513 Test: lvol_open ...passed 00:09:18.513 Test: lvol_snapshot ...passed 00:09:18.513 Test: lvol_snapshot_fail ...[2024-07-12 07:19:52.184828] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:09:18.513 passed 00:09:18.513 Test: lvol_clone ...passed 00:09:18.513 Test: lvol_clone_fail ...[2024-07-12 07:19:52.185847] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:09:18.513 passed 00:09:18.513 Test: lvol_iter_clones ...passed 00:09:18.513 Test: lvol_refcnt ...[2024-07-12 07:19:52.186919] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol f020b20f-989e-4aeb-a761-adada4dc1c07 because it is still open 00:09:18.513 passed 00:09:18.513 Test: lvol_names ...[2024-07-12 07:19:52.187406] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:09:18.513 [2024-07-12 07:19:52.187647] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:09:18.513 [2024-07-12 07:19:52.188043] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:09:18.513 passed 00:09:18.513 Test: lvol_create_thin_provisioned ...passed 00:09:18.513 Test: lvol_rename ...[2024-07-12 07:19:52.189035] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:09:18.513 [2024-07-12 07:19:52.189282] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:09:18.513 passed 00:09:18.513 Test: lvs_rename ...[2024-07-12 07:19:52.189845] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:09:18.513 passed 00:09:18.513 Test: lvol_inflate ...[2024-07-12 07:19:52.190359] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:09:18.513 passed 00:09:18.513 Test: lvol_decouple_parent ...[2024-07-12 07:19:52.190943] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:09:18.513 passed 00:09:18.513 Test: lvol_get_xattr ...passed 00:09:18.513 Test: lvol_esnap_reload ...passed 00:09:18.514 Test: lvol_esnap_create_bad_args ...[2024-07-12 07:19:52.192084] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:09:18.514 [2024-07-12 07:19:52.192221] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:09:18.514 [2024-07-12 07:19:52.192382] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:09:18.514 [2024-07-12 07:19:52.192634] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:09:18.514 [2024-07-12 07:19:52.192925] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:09:18.514 passed 00:09:18.514 Test: lvol_esnap_create_delete ...passed 00:09:18.514 Test: lvol_esnap_load_esnaps ...[2024-07-12 07:19:52.193815] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:09:18.514 passed 00:09:18.514 Test: lvol_esnap_missing ...[2024-07-12 07:19:52.194141] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:09:18.514 [2024-07-12 07:19:52.194302] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:09:18.514 passed 00:09:18.514 Test: lvol_esnap_hotplug ... 
00:09:18.514 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:09:18.514 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:09:18.514 [2024-07-12 07:19:52.195662] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 8b41ec6f-6db2-4363-9ffb-7c2fd79a5d16: failed to create esnap bs_dev: error -12 00:09:18.514 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:09:18.514 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:09:18.514 [2024-07-12 07:19:52.196147] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 0d3e910c-1097-442f-ac16-0f9a44cc4085: failed to create esnap bs_dev: error -12 00:09:18.514 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:09:18.514 [2024-07-12 07:19:52.196441] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol dc88f35c-dfe4-4ed1-9d85-51fea5a1e35f: failed to create esnap bs_dev: error -12 00:09:18.514 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:09:18.514 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:09:18.514 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:09:18.514 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:09:18.514 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:09:18.514 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:09:18.514 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:09:18.514 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:09:18.514 passed 00:09:18.514 Test: lvol_get_by ...passed 00:09:18.514 Test: lvol_shallow_copy ...[2024-07-12 07:19:52.198562] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:09:18.514 [2024-07-12 07:19:52.198739] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol 39ec950b-e26c-4814-a4fe-7ce1cd99d3f4 shallow copy, ext_dev must not be NULL 00:09:18.514 passed 00:09:18.514 Test: lvol_set_parent ...[2024-07-12 07:19:52.199256] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:09:18.514 [2024-07-12 07:19:52.199402] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:09:18.514 passed 00:09:18.514 Test: lvol_set_external_parent ...[2024-07-12 07:19:52.199881] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:09:18.514 [2024-07-12 07:19:52.200016] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:09:18.514 [2024-07-12 07:19:52.200168] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 00:09:18.514 passed 00:09:18.514 00:09:18.514 Run Summary: Type Total Ran Passed Failed Inactive 00:09:18.514 suites 1 1 n/a 0 0 00:09:18.514 tests 37 37 37 0 0 00:09:18.514 asserts 1505 1505 1505 0 n/a 00:09:18.514 00:09:18.514 Elapsed time = 0.018 seconds 00:09:18.514 00:09:18.514 real 0m0.078s 00:09:18.514 user 0m0.031s 00:09:18.514 sys 0m0.036s 
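The lvol failures logged above are all provoked on purpose: the suite feeds duplicate names, names with no null terminator, and missing lvol stores into the API and asserts that each case is rejected. As a minimal standalone C sketch of the two name rules this run exercises ("Name has no null terminator." and "lvol with name ... already exists"), the following might serve; check_lvol_name, the name_exists_fn callback, and the 64-byte buffer size are hypothetical stand-ins for illustration, not the SPDK routines the test actually calls.

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical lookup: does an lvol with this name already exist? */
    typedef bool (*name_exists_fn)(const char *name);

    /* Mirrors the two name checks provoked above: the fixed-size name
     * buffer (64 bytes here, for illustration) must contain a null
     * terminator, and the name must be unique in the lvol store. */
    static int check_lvol_name(const char name[64], name_exists_fn exists)
    {
        if (memchr(name, '\0', 64) == NULL)
            return -EINVAL;  /* "Name has no null terminator." */
        if (exists(name))
            return -EEXIST;  /* "lvol with name ... already exists" */
        return 0;
    }

    static bool no_names_yet(const char *name) { (void)name; return false; }

    int main(void)
    {
        char bad[64], good[64] = "lvol1";
        memset(bad, 'x', sizeof(bad));  /* deliberately unterminated */
        printf("%d\n", check_lvol_name(bad, no_names_yet));   /* -22 */
        printf("%d\n", check_lvol_name(good, no_names_yet));  /*  0 */
        return 0;
    }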
00:09:18.514 07:19:52 unittest.unittest_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:18.514 07:19:52 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:18.514 ************************************ 00:09:18.514 END TEST unittest_lvol 00:09:18.514 ************************************ 00:09:18.514 07:19:52 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:18.514 07:19:52 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:09:18.514 07:19:52 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:18.514 07:19:52 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:18.514 07:19:52 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:18.514 ************************************ 00:09:18.514 START TEST unittest_nvme_rdma 00:09:18.514 ************************************ 00:09:18.514 07:19:52 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:09:18.514 00:09:18.514 00:09:18.514 CUnit - A unit testing framework for C - Version 2.1-3 00:09:18.514 http://cunit.sourceforge.net/ 00:09:18.514 00:09:18.514 00:09:18.514 Suite: nvme_rdma 00:09:18.514 Test: test_nvme_rdma_build_sgl_request ...[2024-07-12 07:19:52.324664] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:09:18.514 [2024-07-12 07:19:52.325709] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1632:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:09:18.514 [2024-07-12 07:19:52.326161] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1688:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:09:18.514 passed 00:09:18.514 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:09:18.514 Test: test_nvme_rdma_build_contig_request ...[2024-07-12 07:19:52.326888] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1569:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:09:18.514 passed 00:09:18.514 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:09:18.514 Test: test_nvme_rdma_create_reqs ...[2024-07-12 07:19:52.327696] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1011:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:09:18.514 passed 00:09:18.514 Test: test_nvme_rdma_create_rsps ...[2024-07-12 07:19:52.328676] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 929:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:09:18.514 passed 00:09:18.514 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-12 07:19:52.329489] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1826:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:09:18.514 [2024-07-12 07:19:52.329833] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1826:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:09:18.514 passed 00:09:18.514 Test: test_nvme_rdma_poller_create ...passed 00:09:18.514 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-07-12 07:19:52.330715] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 530:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:09:18.514 passed 00:09:18.514 Test: test_nvme_rdma_ctrlr_construct ...passed 00:09:18.514 Test: test_nvme_rdma_req_put_and_get ...passed 00:09:18.514 Test: test_nvme_rdma_req_init ...passed 00:09:18.514 Test: test_nvme_rdma_validate_cm_event ...[2024-07-12 07:19:52.331971] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:09:18.514 [2024-07-12 07:19:52.332310] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:09:18.514 passed 00:09:18.514 Test: test_nvme_rdma_qpair_init ...passed 00:09:18.514 Test: test_nvme_rdma_qpair_submit_request ...passed 00:09:18.514 Test: test_nvme_rdma_memory_domain ...[2024-07-12 07:19:52.333326] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 353:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:09:18.514 passed 00:09:18.514 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:09:18.514 Test: test_rdma_get_memory_translation ...[2024-07-12 07:19:52.334039] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1448:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:09:18.514 [2024-07-12 07:19:52.334335] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:09:18.514 passed 00:09:18.514 Test: test_get_rdma_qpair_from_wc ...passed 00:09:18.514 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:09:18.514 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-12 07:19:52.335191] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:18.514 [2024-07-12 07:19:52.335507] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:09:18.514 passed 00:09:18.514 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-12 07:19:52.336141] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:09:18.514 [2024-07-12 07:19:52.336419] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:09:18.514 [2024-07-12 07:19:52.336715] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffd4efe95a0 on poll group 0x60c000000040 00:09:18.514 [2024-07-12 07:19:52.337052] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:09:18.514 [2024-07-12 07:19:52.337387] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:09:18.514 [2024-07-12 07:19:52.337694] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffd4efe95a0 on poll group 0x60c000000040 00:09:18.514 [2024-07-12 07:19:52.338067] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 705:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:09:18.514 passed 00:09:18.514 00:09:18.514 Run Summary: Type Total Ran Passed Failed Inactive 00:09:18.514 suites 1 1 n/a 0 0 00:09:18.514 tests 22 22 22 0 0 00:09:18.514 asserts 412 412 412 0 n/a 00:09:18.514 00:09:18.514 Elapsed time = 0.006 seconds 00:09:18.514 00:09:18.514 real 0m0.059s 00:09:18.514 user 0m0.029s 00:09:18.514 sys 0m0.022s 00:09:18.514 07:19:52 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:18.514 07:19:52 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:18.514 ************************************ 00:09:18.514 END TEST unittest_nvme_rdma 00:09:18.514 ************************************ 00:09:18.775 07:19:52 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:09:18.775 07:19:52 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:18.775 07:19:52 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:18.775 07:19:52 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:18.775 ************************************ 00:09:18.775 START TEST unittest_nvmf_transport 00:09:18.775 ************************************ 00:09:18.775 07:19:52 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:09:18.775 00:09:18.775 00:09:18.775 CUnit - A unit testing framework for C - Version 2.1-3 00:09:18.775 http://cunit.sourceforge.net/ 00:09:18.775 00:09:18.775 00:09:18.775 Suite: nvmf 00:09:18.775 Test: test_spdk_nvmf_transport_create ...[2024-07-12 07:19:52.455992] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:09:18.775 [2024-07-12 07:19:52.456552] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:09:18.775 [2024-07-12 07:19:52.456746] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 275:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:09:18.775 [2024-07-12 07:19:52.457064] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 258:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:09:18.775 passed 00:09:18.775 Test: test_nvmf_transport_poll_group_create ...passed 00:09:18.775 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-12 07:19:52.457880] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 792:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:09:18.775 [2024-07-12 07:19:52.458110] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 797:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:09:18.775 [2024-07-12 07:19:52.458249] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 802:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:09:18.775 passed 00:09:18.775 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:09:18.775 00:09:18.775 Run Summary: Type Total Ran Passed Failed Inactive 00:09:18.775 suites 1 1 n/a 0 0 00:09:18.775 tests 4 4 4 0 0 00:09:18.775 asserts 49 49 49 0 n/a 00:09:18.775 00:09:18.775 Elapsed time = 0.002 seconds 00:09:18.775 00:09:18.775 real 0m0.056s 00:09:18.775 user 0m0.020s 00:09:18.775 sys 0m0.035s 00:09:18.775 07:19:52 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:18.775 07:19:52 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:09:18.775 ************************************ 00:09:18.775 END TEST unittest_nvmf_transport 00:09:18.775 ************************************ 00:09:18.775 07:19:52 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:09:18.775 07:19:52 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:18.775 07:19:52 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:18.775 07:19:52 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:18.775 ************************************ 00:09:18.775 START TEST unittest_rdma 00:09:18.775 ************************************ 00:09:18.775 07:19:52 unittest.unittest_rdma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:09:18.775 00:09:18.775 00:09:18.775 CUnit - A unit testing framework for C - Version 2.1-3 00:09:18.775 http://cunit.sourceforge.net/ 00:09:18.775 00:09:18.775 00:09:18.775 Suite: rdma_common 00:09:18.775 Test: test_spdk_rdma_pd ...[2024-07-12 07:19:52.581577] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:09:18.775 [2024-07-12 07:19:52.582276] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:09:18.775 passed 00:09:18.775 00:09:18.775 Run Summary: Type Total Ran Passed Failed Inactive 00:09:18.775 suites 1 1 n/a 0 0 00:09:18.775 tests 1 1 1 0 0 00:09:18.775 asserts 31 31 31 0 n/a 00:09:18.776 00:09:18.776 Elapsed time = 0.001 seconds 00:09:18.776 00:09:18.776 real 0m0.044s 00:09:18.776 user 0m0.009s 00:09:18.776 sys 0m0.034s 00:09:18.776 07:19:52 unittest.unittest_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:18.776 07:19:52 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:18.776 ************************************ 00:09:18.776 END TEST unittest_rdma 00:09:18.776 ************************************ 00:09:19.034 07:19:52 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:19.034 07:19:52 unittest -- unit/unittest.sh@258 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:09:19.034 07:19:52 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:19.034 07:19:52 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:19.034 07:19:52 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:19.034 
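The nvmf transport suite above works the same way: nvmf_transport_create is handed an unavailable transport type, a zero io_unit_size, an io_unit_size larger than the iobuf pool's large buffers, and a max_io_size that is below 8KB, and the test asserts each rejection. A rough sketch of option checks of that shape, assuming hypothetical names (check_transport_opts, is_pow2, LARGE_BUF_SIZE) rather than SPDK's internal validator:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define LARGE_BUF_SIZE 65536u  /* iobuf large-buffer size seen in this run */

    static bool is_pow2(uint32_t v) { return v != 0 && (v & (v - 1)) == 0; }

    static int check_transport_opts(uint32_t io_unit_size, uint32_t max_io_size)
    {
        if (io_unit_size == 0)
            return -EINVAL;  /* "io_unit_size cannot be 0" */
        if (io_unit_size > LARGE_BUF_SIZE)
            return -EINVAL;  /* "io_unit_size ... larger than iobuf pool large buffer" */
        if (max_io_size < 8192 || !is_pow2(max_io_size))
            return -EINVAL;  /* "must be a power of 2 and be greater than or equal 8KB" */
        return 0;
    }

    int main(void)
    {
        printf("%d\n", check_transport_opts(0, 131072));      /* -22: zero unit size */
        printf("%d\n", check_transport_opts(131072, 131072)); /* -22: unit > pool buf */
        printf("%d\n", check_transport_opts(8192, 4096));     /* -22: 4096 < 8KB     */
        printf("%d\n", check_transport_opts(8192, 131072));   /*  0: accepted        */
        return 0;
    }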
************************************ 00:09:19.034 START TEST unittest_nvme_cuse 00:09:19.034 ************************************ 00:09:19.034 07:19:52 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:09:19.034 00:09:19.034 00:09:19.034 CUnit - A unit testing framework for C - Version 2.1-3 00:09:19.034 http://cunit.sourceforge.net/ 00:09:19.034 00:09:19.034 00:09:19.034 Suite: nvme_cuse 00:09:19.034 Test: test_cuse_nvme_submit_io_read_write ...passed 00:09:19.034 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:09:19.034 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:09:19.034 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:09:19.034 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:09:19.034 Test: test_cuse_nvme_submit_io ...[2024-07-12 07:19:52.707467] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:09:19.034 passed 00:09:19.034 Test: test_cuse_nvme_reset ...[2024-07-12 07:19:52.708107] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:09:19.034 passed 00:09:19.598 Test: test_nvme_cuse_stop ...passed 00:09:19.598 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:09:19.598 00:09:19.598 Run Summary: Type Total Ran Passed Failed Inactive 00:09:19.598 suites 1 1 n/a 0 0 00:09:19.598 tests 9 9 9 0 0 00:09:19.598 asserts 118 118 118 0 n/a 00:09:19.598 00:09:19.598 Elapsed time = 0.505 seconds 00:09:19.598 00:09:19.598 real 0m0.553s 00:09:19.598 user 0m0.268s 00:09:19.598 sys 0m0.285s 00:09:19.598 07:19:53 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:19.598 07:19:53 unittest.unittest_nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:09:19.598 ************************************ 00:09:19.598 END TEST unittest_nvme_cuse 00:09:19.598 ************************************ 00:09:19.598 07:19:53 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:09:19.598 07:19:53 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:19.598 07:19:53 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:19.598 07:19:53 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:19.598 ************************************ 00:09:19.598 START TEST unittest_nvmf 00:09:19.598 ************************************ 00:09:19.598 07:19:53 unittest.unittest_nvmf -- common/autotest_common.sh@1121 -- # unittest_nvmf 00:09:19.599 07:19:53 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:09:19.599 00:09:19.599 00:09:19.599 CUnit - A unit testing framework for C - Version 2.1-3 00:09:19.599 http://cunit.sourceforge.net/ 00:09:19.599 00:09:19.599 00:09:19.599 Suite: nvmf 00:09:19.599 Test: test_get_log_page ...[2024-07-12 07:19:53.320623] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:09:19.599 passed 00:09:19.599 Test: test_process_fabrics_cmd ...[2024-07-12 07:19:53.321380] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4677:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:09:19.599 passed 00:09:19.599 Test: test_connect ...[2024-07-12 07:19:53.322484] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1006:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:09:19.599 
[2024-07-12 07:19:53.322750] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 869:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:09:19.599 [2024-07-12 07:19:53.322912] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1045:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:09:19.599 [2024-07-12 07:19:53.323102] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:09:19.599 [2024-07-12 07:19:53.323341] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 880:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:09:19.599 [2024-07-12 07:19:53.323562] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 887:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:09:19.599 [2024-07-12 07:19:53.323615] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 893:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:09:19.599 [2024-07-12 07:19:53.323696] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 920:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:09:19.599 [2024-07-12 07:19:53.323920] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:09:19.599 [2024-07-12 07:19:53.324055] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 670:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:09:19.599 [2024-07-12 07:19:53.324603] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:09:19.599 [2024-07-12 07:19:53.324746] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:09:19.599 [2024-07-12 07:19:53.324881] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 689:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:09:19.599 [2024-07-12 07:19:53.325015] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 713:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:09:19.599 [2024-07-12 07:19:53.325241] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 293:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 00:09:19.599 [2024-07-12 07:19:53.325726] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 800:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group (nil)) 00:09:19.599 [2024-07-12 07:19:53.325974] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 800:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:09:19.599 passed 00:09:19.599 Test: test_get_ns_id_desc_list ...passed 00:09:19.599 Test: test_identify_ns ...[2024-07-12 07:19:53.327314] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:19.599 [2024-07-12 07:19:53.328871] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:09:19.599 [2024-07-12 07:19:53.329381] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:09:19.599 passed 00:09:19.599 Test: test_identify_ns_iocs_specific ...[2024-07-12 07:19:53.330207] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:19.599 [2024-07-12 
07:19:53.331584] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:19.599 passed 00:09:19.599 Test: test_reservation_write_exclusive ...passed 00:09:19.599 Test: test_reservation_exclusive_access ...passed 00:09:19.599 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:09:19.599 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:09:19.599 Test: test_reservation_notification_log_page ...passed 00:09:19.599 Test: test_get_dif_ctx ...passed 00:09:19.599 Test: test_set_get_features ...[2024-07-12 07:19:53.334231] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1642:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:09:19.599 [2024-07-12 07:19:53.334420] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1642:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:09:19.599 [2024-07-12 07:19:53.334563] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1653:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:09:19.599 [2024-07-12 07:19:53.334708] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1729:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:09:19.599 passed 00:09:19.599 Test: test_identify_ctrlr ...passed 00:09:19.599 Test: test_identify_ctrlr_iocs_specific ...passed 00:09:19.599 Test: test_custom_admin_cmd ...passed 00:09:19.599 Test: test_fused_compare_and_write ...[2024-07-12 07:19:53.337733] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4212:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:09:19.599 [2024-07-12 07:19:53.337903] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4201:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:09:19.599 [2024-07-12 07:19:53.338070] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4219:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:09:19.599 passed 00:09:19.599 Test: test_multi_async_event_reqs ...passed 00:09:19.599 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:09:19.599 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:09:19.599 Test: test_multi_async_events ...passed 00:09:19.599 Test: test_rae ...passed 00:09:19.599 Test: test_nvmf_ctrlr_create_destruct ...passed 00:09:19.599 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:09:19.599 Test: test_spdk_nvmf_request_zcopy_start ...[2024-07-12 07:19:53.340954] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4677:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:09:19.599 [2024-07-12 07:19:53.341165] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4703:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:09:19.599 passed 00:09:19.599 Test: test_zcopy_read ...passed 00:09:19.599 Test: test_zcopy_write ...passed 00:09:19.599 Test: test_nvmf_property_set ...passed 00:09:19.599 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-12 07:19:53.342231] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1940:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:09:19.599 [2024-07-12 07:19:53.342423] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1940:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:09:19.599 passed 00:09:19.599 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-12 07:19:53.342730] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1963:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:09:19.599 [2024-07-12 07:19:53.342875] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1969:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:09:19.599 [2024-07-12 07:19:53.343075] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1981:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:09:19.599 passed 00:09:19.599 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:09:19.599 Test: test_nvmf_check_qpair_active ...[2024-07-12 07:19:53.343694] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4677:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:09:19.599 [2024-07-12 07:19:53.343873] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4691:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:09:19.599 [2024-07-12 07:19:53.344025] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4703:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:09:19.599 [2024-07-12 07:19:53.344195] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4703:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:09:19.599 [2024-07-12 07:19:53.344330] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4703:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:09:19.599 passed 00:09:19.599 00:09:19.599 Run Summary: Type Total Ran Passed Failed Inactive 00:09:19.599 suites 1 1 n/a 0 0 00:09:19.599 tests 32 32 32 0 0 00:09:19.599 asserts 977 977 977 0 n/a 00:09:19.599 00:09:19.599 Elapsed time = 0.016 seconds 00:09:19.599 07:19:53 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:09:19.599 00:09:19.599 00:09:19.599 CUnit - A unit testing framework for C - Version 2.1-3 00:09:19.599 http://cunit.sourceforge.net/ 00:09:19.599 00:09:19.599 00:09:19.599 Suite: nvmf 00:09:19.599 Test: test_get_rw_params ...passed 00:09:19.599 Test: test_get_rw_ext_params ...passed 00:09:19.599 Test: test_lba_in_range ...passed 00:09:19.599 Test: test_get_dif_ctx ...passed 00:09:19.599 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:09:19.599 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-12 07:19:53.394768] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:09:19.599 [2024-07-12 07:19:53.395155] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:09:19.599 [2024-07-12 07:19:53.395322] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 462:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:09:19.599 passed 00:09:19.599 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-07-12 07:19:53.395573] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:09:19.599 [2024-07-12 07:19:53.395721] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 972:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:09:19.599 passed 00:09:19.599 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-12 07:19:53.395990] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 
00:09:19.599 [2024-07-12 07:19:53.396056] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 408:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:09:19.599 [2024-07-12 07:19:53.396144] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:09:19.599 [2024-07-12 07:19:53.396293] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:09:19.599 passed 00:09:19.599 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:09:19.599 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:09:19.599 00:09:19.599 Run Summary: Type Total Ran Passed Failed Inactive 00:09:19.599 suites 1 1 n/a 0 0 00:09:19.599 tests 10 10 10 0 0 00:09:19.599 asserts 159 159 159 0 n/a 00:09:19.599 00:09:19.599 Elapsed time = 0.001 seconds 00:09:19.599 07:19:53 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:09:19.599 00:09:19.599 00:09:19.600 CUnit - A unit testing framework for C - Version 2.1-3 00:09:19.600 http://cunit.sourceforge.net/ 00:09:19.600 00:09:19.600 00:09:19.600 Suite: nvmf 00:09:19.600 Test: test_discovery_log ...passed 00:09:19.600 Test: test_discovery_log_with_filters ...passed 00:09:19.600 00:09:19.600 Run Summary: Type Total Ran Passed Failed Inactive 00:09:19.600 suites 1 1 n/a 0 0 00:09:19.600 tests 2 2 2 0 0 00:09:19.600 asserts 238 238 238 0 n/a 00:09:19.600 00:09:19.600 Elapsed time = 0.003 seconds 00:09:19.600 07:19:53 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:09:19.858 00:09:19.858 00:09:19.858 CUnit - A unit testing framework for C - Version 2.1-3 00:09:19.858 http://cunit.sourceforge.net/ 00:09:19.858 00:09:19.858 00:09:19.858 Suite: nvmf 00:09:19.858 Test: nvmf_test_create_subsystem ...[2024-07-12 07:19:53.497577] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:09:19.858 [2024-07-12 07:19:53.497925] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:09:19.858 [2024-07-12 07:19:53.498161] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:09:19.858 [2024-07-12 07:19:53.498328] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:09:19.858 [2024-07-12 07:19:53.498429] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:09:19.858 [2024-07-12 07:19:53.498498] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:09:19.858 [2024-07-12 07:19:53.498647] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 
00:09:19.858 [2024-07-12 07:19:53.498749] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:09:19.858 [2024-07-12 07:19:53.498861] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:09:19.858 [2024-07-12 07:19:53.498958] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:09:19.858 [2024-07-12 07:19:53.499012] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:09:19.858 [2024-07-12 07:19:53.499104] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:09:19.858 [2024-07-12 07:19:53.499289] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:09:19.858 [2024-07-12 07:19:53.499481] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:09:19.858 [2024-07-12 07:19:53.499665] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:09:19.858 [2024-07-12 07:19:53.499770] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:09:19.858 [2024-07-12 07:19:53.499908] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:09:19.858 [2024-07-12 07:19:53.500040] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:09:19.858 [2024-07-12 07:19:53.500111] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:09:19.858 [2024-07-12 07:19:53.500301] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:09:19.858 [2024-07-12 07:19:53.500368] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:09:19.858 [2024-07-12 07:19:53.500446] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:09:19.858 passed 00:09:19.858 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-12 07:19:53.500761] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:09:19.858 [2024-07-12 07:19:53.500890] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2010:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:09:19.858 passed 00:09:19.858 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...[2024-07-12 07:19:53.501254] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2138:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 
00:09:19.858 passed 00:09:19.858 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:09:19.858 Test: test_spdk_nvmf_ns_visible ...[2024-07-12 07:19:53.501886] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:09:19.858 passed 00:09:19.858 Test: test_reservation_register ...[2024-07-12 07:19:53.502480] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3077:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:19.858 [2024-07-12 07:19:53.502678] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3135:nvmf_ns_reservation_register: *ERROR*: No registrant 00:09:19.858 passed 00:09:19.858 Test: test_reservation_register_with_ptpl ...passed 00:09:19.858 Test: test_reservation_acquire_preempt_1 ...[2024-07-12 07:19:53.503963] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3077:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:19.858 passed 00:09:19.858 Test: test_reservation_acquire_release_with_ptpl ...passed 00:09:19.858 Test: test_reservation_release ...[2024-07-12 07:19:53.505897] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3077:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:19.858 passed 00:09:19.858 Test: test_reservation_unregister_notification ...[2024-07-12 07:19:53.506369] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3077:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:19.858 passed 00:09:19.858 Test: test_reservation_release_notification ...[2024-07-12 07:19:53.506795] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3077:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:19.858 passed 00:09:19.858 Test: test_reservation_release_notification_write_exclusive ...[2024-07-12 07:19:53.507190] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3077:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:19.858 passed 00:09:19.858 Test: test_reservation_clear_notification ...[2024-07-12 07:19:53.507605] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3077:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:19.858 passed 00:09:19.858 Test: test_reservation_preempt_notification ...[2024-07-12 07:19:53.507974] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3077:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:09:19.858 passed 00:09:19.858 Test: test_spdk_nvmf_ns_event ...passed 00:09:19.858 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:09:19.858 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:09:19.858 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-12 07:19:53.509142] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 264:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:09:19.858 [2024-07-12 07:19:53.509338] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:09:19.858 passed 00:09:19.858 Test: test_nvmf_ns_reservation_report ...[2024-07-12 07:19:53.509650] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3440:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:09:19.858 passed 00:09:19.858 Test: test_nvmf_nqn_is_valid ...[2024-07-12 
07:19:53.509904] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:09:19.858 [2024-07-12 07:19:53.509994] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:b8fdb953-1a03-4b6c-bf1d-add0a0f1e89": uuid is not the correct length 00:09:19.858 [2024-07-12 07:19:53.510096] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:09:19.858 passed 00:09:19.858 Test: test_nvmf_ns_reservation_restore ...[2024-07-12 07:19:53.510340] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2634:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:09:19.858 passed 00:09:19.858 Test: test_nvmf_subsystem_state_change ...passed 00:09:19.858 Test: test_nvmf_reservation_custom_ops ...passed 00:09:19.858 00:09:19.858 Run Summary: Type Total Ran Passed Failed Inactive 00:09:19.858 suites 1 1 n/a 0 0 00:09:19.858 tests 24 24 24 0 0 00:09:19.858 asserts 499 499 499 0 n/a 00:09:19.858 00:09:19.858 Elapsed time = 0.009 seconds 00:09:19.858 07:19:53 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:09:19.858 00:09:19.858 00:09:19.858 CUnit - A unit testing framework for C - Version 2.1-3 00:09:19.858 http://cunit.sourceforge.net/ 00:09:19.858 00:09:19.858 00:09:19.858 Suite: nvmf 00:09:19.858 Test: test_nvmf_tcp_create ...[2024-07-12 07:19:53.585252] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:09:19.858 passed 00:09:19.858 Test: test_nvmf_tcp_destroy ...passed 00:09:19.858 Test: test_nvmf_tcp_poll_group_create ...passed 00:09:19.858 Test: test_nvmf_tcp_send_c2h_data ...passed 00:09:19.858 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:09:19.858 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:09:19.858 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:09:19.858 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-12 07:19:53.695493] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:19.858 [2024-07-12 07:19:53.695598] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff53076c40 is same with the state(5) to be set 00:09:19.858 [2024-07-12 07:19:53.695703] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff53076c40 is same with the state(5) to be set 00:09:19.858 [2024-07-12 07:19:53.695811] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:09:19.858 [2024-07-12 07:19:53.695860] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff53076c40 is same with the state(5) to be set 00:09:19.858 passed 00:09:19.858 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:09:19.859 Test: test_nvmf_tcp_icreq_handle ...[2024-07-12 07:19:53.696144] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2113:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:09:19.859 [2024-07-12 07:19:53.696251] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, 
00:09:19.858 07:19:53 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut
00:09:19.858
00:09:19.858
00:09:19.858 CUnit - A unit testing framework for C - Version 2.1-3
00:09:19.858 http://cunit.sourceforge.net/
00:09:19.858
00:09:19.858
00:09:19.858 Suite: nvmf
00:09:19.858 Test: test_nvmf_tcp_create ...[2024-07-12 07:19:53.585252] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes
00:09:19.858 passed
00:09:19.858 Test: test_nvmf_tcp_destroy ...passed
00:09:19.858 Test: test_nvmf_tcp_poll_group_create ...passed
00:09:19.858 Test: test_nvmf_tcp_send_c2h_data ...passed
00:09:19.858 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed
00:09:19.858 Test: test_nvmf_tcp_in_capsule_data_handle ...passed
00:09:19.858 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed
00:09:19.858 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-12 07:19:53.695493] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:09:19.858 [2024-07-12 07:19:53.695598] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff53076c40 is same with the state(5) to be set
00:09:19.858 [2024-07-12 07:19:53.695703] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff53076c40 is same with the state(5) to be set
00:09:19.858 [2024-07-12 07:19:53.695811] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:09:19.858 [2024-07-12 07:19:53.695860] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff53076c40 is same with the state(5) to be set
00:09:19.858 passed
00:09:19.858 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed
00:09:19.858 Test: test_nvmf_tcp_icreq_handle ...[2024-07-12 07:19:53.696144] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2113:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1
00:09:19.858 [2024-07-12 07:19:53.696251] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:09:19.859 [2024-07-12 07:19:53.696374] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff53076c40 is same with the state(5) to be set
00:09:19.859 [2024-07-12 07:19:53.696461] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2113:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1
00:09:19.859 [2024-07-12 07:19:53.696550] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff53076c40 is same with the state(5) to be set
00:09:19.859 [2024-07-12 07:19:53.696666] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:09:19.859 [2024-07-12 07:19:53.696724] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff53076c40 is same with the state(5) to be set
00:09:19.859 [2024-07-12 07:19:53.696828] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2
00:09:19.859 passed
00:09:19.859 Test: test_nvmf_tcp_check_xfer_type ...[2024-07-12 07:19:53.697032] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff53076c40 is same with the state(5) to be set
00:09:19.859 passed
00:09:19.859 Test: test_nvmf_tcp_invalid_sgl ...[2024-07-12 07:19:53.697226] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2508:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000
00:09:19.859 [2024-07-12 07:19:53.697299] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:09:19.859 [2024-07-12 07:19:53.697350] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff53076c40 is same with the state(5) to be set
00:09:19.859 passed
00:09:19.859 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-12 07:19:53.697492] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2240:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7fff530779a0
00:09:19.859 [2024-07-12 07:19:53.697586] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:09:19.859 [2024-07-12 07:19:53.697689] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff53077100 is same with the state(5) to be set
00:09:19.859 [2024-07-12 07:19:53.697794] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2297:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7fff53077100
00:09:19.859 [2024-07-12 07:19:53.697852] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:09:19.859 [2024-07-12 07:19:53.697948] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff53077100 is same with the state(5) to be set
00:09:19.859 [2024-07-12 07:19:53.698097] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2250:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated
00:09:19.859 [2024-07-12 07:19:53.698155] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:09:19.859 [2024-07-12 07:19:53.698216] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff53077100 is same with the state(5) to be set
00:09:19.859 [2024-07-12 07:19:53.698384] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2289:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05
00:09:19.859 [2024-07-12 07:19:53.698437] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:09:19.859 [2024-07-12 07:19:53.698490] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff53077100 is same with the state(5) to be set
00:09:19.859 [2024-07-12 07:19:53.698575] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:09:19.859 [2024-07-12 07:19:53.698721] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff53077100 is same with the state(5) to be set
00:09:19.859 [2024-07-12 07:19:53.698804] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:09:19.859 [2024-07-12 07:19:53.698901] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff53077100 is same with the state(5) to be set
00:09:19.859 [2024-07-12 07:19:53.698966] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:09:19.859 [2024-07-12 07:19:53.699065] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff53077100 is same with the state(5) to be set
00:09:19.859 [2024-07-12 07:19:53.699195] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:09:19.859 [2024-07-12 07:19:53.699258] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff53077100 is same with the state(5) to be set
00:09:19.859 [2024-07-12 07:19:53.699331] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:09:19.859 [2024-07-12 07:19:53.699540] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff53077100 is same with the state(5) to be set
00:09:19.859 [2024-07-12 07:19:53.699613] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:09:19.859 [2024-07-12 07:19:53.699697] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff53077100 is same with the state(5) to be set
00:09:19.859 passed
00:09:19.859 Test: test_nvmf_tcp_tls_add_remove_credentials ...passed
00:09:19.859 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-12 07:19:53.719480] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small!
00:09:19.859 [2024-07-12 07:19:53.719655] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested!
00:09:19.859 passed
00:09:19.859 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-12 07:19:53.720070] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested!
00:09:19.859 [2024-07-12 07:19:53.720211] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key!
00:09:19.859 passed
00:09:19.859 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-07-12 07:19:53.720525] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested!
00:09:19.859 [2024-07-12 07:19:53.720637] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key!
00:09:19.859 passed
00:09:19.859
00:09:19.859 Run Summary: Type Total Ran Passed Failed Inactive
00:09:19.859 suites 1 1 n/a 0 0
00:09:19.859 tests 17 17 17 0 0
00:09:19.859 asserts 222 222 222 0 n/a
00:09:19.859
00:09:19.859 Elapsed time = 0.161 seconds
00:09:20.116 07:19:53 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut
00:09:20.116
00:09:20.116
00:09:20.116 CUnit - A unit testing framework for C - Version 2.1-3
00:09:20.116 http://cunit.sourceforge.net/
00:09:20.116
00:09:20.116
00:09:20.116 Suite: nvmf
00:09:20.116 Test: test_nvmf_tgt_create_poll_group ...passed
00:09:20.116
00:09:20.116 Run Summary: Type Total Ran Passed Failed Inactive
00:09:20.116 suites 1 1 n/a 0 0
00:09:20.116 tests 1 1 1 0 0
00:09:20.116 asserts 17 17 17 0 n/a
00:09:20.116
00:09:20.116 Elapsed time = 0.023 seconds
00:09:20.116
00:09:20.116 real 0m0.643s
00:09:20.116 user 0m0.246s
00:09:20.116 sys 0m0.376s
00:09:20.116 07:19:53 unittest.unittest_nvmf -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:20.116 07:19:53 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x
00:09:20.116 ************************************
00:09:20.116 END TEST unittest_nvmf
00:09:20.116 ************************************
00:09:20.116 07:19:53 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:09:20.394 07:19:54 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:09:20.394 07:19:54 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut
00:09:20.394 07:19:54 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:09:20.394 07:19:54 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable
00:09:20.394 07:19:54 unittest -- common/autotest_common.sh@10 -- # set +x
00:09:20.394 ************************************
00:09:20.394 START TEST unittest_nvmf_rdma
00:09:20.394 ************************************
00:09:20.394 07:19:54 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut
00:09:20.394
00:09:20.394
00:09:20.394 CUnit - A unit testing framework for C - Version 2.1-3
00:09:20.394 http://cunit.sourceforge.net/
00:09:20.394
00:09:20.394
00:09:20.394 Suite: nvmf
00:09:20.394 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-12 07:19:54.052383] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1858:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000
00:09:20.394 [2024-07-12 07:19:54.053037] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1908:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0
00:09:20.394 [2024-07-12 07:19:54.053328] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1908:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000
00:09:20.394 passed
00:09:20.394 Test: test_spdk_nvmf_rdma_request_process ...passed
00:09:20.394 Test: test_nvmf_rdma_get_optimal_poll_group ...passed
00:09:20.394 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed
00:09:20.394 Test: test_nvmf_rdma_opts_init ...passed
00:09:20.394 Test: test_nvmf_rdma_request_free_data ...passed
00:09:20.394 Test: test_nvmf_rdma_resources_create ...passed
00:09:20.394 Test: test_nvmf_rdma_qpair_compare ...passed
00:09:20.394 Test: test_nvmf_rdma_resize_cq ...[2024-07-12 07:19:54.060215] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 949:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0
00:09:20.394 Using CQ of insufficient size may lead to CQ overrun
00:09:20.394 [2024-07-12 07:19:54.060613] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 954:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3)
00:09:20.394 [2024-07-12 07:19:54.060994] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 962:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory
00:09:20.394 passed
00:09:20.394
00:09:20.394 Run Summary: Type Total Ran Passed Failed Inactive
00:09:20.394 suites 1 1 n/a 0 0
00:09:20.394 tests 9 9 9 0 0
00:09:20.394 asserts 579 579 579 0 n/a
00:09:20.394
00:09:20.394 Elapsed time = 0.006 seconds
00:09:20.394
00:09:20.394 real 0m0.063s
00:09:20.394 user 0m0.023s
00:09:20.394 sys 0m0.036s
00:09:20.394 07:19:54 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:20.394 07:19:54 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:09:20.394 ************************************
00:09:20.394 END TEST unittest_nvmf_rdma
00:09:20.394 ************************************
00:09:20.394 07:19:54 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:09:20.394 07:19:54 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi
00:09:20.394 07:19:54 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:09:20.394 07:19:54 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable
00:09:20.394 07:19:54 unittest -- common/autotest_common.sh@10 -- # set +x
00:09:20.394 ************************************
00:09:20.394 START TEST unittest_scsi
00:09:20.394 ************************************
00:09:20.394 07:19:54 unittest.unittest_scsi -- common/autotest_common.sh@1121 -- # unittest_scsi
00:09:20.394 07:19:54 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut
00:09:20.394
00:09:20.394
00:09:20.394 CUnit - A unit testing framework for C - Version 2.1-3
00:09:20.394 http://cunit.sourceforge.net/
00:09:20.394
00:09:20.394
00:09:20.394 Suite: dev_suite
00:09:20.394 Test: dev_destruct_null_dev ...passed
00:09:20.394 Test: dev_destruct_zero_luns ...passed
00:09:20.394 Test: dev_destruct_null_lun ...passed
00:09:20.394 Test: dev_destruct_success ...passed
00:09:20.394 Test: dev_construct_num_luns_zero ...[2024-07-12 07:19:54.175679] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified
00:09:20.394 passed
00:09:20.394 Test: dev_construct_no_lun_zero ...[2024-07-12 07:19:54.176755] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified
00:09:20.394 passed
00:09:20.394 Test: dev_construct_null_lun ...[2024-07-12 07:19:54.177414] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0
00:09:20.394 passed
00:09:20.394 Test: dev_construct_name_too_long ...[2024-07-12 07:19:54.178081] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255
00:09:20.394 passed
00:09:20.394 Test: dev_construct_success ...passed
00:09:20.394 Test: dev_construct_success_lun_zero_not_first ...passed
00:09:20.394 Test: dev_queue_mgmt_task_success ...passed
00:09:20.394 Test: dev_queue_task_success ...passed
00:09:20.394 Test: dev_stop_success ...passed
00:09:20.394 Test: dev_add_port_max_ports ...[2024-07-12 07:19:54.180834] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports
00:09:20.394 passed
00:09:20.394 Test: dev_add_port_construct_failure1 ...[2024-07-12 07:19:54.181670] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long
00:09:20.394 passed
00:09:20.394 Test: dev_add_port_construct_failure2 ...[2024-07-12 07:19:54.182408] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1)
00:09:20.394 passed
00:09:20.394 Test: dev_add_port_success1 ...passed
00:09:20.394 Test: dev_add_port_success2 ...passed
00:09:20.394 Test: dev_add_port_success3 ...passed
00:09:20.394 Test: dev_find_port_by_id_num_ports_zero ...passed
00:09:20.394 Test: dev_find_port_by_id_id_not_found_failure ...passed
00:09:20.394 Test: dev_find_port_by_id_success ...passed
00:09:20.394 Test: dev_add_lun_bdev_not_found ...passed
00:09:20.394 Test: dev_add_lun_no_free_lun_id ...[2024-07-12 07:19:54.185929] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found
00:09:20.394 passed
00:09:20.394 Test: dev_add_lun_success1 ...passed
00:09:20.394 Test: dev_add_lun_success2 ...passed
00:09:20.394 Test: dev_check_pending_tasks ...passed
00:09:20.394 Test: dev_iterate_luns ...passed
00:09:20.394 Test: dev_find_free_lun ...passed
00:09:20.394
00:09:20.394 Run Summary: Type Total Ran Passed Failed Inactive
00:09:20.394 suites 1 1 n/a 0 0
00:09:20.394 tests 29 29 29 0 0
00:09:20.394 asserts 97 97 97 0 n/a
00:09:20.394
00:09:20.394 Elapsed time = 0.005 seconds
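The dev_suite failures above document the argument checks spdk_scsi_dev_construct_ext() and spdk_scsi_dev_add_port() perform: at least one LUN, a LUN 0 present, a non-NULL lun per entry, device names no longer than 255 characters, and at most 4 ports per device. A hedged pre-flight sketch of the same constraints; the struct and function names here are illustrative, not SPDK API, and the limits are taken directly from the messages above:

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    #define SCSI_DEV_MAX_NAME  255  /* "name longer than maximum allowed length 255" */
    #define SCSI_DEV_MAX_PORTS 4    /* "device already has 4 ports" */

    struct lun_spec { int id; const char *bdev_name; };

    /* Illustrative validation mirroring the errors dev_suite expects. */
    static bool dev_construct_args_ok(const char *name,
                                      const struct lun_spec *luns, size_t n_luns)
    {
        bool have_lun0 = false;

        if (strlen(name) > SCSI_DEV_MAX_NAME)
            return false;
        if (n_luns == 0)                    /* "no LUNs specified" */
            return false;
        for (size_t i = 0; i < n_luns; i++) {
            if (luns[i].bdev_name == NULL)  /* "NULL spdk_scsi_lun for LUN 0" */
                return false;
            if (luns[i].id == 0)
                have_lun0 = true;
        }
        return have_lun0;                   /* "no LUN 0 specified" */
    }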
00:09:20.394 07:19:54 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut
00:09:20.394
00:09:20.394
00:09:20.394 CUnit - A unit testing framework for C - Version 2.1-3
00:09:20.394 http://cunit.sourceforge.net/
00:09:20.394
00:09:20.394
00:09:20.394 Suite: lun_suite
00:09:20.394 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-07-12 07:19:54.241488] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported
00:09:20.394 passed
00:09:20.394 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-07-12 07:19:54.242212] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported
00:09:20.394 passed
00:09:20.394 Test: lun_task_mgmt_execute_lun_reset ...passed
00:09:20.394 Test: lun_task_mgmt_execute_target_reset ...passed
00:09:20.394 Test: lun_task_mgmt_execute_invalid_case ...[2024-07-12 07:19:54.242809] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported
00:09:20.394 passed
00:09:20.394 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed
00:09:20.394 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed
00:09:20.394 Test: lun_append_task_null_lun_not_supported ...passed
00:09:20.394 Test: lun_execute_scsi_task_pending ...passed
00:09:20.394 Test: lun_execute_scsi_task_complete ...passed
00:09:20.394 Test: lun_execute_scsi_task_resize ...passed
00:09:20.394 Test: lun_destruct_success ...passed
00:09:20.394 Test: lun_construct_null_ctx ...[2024-07-12 07:19:54.244236] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL
00:09:20.394 passed
00:09:20.394 Test: lun_construct_success ...passed
00:09:20.394 Test: lun_reset_task_wait_scsi_task_complete ...passed
00:09:20.394 Test: lun_reset_task_suspend_scsi_task ...passed
00:09:20.394 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed
00:09:20.394 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed
00:09:20.394
00:09:20.394 Run Summary: Type Total Ran Passed Failed Inactive
00:09:20.394 suites 1 1 n/a 0 0
00:09:20.394 tests 18 18 18 0 0
00:09:20.394 asserts 153 153 153 0 n/a
00:09:20.394
00:09:20.394 Elapsed time = 0.002 seconds
00:09:20.394 07:19:54 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut
00:09:20.651
00:09:20.651
00:09:20.651 CUnit - A unit testing framework for C - Version 2.1-3
00:09:20.651 http://cunit.sourceforge.net/
00:09:20.651
00:09:20.651
00:09:20.651 Suite: scsi_suite
00:09:20.651 Test: scsi_init ...passed
00:09:20.651
00:09:20.651 Run Summary: Type Total Ran Passed Failed Inactive
00:09:20.651 suites 1 1 n/a 0 0
00:09:20.651 tests 1 1 1 0 0
00:09:20.651 asserts 1 1 1 0 n/a
00:09:20.651
00:09:20.651 Elapsed time = 0.000 seconds
00:09:20.651 07:19:54 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut
00:09:20.651
00:09:20.651
00:09:20.651 CUnit - A unit testing framework for C - Version 2.1-3
00:09:20.651 http://cunit.sourceforge.net/
00:09:20.651
00:09:20.651
00:09:20.651 Suite: translation_suite
00:09:20.651 Test: mode_select_6_test ...passed
00:09:20.651 Test: mode_select_6_test2 ...passed
00:09:20.651 Test: mode_sense_6_test ...passed
00:09:20.651 Test: mode_sense_10_test ...passed
00:09:20.651 Test: inquiry_evpd_test ...passed
00:09:20.651 Test: inquiry_standard_test ...passed
00:09:20.651 Test: inquiry_overflow_test ...passed
00:09:20.651 Test: task_complete_test ...passed
00:09:20.651 Test: lba_range_test ...passed
00:09:20.651 Test: xfer_len_test ...[2024-07-12 07:19:54.325989] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192
00:09:20.651 passed
00:09:20.651 Test: xfer_test ...passed
00:09:20.651 Test: scsi_name_padding_test ...passed
00:09:20.651 Test: get_dif_ctx_test ...passed
00:09:20.651 Test: unmap_split_test ...passed
00:09:20.651
00:09:20.651 Run Summary: Type Total Ran Passed Failed Inactive
00:09:20.651 suites 1 1 n/a 0 0
00:09:20.651 tests 14 14 14 0 0
00:09:20.651 asserts 1205 1205 1205 0 n/a
00:09:20.651
00:09:20.651 Elapsed time = 0.005 seconds
00:09:20.651 07:19:54 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut
00:09:20.651
00:09:20.651
00:09:20.651 CUnit - A unit testing framework for C - Version 2.1-3
00:09:20.651 http://cunit.sourceforge.net/
00:09:20.651
00:09:20.651
00:09:20.651 Suite: reservation_suite
00:09:20.651 Test: test_reservation_register ...[2024-07-12 07:19:54.355874] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:09:20.651 passed
00:09:20.651 Test: test_reservation_reserve ...[2024-07-12 07:19:54.356425] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:09:20.651 [2024-07-12 07:19:54.356558] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1
00:09:20.651 passed[2024-07-12 07:19:54.356680] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match
00:09:20.651
00:09:20.651 Test: test_reservation_preempt_non_all_regs ...[2024-07-12 07:19:54.356834] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:09:20.651 [2024-07-12 07:19:54.356957] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey
00:09:20.651 passed
00:09:20.651 Test: test_reservation_preempt_all_regs ...[2024-07-12 07:19:54.357291] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:09:20.651 passed
00:09:20.651 Test: test_reservation_cmds_conflict ...[2024-07-12 07:19:54.357575] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:09:20.651 [2024-07-12 07:19:54.357681] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a
00:09:20.651 [2024-07-12 07:19:54.357799] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28
00:09:20.651 [2024-07-12 07:19:54.357860] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a
00:09:20.651 [2024-07-12 07:19:54.357964] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28
00:09:20.651 [2024-07-12 07:19:54.358019] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a
00:09:20.651 passed
00:09:20.651 Test: test_scsi2_reserve_release ...passed
00:09:20.651 Test: test_pr_with_scsi2_reserve_release ...[2024-07-12 07:19:54.358319] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:09:20.651 passed
00:09:20.651
00:09:20.651 Run Summary: Type Total Ran Passed Failed Inactive
00:09:20.651 suites 1 1 n/a 0 0
00:09:20.651 tests 7 7 7 0 0
00:09:20.651 asserts 257 257 257 0 n/a
00:09:20.651
00:09:20.651 Elapsed time = 0.002 seconds
00:09:20.651
00:09:20.651 real 0m0.224s
00:09:20.651 user 0m0.124s
00:09:20.651 sys 0m0.082s
00:09:20.651 07:19:54 unittest.unittest_scsi -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:20.651 07:19:54 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x
00:09:20.651 ************************************
00:09:20.651 END TEST unittest_scsi
00:09:20.651 ************************************
00:09:20.651 07:19:54 unittest -- unit/unittest.sh@278 -- # uname -s
00:09:20.651 07:19:54 unittest -- unit/unittest.sh@278 -- # '[' Linux = Linux ']'
00:09:20.651 07:19:54 unittest -- unit/unittest.sh@279 -- # run_test unittest_sock unittest_sock
00:09:20.651 07:19:54 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:09:20.651 07:19:54 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable
00:09:20.651 07:19:54 unittest -- common/autotest_common.sh@10 -- # set +x
00:09:20.651 ************************************
00:09:20.651 START TEST unittest_sock
00:09:20.651 ************************************
00:09:20.651 07:19:54 unittest.unittest_sock -- common/autotest_common.sh@1121 -- # unittest_sock
00:09:20.651 07:19:54 unittest.unittest_sock -- unit/unittest.sh@125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut
00:09:20.651
00:09:20.651
00:09:20.651 CUnit - A unit testing framework for C - Version 2.1-3
00:09:20.651 http://cunit.sourceforge.net/
00:09:20.651
00:09:20.651
00:09:20.651 Suite: sock
00:09:20.651 Test: posix_sock ...passed
00:09:20.651 Test: ut_sock ...passed
00:09:20.651 Test: posix_sock_group ...passed
00:09:20.651 Test: ut_sock_group ...passed
00:09:20.651 Test: posix_sock_group_fairness ...passed
00:09:20.651 Test: _posix_sock_close ...passed
00:09:20.651 Test: sock_get_default_opts ...passed
00:09:20.651 Test: ut_sock_impl_get_set_opts ...passed
00:09:20.651 Test: posix_sock_impl_get_set_opts ...passed
00:09:20.651 Test: ut_sock_map ...passed
00:09:20.651 Test: override_impl_opts ...passed
00:09:20.651 Test: ut_sock_group_get_ctx ...passed
00:09:20.651
00:09:20.651 Run Summary: Type Total Ran Passed Failed Inactive
00:09:20.651 suites 1 1 n/a 0 0
00:09:20.651 tests 12 12 12 0 0
00:09:20.651 asserts 349 349 349 0 n/a
00:09:20.651
00:09:20.651 Elapsed time = 0.009 seconds
00:09:20.908 07:19:54 unittest.unittest_sock -- unit/unittest.sh@126 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut
00:09:20.908
00:09:20.908
00:09:20.908 CUnit - A unit testing framework for C - Version 2.1-3
00:09:20.908 http://cunit.sourceforge.net/
00:09:20.908
00:09:20.908
00:09:20.908 Suite: posix
00:09:20.909 Test: flush ...passed
00:09:20.909
00:09:20.909 Run Summary: Type Total Ran Passed Failed Inactive
00:09:20.909 suites 1 1 n/a 0 0
00:09:20.909 tests 1 1 1 0 0
00:09:20.909 asserts 28 28 28 0 n/a
00:09:20.909
00:09:20.909 Elapsed time = 0.000 seconds
00:09:20.909 07:19:54 unittest.unittest_sock -- unit/unittest.sh@128 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h
00:09:20.909
00:09:20.909 real 0m0.118s
00:09:20.909 user 0m0.033s
00:09:20.909 sys 0m0.059s
00:09:20.909 07:19:54 unittest.unittest_sock -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:20.909 07:19:54 unittest.unittest_sock -- common/autotest_common.sh@10 -- # set +x
00:09:20.909 ************************************
00:09:20.909 END TEST unittest_sock
00:09:20.909 ************************************
00:09:20.909 07:19:54 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut
00:09:20.909 07:19:54 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:09:20.909 07:19:54 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable
00:09:20.909 07:19:54 unittest -- common/autotest_common.sh@10 -- # set +x
00:09:20.909 ************************************
00:09:20.909 START TEST unittest_thread
00:09:20.909 ************************************
00:09:20.909 07:19:54 unittest.unittest_thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut
00:09:20.909
00:09:20.909
00:09:20.909 CUnit - A unit testing framework for C - Version 2.1-3
00:09:20.909 http://cunit.sourceforge.net/
00:09:20.909
00:09:20.909
00:09:20.909 Suite: io_channel
00:09:20.909 Test: thread_alloc ...passed
00:09:20.909 Test: thread_send_msg ...passed
00:09:20.909 Test: thread_poller ...passed
00:09:20.909 Test: poller_pause ...passed
00:09:20.909 Test: thread_for_each ...passed
00:09:20.909 Test: for_each_channel_remove ...passed
00:09:20.909 Test: for_each_channel_unreg ...[2024-07-12 07:19:54.688595] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2173:spdk_io_device_register: *ERROR*: io_device 0x7ffcfa4ea510 already registered (old:0x613000000200 new:0x6130000003c0)
00:09:20.909 passed
00:09:20.909 Test: thread_name ...passed
00:09:20.909 Test: channel ...[2024-07-12 07:19:54.694321] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2307:spdk_get_io_channel: *ERROR*: could not find io_device 0x55f3f8063c80
00:09:20.909 passed
00:09:20.909 Test: channel_destroy_races ...passed
00:09:20.909 Test: thread_exit_test ...[2024-07-12 07:19:54.701123] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 635:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully
00:09:20.909 passed
00:09:20.909 Test: thread_update_stats_test ...passed
00:09:20.909 Test: nested_channel ...passed
00:09:20.909 Test: device_unregister_and_thread_exit_race ...passed
00:09:20.909 Test: cache_closest_timed_poller ...passed
00:09:20.909 Test: multi_timed_pollers_have_same_expiration ...passed
00:09:20.909 Test: io_device_lookup ...passed
00:09:20.909 Test: spdk_spin ...[2024-07-12 07:19:54.716256] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3071:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0))
00:09:20.909 [2024-07-12 07:19:54.716599] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7ffcfa4ea500
00:09:20.909 [2024-07-12 07:19:54.717014] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3109:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0))
00:09:20.909 [2024-07-12 07:19:54.719184] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread)
00:09:20.909 [2024-07-12 07:19:54.719558] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7ffcfa4ea500
00:09:20.909 [2024-07-12 07:19:54.719876] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3092:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread)
00:09:20.909 [2024-07-12 07:19:54.720189] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7ffcfa4ea500
00:09:20.909 [2024-07-12 07:19:54.720496] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3092:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread)
00:09:20.909 [2024-07-12 07:19:54.720799] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7ffcfa4ea500
00:09:20.909 [2024-07-12 07:19:54.721094] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3053:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0))
00:09:20.909 [2024-07-12 07:19:54.721432] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7ffcfa4ea500
00:09:20.909 passed
00:09:20.909 Test: for_each_channel_and_thread_exit_race ...passed
00:09:20.909 Test: for_each_thread_and_thread_exit_race ...passed
00:09:20.909
00:09:20.909 Run Summary: Type Total Ran Passed Failed Inactive
00:09:20.909 suites 1 1 n/a 0 0
00:09:20.909 tests 20 20 20 0 0
00:09:20.909 asserts 409 409 409 0 n/a
00:09:20.909
00:09:20.909 Elapsed time = 0.058 seconds
00:09:20.909
00:09:20.909 real 0m0.120s
00:09:20.909 user 0m0.069s
00:09:20.909 sys 0m0.041s
00:09:20.909 07:19:54 unittest.unittest_thread -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:20.909 07:19:54 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x
00:09:20.909 ************************************
00:09:20.909 END TEST unittest_thread
00:09:20.909 ************************************
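The spdk_spin test above deliberately trips each fatal misuse the thread library checks for: locking from a non-SPDK thread (error 1), recursive locking on the same thread (error 2, deadlock), unlocking from a thread that does not hold the lock (error 3), and destroying a held lock (error 5). A small pthread-based analogue of those invariants, purely illustrative and not SPDK's implementation (and not race-free; the checks are reliable only for the same-thread misuse cases shown):

    #include <assert.h>
    #include <pthread.h>

    struct checked_spin {
        pthread_spinlock_t lock;
        pthread_t owner;   /* meaningful only while held */
        int held;
    };

    static void checked_spin_lock(struct checked_spin *s)
    {
        /* "Deadlock detected": the owning thread must not re-lock */
        assert(!(s->held && pthread_equal(s->owner, pthread_self())));
        pthread_spin_lock(&s->lock);
        s->owner = pthread_self();
        s->held = 1;
    }

    static void checked_spin_unlock(struct checked_spin *s)
    {
        /* "Unlock on wrong thread": only the owner may unlock */
        assert(s->held && pthread_equal(s->owner, pthread_self()));
        s->held = 0;
        pthread_spin_unlock(&s->lock);
    }

    static void checked_spin_destroy(struct checked_spin *s)
    {
        assert(!s->held);  /* "Destroying a held spinlock" */
        pthread_spin_destroy(&s->lock);
    }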
00:09:21.166 07:19:54 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut
00:09:21.166 07:19:54 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:09:21.166 07:19:54 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable
00:09:21.166 07:19:54 unittest -- common/autotest_common.sh@10 -- # set +x
00:09:21.166 ************************************
00:09:21.166 START TEST unittest_iobuf
00:09:21.166 ************************************
00:09:21.166 07:19:54 unittest.unittest_iobuf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut
00:09:21.166
00:09:21.166
00:09:21.166 CUnit - A unit testing framework for C - Version 2.1-3
00:09:21.166 http://cunit.sourceforge.net/
00:09:21.166
00:09:21.166
00:09:21.166 Suite: io_channel
00:09:21.166 Test: iobuf ...passed
00:09:21.166 Test: iobuf_cache ...[2024-07-12 07:19:54.859001] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4)
00:09:21.166 [2024-07-12 07:19:54.859528] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:09:21.166 [2024-07-12 07:19:54.859818] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4)
00:09:21.166 [2024-07-12 07:19:54.859971] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:09:21.166 [2024-07-12 07:19:54.860177] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4)
00:09:21.166 [2024-07-12 07:19:54.860330] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:09:21.166 passed
00:09:21.166
00:09:21.166 Run Summary: Type Total Ran Passed Failed Inactive
00:09:21.166 suites 1 1 n/a 0 0
00:09:21.166 tests 2 2 2 0 0
00:09:21.166 asserts 107 107 107 0 n/a
00:09:21.166
00:09:21.166 Elapsed time = 0.007 seconds
00:09:21.166
00:09:21.166 real 0m0.051s
00:09:21.166 user 0m0.036s
00:09:21.166 sys 0m0.014s
00:09:21.166 07:19:54 unittest.unittest_iobuf -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:21.166 07:19:54 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x
00:09:21.166 ************************************
00:09:21.166 END TEST unittest_iobuf
00:09:21.166 ************************************
00:09:21.166 07:19:54 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util
00:09:21.166 07:19:54 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:09:21.166 07:19:54 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable
00:09:21.166 07:19:54 unittest -- common/autotest_common.sh@10 -- # set +x
00:09:21.166 ************************************
00:09:21.166 START TEST unittest_util
00:09:21.166 ************************************
00:09:21.166 07:19:54 unittest.unittest_util -- common/autotest_common.sh@1121 -- # unittest_util
00:09:21.166 07:19:54 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut
00:09:21.166
00:09:21.166
00:09:21.166 CUnit - A unit testing framework for C - Version 2.1-3
00:09:21.166 http://cunit.sourceforge.net/
00:09:21.166
00:09:21.166
00:09:21.166 Suite: base64
00:09:21.166 Test: test_base64_get_encoded_strlen ...passed
00:09:21.166 Test: test_base64_get_decoded_len ...passed
00:09:21.166 Test: test_base64_encode ...passed
00:09:21.166 Test: test_base64_decode ...passed
00:09:21.166 Test: test_base64_urlsafe_encode ...passed
00:09:21.166 Test: test_base64_urlsafe_decode ...passed
00:09:21.166
00:09:21.166 Run Summary: Type Total Ran Passed Failed Inactive
00:09:21.166 suites 1 1 n/a 0 0
00:09:21.166 tests 6 6 6 0 0
00:09:21.166 asserts 112 112 112 0 n/a
00:09:21.166
00:09:21.166 Elapsed time = 0.000 seconds
00:09:21.166 07:19:54 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut
00:09:21.166
00:09:21.166
00:09:21.166 CUnit - A unit testing framework for C - Version 2.1-3
00:09:21.166 http://cunit.sourceforge.net/
00:09:21.166
00:09:21.166
00:09:21.166 Suite: bit_array
00:09:21.166 Test: test_1bit ...passed
00:09:21.166 Test: test_64bit ...passed
00:09:21.166 Test: test_find ...passed
00:09:21.166 Test: test_resize ...passed
00:09:21.166 Test: test_errors ...passed
00:09:21.166 Test: test_count ...passed
00:09:21.166 Test: test_mask_store_load ...passed
00:09:21.166 Test: test_mask_clear ...passed
00:09:21.166
00:09:21.166 Run Summary: Type Total Ran Passed Failed Inactive
00:09:21.166 suites 1 1 n/a 0 0
00:09:21.166 tests 8 8 8 0 0
00:09:21.166 asserts 5075 5075 5075 0 n/a
00:09:21.166
00:09:21.166 Elapsed time = 0.002 seconds
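The base64 length tests in the suite above reduce to fixed arithmetic: standard base64 emits 4 output characters for every 3 input bytes, with '=' padding on the final group. A sketch of both length formulas (general base64 math, not code taken from SPDK's base64.c):

    #include <stddef.h>

    /* Encoded string length of n raw bytes, padding included: 4 * ceil(n / 3).
     * E.g. n = 1 -> 4, n = 3 -> 4, n = 4 -> 8. */
    static size_t b64_encoded_strlen(size_t n)
    {
        return 4 * ((n + 2) / 3);
    }

    /* Upper bound on decoded length of an m-character base64 string:
     * 3 bytes per 4-character group; '=' padding only shortens the result. */
    static size_t b64_decoded_len_max(size_t m)
    {
        return 3 * (m / 4);
    }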
00:09:21.423 07:19:55 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut
00:09:21.166
00:09:21.166
00:09:21.166 CUnit - A unit testing framework for C - Version 2.1-3
00:09:21.166 http://cunit.sourceforge.net/
00:09:21.166
00:09:21.166
00:09:21.166 Suite: cpuset
00:09:21.166 Test: test_cpuset ...passed
00:09:21.166 Test: test_cpuset_parse ...[2024-07-12 07:19:55.048109] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '['
00:09:21.166 [2024-07-12 07:19:55.048650] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']'
00:09:21.166 [2024-07-12 07:19:55.048947] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-'
00:09:21.166 [2024-07-12 07:19:55.049176] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10)
00:09:21.166 [2024-07-12 07:19:55.049337] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ','
00:09:21.166 [2024-07-12 07:19:55.049482] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ','
00:09:21.166 [2024-07-12 07:19:55.049616] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]'
00:09:21.166 [2024-07-12 07:19:55.049818] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed
00:09:21.423 passed
00:09:21.423 Test: test_cpuset_fmt ...passed
00:09:21.423
00:09:21.423 Run Summary: Type Total Ran Passed Failed Inactive
00:09:21.423 suites 1 1 n/a 0 0
00:09:21.423 tests 3 3 3 0 0
00:09:21.423 asserts 65 65 65 0 n/a
00:09:21.423
00:09:21.423 Elapsed time = 0.003 seconds
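The parse_list errors above describe the core-list grammar cpuset accepts: a bracketed, comma-separated list of cores or low-high ranges, with no empty fields, low <= high, and core numbers within the supported range. An illustrative parser for that grammar; it is not SPDK's parse_list(), and the 1024-core cap below is inferred only from "Core number 1025 is out of range":

    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_CORE 1024  /* assumed: 1025 is rejected above */

    /* Accepts strings such as "[0]", "[10-11]", "[1,3-5,7]". */
    static bool parse_core_list(const char *s)
    {
        size_t len = strlen(s);
        if (len < 3 || s[0] != '[' || s[len - 1] != ']')
            return false;                    /* "[" and "[]" fail */
        const char *p = s + 1, *end = s + len - 1;
        while (p < end) {
            char *next = NULL;
            long lo = strtol(p, &next, 10);
            if (next == p)                   /* empty field: "[,10-11]" */
                return false;
            long hi = lo;
            if (*next == '-') {              /* optional "-high" part */
                p = next + 1;
                hi = strtol(p, &next, 10);
                if (next == p)
                    return false;
            }
            if (lo < 0 || hi > MAX_CORE || lo > hi)
                return false;                /* "(11 > 10)", "[1025]" */
            p = next;
            if (p == end)
                break;
            if (*p != ',')
                return false;
            p++;
            if (p == end)                    /* trailing comma: "[10-11,]" */
                return false;
        }
        return true;
    }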
00:09:21.423 07:19:55 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut
00:09:21.423
00:09:21.423
00:09:21.423 CUnit - A unit testing framework for C - Version 2.1-3
00:09:21.423 http://cunit.sourceforge.net/
00:09:21.423
00:09:21.423
00:09:21.423 Suite: crc16
00:09:21.423 Test: test_crc16_t10dif ...passed
00:09:21.423 Test: test_crc16_t10dif_seed ...passed
00:09:21.423 Test: test_crc16_t10dif_copy ...passed
00:09:21.423
00:09:21.423 Run Summary: Type Total Ran Passed Failed Inactive
00:09:21.423 suites 1 1 n/a 0 0
00:09:21.423 tests 3 3 3 0 0
00:09:21.423 asserts 5 5 5 0 n/a
00:09:21.423
00:09:21.423 Elapsed time = 0.000 seconds
00:09:21.423 07:19:55 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut
00:09:21.423
00:09:21.423
00:09:21.423 CUnit - A unit testing framework for C - Version 2.1-3
00:09:21.423 http://cunit.sourceforge.net/
00:09:21.423
00:09:21.423
00:09:21.423 Suite: crc32_ieee
00:09:21.423 Test: test_crc32_ieee ...passed
00:09:21.423
00:09:21.423 Run Summary: Type Total Ran Passed Failed Inactive
00:09:21.423 suites 1 1 n/a 0 0
00:09:21.423 tests 1 1 1 0 0
00:09:21.423 asserts 1 1 1 0 n/a
00:09:21.423
00:09:21.423 Elapsed time = 0.000 seconds
00:09:21.423 07:19:55 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut
00:09:21.423
00:09:21.423
00:09:21.423 CUnit - A unit testing framework for C - Version 2.1-3
00:09:21.423 http://cunit.sourceforge.net/
00:09:21.423
00:09:21.423
00:09:21.423 Suite: crc32c
00:09:21.423 Test: test_crc32c ...passed
00:09:21.423 Test: test_crc32c_nvme ...passed
00:09:21.423
00:09:21.423 Run Summary: Type Total Ran Passed Failed Inactive
00:09:21.423 suites 1 1 n/a 0 0
00:09:21.423 tests 2 2 2 0 0
00:09:21.423 asserts 16 16 16 0 n/a
00:09:21.423
00:09:21.423 Elapsed time = 0.000 seconds
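The crc32c suite above checks the Castagnoli CRC that NVMe uses for data digests. A minimal bitwise reference implementation (reflected polynomial 0x82F63B78, init and final XOR of 0xFFFFFFFF), handy for validating any table-driven or instruction-accelerated version; this is general CRC math, not SPDK's crc32c.c:

    #include <stdint.h>
    #include <stddef.h>

    /* Bitwise CRC-32C (Castagnoli). Check value: crc32c("123456789") == 0xE3069283. */
    static uint32_t crc32c(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
            crc ^= *p++;
            for (int k = 0; k < 8; k++)
                /* shift right; XOR the reflected polynomial if the LSB was set */
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
        return crc ^ 0xFFFFFFFFu;
    }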
00:09:21.423 07:19:55 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut
00:09:21.423
00:09:21.423
00:09:21.423 CUnit - A unit testing framework for C - Version 2.1-3
00:09:21.423 http://cunit.sourceforge.net/
00:09:21.423
00:09:21.423
00:09:21.423 Suite: crc64
00:09:21.423 Test: test_crc64_nvme ...passed
00:09:21.423
00:09:21.423 Run Summary: Type Total Ran Passed Failed Inactive
00:09:21.423 suites 1 1 n/a 0 0
00:09:21.423 tests 1 1 1 0 0
00:09:21.423 asserts 4 4 4 0 n/a
00:09:21.423
00:09:21.423 Elapsed time = 0.000 seconds
00:09:21.423 07:19:55 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut
00:09:21.423
00:09:21.423
00:09:21.423 CUnit - A unit testing framework for C - Version 2.1-3
00:09:21.423 http://cunit.sourceforge.net/
00:09:21.423
00:09:21.423
00:09:21.423 Suite: string
00:09:21.423 Test: test_parse_ip_addr ...passed
00:09:21.423 Test: test_str_chomp ...passed
00:09:21.423 Test: test_parse_capacity ...passed
00:09:21.423 Test: test_sprintf_append_realloc ...passed
00:09:21.423 Test: test_strtol ...passed
00:09:21.423 Test: test_strtoll ...passed
00:09:21.423 Test: test_strarray ...passed
00:09:21.423 Test: test_strcpy_replace ...passed
00:09:21.423
00:09:21.423 Run Summary: Type Total Ran Passed Failed Inactive
00:09:21.423 suites 1 1 n/a 0 0
00:09:21.423 tests 8 8 8 0 0
00:09:21.423 asserts 161 161 161 0 n/a
00:09:21.423
00:09:21.423 Elapsed time = 0.001 seconds
00:09:21.423 07:19:55 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut
00:09:21.423
00:09:21.423
00:09:21.423 CUnit - A unit testing framework for C - Version 2.1-3
00:09:21.423 http://cunit.sourceforge.net/
00:09:21.423
00:09:21.423
00:09:21.423 Suite: dif
00:09:21.423 Test: dif_generate_and_verify_test ...[2024-07-12 07:19:55.276813] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:09:21.423 [2024-07-12 07:19:55.277505] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:09:21.423 [2024-07-12 07:19:55.277935] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:09:21.423 [2024-07-12 07:19:55.278330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22
00:09:21.423 [2024-07-12 07:19:55.278774] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22
00:09:21.423 [2024-07-12 07:19:55.279182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22
00:09:21.423 passed
00:09:21.423 Test: dif_disable_check_test ...[2024-07-12 07:19:55.280518] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff
00:09:21.424 [2024-07-12 07:19:55.280938] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff
00:09:21.424 [2024-07-12 07:19:55.281468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff
00:09:21.424 passed
00:09:21.424 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-12 07:19:55.282847] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de
00:09:21.424 [2024-07-12 07:19:55.283320] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8
00:09:21.424 [2024-07-12 07:19:55.283782] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4
00:09:21.424 [2024-07-12 07:19:55.284277] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4
00:09:21.424 [2024-07-12 07:19:55.284734] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0
00:09:21.424 [2024-07-12 07:19:55.285151] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0
00:09:21.424 [2024-07-12 07:19:55.285671] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0
00:09:21.424 [2024-07-12 07:19:55.286101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0
00:09:21.424 [2024-07-12 07:19:55.286534] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:09:21.424 [2024-07-12 07:19:55.286996] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:09:21.424 [2024-07-12 07:19:55.287454] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:09:21.424 passed
00:09:21.424 Test: dif_apptag_mask_test ...[2024-07-12 07:19:55.288082] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234
00:09:21.424 [2024-07-12 07:19:55.288490] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234
00:09:21.424 passed
00:09:21.424 Test: dif_sec_512_md_0_error_test ...[2024-07-12 07:19:55.288948] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:09:21.424 passed
00:09:21.424 Test: dif_sec_4096_md_0_error_test ...[2024-07-12 07:19:55.289255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:09:21.424 [2024-07-12 07:19:55.289443] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
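The Guard/App Tag/Ref Tag comparisons throughout this dif suite follow the standard T10 DIF layout: an 8-byte protection field per block carrying a 16-bit CRC guard, a 16-bit application tag, and a 32-bit reference tag. A sketch of the tuple plus a bitwise CRC-16/T10-DIF (polynomial 0x8BB7, init 0, no reflection) for the guard; the field layout is the T10 standard, not code lifted from SPDK's dif.c:

    #include <stdint.h>
    #include <stddef.h>

    /* Standard T10 DIF protection information: one 8-byte tuple per block. */
    struct t10_dif_tuple {
        uint16_t guard;    /* CRC-16/T10-DIF over the block's data */
        uint16_t app_tag;  /* the "App Tag" compared in the log above */
        uint32_t ref_tag;  /* typically the low 32 bits of the LBA */
    };

    /* Bitwise CRC-16/T10-DIF. Check value: crc16_t10dif("123456789") == 0xD0DB. */
    static uint16_t crc16_t10dif(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint16_t crc = 0;

        while (len--) {
            crc ^= (uint16_t)(*p++) << 8;
            for (int k = 0; k < 8; k++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }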
00:09:21.424 passed
00:09:21.424 Test: dif_sec_4100_md_128_error_test ...[2024-07-12 07:19:55.289783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB
00:09:21.424 [2024-07-12 07:19:55.289955] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB
00:09:21.424 passed
00:09:21.424 Test: dif_guard_seed_test ...passed
00:09:21.424 Test: dif_guard_value_test ...passed
00:09:21.424 Test: dif_disable_sec_512_md_8_single_iov_test ...passed
00:09:21.424 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed
00:09:21.424 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed
00:09:21.424 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed
00:09:21.682 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed
00:09:21.682 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed
00:09:21.682 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed
00:09:21.682 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed
00:09:21.682 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed
00:09:21.682 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed
00:09:21.682 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed
00:09:21.682 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed
00:09:21.682 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed
00:09:21.682 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed
00:09:21.682 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed
00:09:21.682 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed
00:09:21.682 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed
00:09:21.682 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:09:21.682 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-12 07:19:55.339965] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd0c, Actual=fd4c
00:09:21.682 [2024-07-12 07:19:55.342578] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fe61, Actual=fe21
00:09:21.682 [2024-07-12 07:19:55.345177] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=c8
00:09:21.682 [2024-07-12 07:19:55.347787] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=c8
00:09:21.682 [2024-07-12 07:19:55.350397] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=40005e
00:09:21.682 [2024-07-12 07:19:55.352958] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=40005e
00:09:21.682 [2024-07-12 07:19:55.355560] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd4c, Actual=6cce
00:09:21.682 [2024-07-12 07:19:55.357683] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fe21, Actual=b0f3
00:09:21.682 [2024-07-12 07:19:55.359797] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1af753ed, Actual=1ab753ed
00:09:21.682 [2024-07-12 07:19:55.362386] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=38174660, Actual=38574660
00:09:21.682 [2024-07-12 07:19:55.364984] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=c8
00:09:21.682 [2024-07-12 07:19:55.367563] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=c8
00:09:21.682 [2024-07-12 07:19:55.370141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=40005e
00:09:21.682 [2024-07-12 07:19:55.372733] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=40005e
00:09:21.682 [2024-07-12 07:19:55.375333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753ed, Actual=a04c886d
00:09:21.682 [2024-07-12 07:19:55.377471] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=38574660, Actual=cfb2348c
00:09:21.682 [2024-07-12 07:19:55.379606] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728e8c20d3, Actual=a576a7728ecc20d3
00:09:21.682 [2024-07-12 07:19:55.382255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=88010a2d4877a266, Actual=88010a2d4837a266
00:09:21.682 [2024-07-12 07:19:55.384916] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=c8
00:09:21.682 [2024-07-12 07:19:55.387556] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=c8
00:09:21.682 [2024-07-12 07:19:55.390163] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=40005e
00:09:21.682 [2024-07-12 07:19:55.392742] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=40005e
00:09:21.682 [2024-07-12 07:19:55.395393] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d3, Actual=e3ffa7afacff56d7
00:09:21.682 [2024-07-12 07:19:55.397475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=88010a2d4837a266, Actual=9feb093585a73271
00:09:21.682 passed
00:09:21.682 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-12 07:19:55.398843] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c
00:09:21.682 [2024-07-12 07:19:55.399255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21
00:09:21.682 [2024-07-12 07:19:55.399666] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8
00:09:21.682 [2024-07-12 07:19:55.400082] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8
00:09:21.682 [2024-07-12 07:19:55.400528] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058
00:09:21.682 [2024-07-12 07:19:55.400936] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058
00:09:21.682 [2024-07-12 07:19:55.401375] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6cce
00:09:21.682 [2024-07-12 07:19:55.401725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=b0f3
00:09:21.682 [2024-07-12 07:19:55.402083] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed
00:09:21.682 [2024-07-12 07:19:55.402498] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38174660, Actual=38574660
00:09:21.682 [2024-07-12 07:19:55.402935] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8
00:09:21.682 [2024-07-12 07:19:55.403303] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8
00:09:21.682 [2024-07-12 07:19:55.403709] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058
00:09:21.682 [2024-07-12 07:19:55.404128] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058
00:09:21.682 [2024-07-12 07:19:55.404541] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a04c886d
00:09:21.682 [2024-07-12 07:19:55.404877] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=cfb2348c
00:09:21.682 [2024-07-12 07:19:55.405251] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728e8c20d3, Actual=a576a7728ecc20d3
00:09:21.682 [2024-07-12 07:19:55.405684] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4877a266, Actual=88010a2d4837a266
00:09:21.682 [2024-07-12 07:19:55.406119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8
00:09:21.682 [2024-07-12 07:19:55.406523] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8
00:09:21.682 [2024-07-12 07:19:55.406924] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058
00:09:21.682 [2024-07-12 07:19:55.407338] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058
00:09:21.682 [2024-07-12 07:19:55.407808] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=e3ffa7afacff56d7
00:09:21.682 [2024-07-12 07:19:55.408178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9feb093585a73271
00:09:21.682 passed
00:09:21.683 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-12 07:19:55.408758] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c
00:09:21.683 [2024-07-12 07:19:55.409181] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21
00:09:21.683 [2024-07-12 07:19:55.409601] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8
00:09:21.683 [2024-07-12 07:19:55.410013] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8
00:09:21.683 [2024-07-12 07:19:55.410440] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058
00:09:21.683 [2024-07-12 07:19:55.410845] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058
00:09:21.683 [2024-07-12 07:19:55.411244] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6cce
00:09:21.683 [2024-07-12 07:19:55.411623] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=b0f3
00:09:21.683 [2024-07-12 07:19:55.411971] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed
00:09:21.683 [2024-07-12 07:19:55.412373] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38174660, Actual=38574660
00:09:21.683 [2024-07-12 07:19:55.412781] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8
00:09:21.683 [2024-07-12 07:19:55.413190] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8
00:09:21.683 [2024-07-12 07:19:55.413611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058
00:09:21.683 [2024-07-12 07:19:55.414019] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058
00:09:21.683 [2024-07-12 07:19:55.414440] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a04c886d
00:09:21.683 [2024-07-12 07:19:55.414795] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=cfb2348c
00:09:21.683 [2024-07-12 07:19:55.415164] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728e8c20d3, Actual=a576a7728ecc20d3
00:09:21.683 [2024-07-12 07:19:55.415577] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4877a266, Actual=88010a2d4837a266
00:09:21.683 [2024-07-12 07:19:55.415999] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8
00:09:21.683 [2024-07-12 07:19:55.416422] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8
00:09:21.683 [2024-07-12 07:19:55.416848] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058
00:09:21.683 [2024-07-12 07:19:55.417253] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058
00:09:21.683 [2024-07-12 07:19:55.417702] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=e3ffa7afacff56d7
00:09:21.683 [2024-07-12 07:19:55.418052] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9feb093585a73271
00:09:21.683 passed
00:09:21.683 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-12 07:19:55.418617] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c
00:09:21.683 [2024-07-12 07:19:55.419033] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21
00:09:21.683 [2024-07-12 07:19:55.419471] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8
00:09:21.683 [2024-07-12 07:19:55.419880] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8
00:09:21.683 [2024-07-12 07:19:55.420337] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058
00:09:21.683 [2024-07-12 07:19:55.420765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058
00:09:21.683 [2024-07-12 07:19:55.421190] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6cce
00:09:21.683 [2024-07-12 07:19:55.421559] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=b0f3
00:09:21.683 [2024-07-12 07:19:55.421917] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed
00:09:21.683 [2024-07-12 07:19:55.422332] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38174660, Actual=38574660
00:09:21.683 [2024-07-12 07:19:55.422772] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8
00:09:21.683 [2024-07-12 07:19:55.423199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8
00:09:21.683 [2024-07-12 07:19:55.423625] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058
00:09:21.683 [2024-07-12 07:19:55.424034] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58,
Actual=400058 00:09:21.683 [2024-07-12 07:19:55.424451] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a04c886d 00:09:21.683 [2024-07-12 07:19:55.424809] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=cfb2348c 00:09:21.683 [2024-07-12 07:19:55.425176] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728e8c20d3, Actual=a576a7728ecc20d3 00:09:21.683 [2024-07-12 07:19:55.425640] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4877a266, Actual=88010a2d4837a266 00:09:21.683 [2024-07-12 07:19:55.426045] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.683 [2024-07-12 07:19:55.426455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.683 [2024-07-12 07:19:55.426884] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.683 [2024-07-12 07:19:55.427308] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.683 [2024-07-12 07:19:55.427754] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=e3ffa7afacff56d7 00:09:21.683 [2024-07-12 07:19:55.428125] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9feb093585a73271 00:09:21.683 passed 00:09:21.683 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-12 07:19:55.428684] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:09:21.683 [2024-07-12 07:19:55.429078] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:09:21.683 [2024-07-12 07:19:55.429510] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.683 [2024-07-12 07:19:55.429929] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.683 [2024-07-12 07:19:55.430372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.683 [2024-07-12 07:19:55.430793] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.683 [2024-07-12 07:19:55.431197] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6cce 00:09:21.683 [2024-07-12 07:19:55.431552] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=b0f3 00:09:21.683 passed 00:09:21.683 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-12 07:19:55.432120] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:09:21.683 [2024-07-12 07:19:55.432516] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38174660, Actual=38574660 00:09:21.683 [2024-07-12 07:19:55.432955] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.683 [2024-07-12 07:19:55.433377] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.683 [2024-07-12 07:19:55.433789] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.683 [2024-07-12 07:19:55.434209] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.683 [2024-07-12 07:19:55.434624] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a04c886d 00:09:21.684 [2024-07-12 07:19:55.434977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=cfb2348c 00:09:21.684 [2024-07-12 07:19:55.435396] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728e8c20d3, Actual=a576a7728ecc20d3 00:09:21.684 [2024-07-12 07:19:55.435834] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4877a266, Actual=88010a2d4837a266 00:09:21.684 [2024-07-12 07:19:55.436249] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.684 [2024-07-12 07:19:55.436655] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.684 [2024-07-12 07:19:55.437066] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.684 [2024-07-12 07:19:55.437495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.684 [2024-07-12 07:19:55.437928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=e3ffa7afacff56d7 00:09:21.684 [2024-07-12 07:19:55.438288] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9feb093585a73271 00:09:21.684 passed 00:09:21.684 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-12 07:19:55.438836] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:09:21.684 [2024-07-12 07:19:55.439230] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:09:21.684 [2024-07-12 07:19:55.439654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.684 [2024-07-12 07:19:55.440083] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.684 [2024-07-12 07:19:55.440528] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.684 [2024-07-12 07:19:55.440934] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.684 [2024-07-12 07:19:55.441362] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6cce 00:09:21.684 [2024-07-12 07:19:55.441715] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=b0f3 00:09:21.684 passed 00:09:21.684 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-12 07:19:55.442228] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:09:21.684 [2024-07-12 07:19:55.442622] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38174660, Actual=38574660 00:09:21.684 [2024-07-12 07:19:55.443048] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.684 [2024-07-12 07:19:55.443481] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.684 [2024-07-12 07:19:55.443912] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.684 [2024-07-12 07:19:55.444341] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.684 [2024-07-12 07:19:55.444753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a04c886d 00:09:21.684 [2024-07-12 07:19:55.445096] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=cfb2348c 00:09:21.684 [2024-07-12 07:19:55.445550] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728e8c20d3, Actual=a576a7728ecc20d3 00:09:21.684 [2024-07-12 07:19:55.445982] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4877a266, Actual=88010a2d4837a266 00:09:21.684 [2024-07-12 07:19:55.446407] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.684 [2024-07-12 07:19:55.446819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.684 [2024-07-12 07:19:55.447237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.684 [2024-07-12 07:19:55.447654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.684 [2024-07-12 07:19:55.448093] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=a576a7728ecc20d3, Actual=e3ffa7afacff56d7 00:09:21.684 [2024-07-12 07:19:55.448464] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9feb093585a73271 00:09:21.684 passed 00:09:21.684 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:09:21.684 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:09:21.684 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:09:21.684 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:21.684 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:09:21.684 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:09:21.684 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:09:21.684 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:09:21.684 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:21.684 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-12 07:19:55.494845] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd0c, Actual=fd4c 00:09:21.684 [2024-07-12 07:19:55.496075] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=b00, Actual=b40 00:09:21.684 [2024-07-12 07:19:55.497311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=c8 00:09:21.684 [2024-07-12 07:19:55.498518] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=c8 00:09:21.684 [2024-07-12 07:19:55.499753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=40005e 00:09:21.684 [2024-07-12 07:19:55.500964] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=40005e 00:09:21.684 [2024-07-12 07:19:55.502195] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd4c, Actual=6cce 00:09:21.684 [2024-07-12 07:19:55.503412] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=ba8c, Actual=f45e 00:09:21.684 [2024-07-12 07:19:55.504632] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1af753ed, Actual=1ab753ed 00:09:21.684 [2024-07-12 07:19:55.505860] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=9dfa294b, Actual=9dba294b 00:09:21.684 [2024-07-12 07:19:55.507080] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=c8 00:09:21.684 [2024-07-12 07:19:55.508332] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=c8 00:09:21.684 [2024-07-12 07:19:55.509564] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=40005e 00:09:21.684 [2024-07-12 07:19:55.510780] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=40005e 00:09:21.684 [2024-07-12 
07:19:55.511996] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753ed, Actual=a04c886d 00:09:21.684 [2024-07-12 07:19:55.513212] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=c91054db, Actual=3ef52637 00:09:21.684 [2024-07-12 07:19:55.514437] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728e8c20d3, Actual=a576a7728ecc20d3 00:09:21.684 [2024-07-12 07:19:55.515709] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a95234708b8c956c, Actual=a95234708bcc956c 00:09:21.684 [2024-07-12 07:19:55.516924] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=c8 00:09:21.684 [2024-07-12 07:19:55.518098] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=c8 00:09:21.684 [2024-07-12 07:19:55.518930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=40005e 00:09:21.684 [2024-07-12 07:19:55.519773] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=40005e 00:09:21.684 [2024-07-12 07:19:55.520605] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d3, Actual=e3ffa7afacff56d7 00:09:21.684 [2024-07-12 07:19:55.521428] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=2d19b1684f09bf67, Actual=3af3b27082992f70 00:09:21.684 passed 00:09:21.685 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-12 07:19:55.521821] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:09:21.685 [2024-07-12 07:19:55.522115] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=9c9a, Actual=9cda 00:09:21.685 [2024-07-12 07:19:55.522375] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.685 [2024-07-12 07:19:55.522630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.685 [2024-07-12 07:19:55.522903] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.685 [2024-07-12 07:19:55.523188] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.685 [2024-07-12 07:19:55.523453] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6cce 00:09:21.685 [2024-07-12 07:19:55.523665] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=63c4 00:09:21.685 [2024-07-12 07:19:55.523860] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:09:21.685 [2024-07-12 07:19:55.524064] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be4c3d3c, Actual=be0c3d3c 00:09:21.685 [2024-07-12 07:19:55.524276] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.685 [2024-07-12 07:19:55.524480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.685 [2024-07-12 07:19:55.524683] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.685 [2024-07-12 07:19:55.524877] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.685 [2024-07-12 07:19:55.525048] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a04c886d 00:09:21.685 [2024-07-12 07:19:55.525286] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=1d433240 00:09:21.685 [2024-07-12 07:19:55.525572] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728e8c20d3, Actual=a576a7728ecc20d3 00:09:21.685 [2024-07-12 07:19:55.525815] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bc2daf03925f1aa9, Actual=bc2daf03921f1aa9 00:09:21.685 [2024-07-12 07:19:55.526079] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.685 [2024-07-12 07:19:55.526322] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.685 [2024-07-12 07:19:55.526572] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.685 [2024-07-12 07:19:55.526792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.685 [2024-07-12 07:19:55.527059] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=e3ffa7afacff56d7 00:09:21.685 [2024-07-12 07:19:55.527316] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=2f8c29039b4aa0b5 00:09:21.685 passed 00:09:21.685 Test: dix_sec_512_md_0_error ...[2024-07-12 07:19:55.527576] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
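
The *ERROR* lines above are expected output, not failures: these DIF unit tests deliberately corrupt the per-block protection tuple — the guard (a CRC computed over the data block), the 16-bit application tag, and the reference tag derived from the LBA — and then assert that verification reports each mismatch; the dix_sec_512_md_0_error case likewise asserts that context setup rejects metadata smaller than the DIF field itself. A minimal, self-contained sketch of that compare-and-report pattern follows; every name in it (example_dif, example_dif_verify) is a hypothetical illustration of the concept, not SPDK's actual API or implementation.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical 8-byte DIF-style tuple guarding one data block: a guard
     * (CRC computed over the block's data), a 16-bit application tag, and a
     * reference tag normally derived from the block's LBA. */
    struct example_dif {
        uint16_t guard;
        uint16_t app_tag;
        uint32_t ref_tag;
    };

    /* Compare the tuple recomputed from the data against the tuple stored in
     * the block's metadata, reporting mismatches in the same shape as the
     * log lines above. Returns false if any field differs. */
    static bool
    example_dif_verify(uint64_t lba, const struct example_dif *expected,
                       const struct example_dif *actual)
    {
        bool ok = true;

        if (expected->guard != actual->guard) {
            fprintf(stderr, "Failed to compare Guard: LBA=%ju, Expected=%x, Actual=%x\n",
                    (uintmax_t)lba, expected->guard, actual->guard);
            ok = false;
        }
        if (expected->app_tag != actual->app_tag) {
            fprintf(stderr, "Failed to compare App Tag: LBA=%ju, Expected=%x, Actual=%x\n",
                    (uintmax_t)lba, expected->app_tag, actual->app_tag);
            ok = false;
        }
        if (expected->ref_tag != actual->ref_tag) {
            fprintf(stderr, "Failed to compare Ref Tag: LBA=%ju, Expected=%x, Actual=%x\n",
                    (uintmax_t)lba, expected->ref_tag, actual->ref_tag);
            ok = false;
        }
        return ok;
    }

A negative test then flips bits in one field of the stored tuple and asserts example_dif_verify() returns false — which is exactly the Expected/Actual churn visible in the records above.
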
00:09:21.685 passed 00:09:21.685 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:09:21.685 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:09:21.685 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:09:21.685 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:09:21.685 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:09:21.685 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:09:21.685 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:09:21.685 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:09:21.685 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:09:21.685 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-12 07:19:55.558220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd0c, Actual=fd4c 00:09:21.685 [2024-07-12 07:19:55.559023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=b00, Actual=b40 00:09:21.685 [2024-07-12 07:19:55.559843] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=c8 00:09:21.685 [2024-07-12 07:19:55.560667] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=c8 00:09:21.685 [2024-07-12 07:19:55.561482] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=40005e 00:09:21.685 [2024-07-12 07:19:55.562330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=40005e 00:09:21.685 [2024-07-12 07:19:55.563142] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd4c, Actual=6cce 00:09:21.685 [2024-07-12 07:19:55.564045] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=ba8c, Actual=f45e 00:09:21.685 [2024-07-12 07:19:55.564903] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1af753ed, Actual=1ab753ed 00:09:21.942 [2024-07-12 07:19:55.565747] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=9dfa294b, Actual=9dba294b 00:09:21.942 [2024-07-12 07:19:55.566594] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=c8 00:09:21.943 [2024-07-12 07:19:55.567350] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=c8 00:09:21.943 [2024-07-12 07:19:55.568202] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=40005e 00:09:21.943 [2024-07-12 07:19:55.569004] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=40005e 00:09:21.943 [2024-07-12 07:19:55.569834] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753ed, Actual=a04c886d 00:09:21.943 [2024-07-12 07:19:55.570660] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, 
Expected=c91054db, Actual=3ef52637 00:09:21.943 [2024-07-12 07:19:55.571535] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728e8c20d3, Actual=a576a7728ecc20d3 00:09:21.943 [2024-07-12 07:19:55.572417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a95234708b8c956c, Actual=a95234708bcc956c 00:09:21.943 [2024-07-12 07:19:55.573235] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=c8 00:09:21.943 [2024-07-12 07:19:55.574075] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=c8 00:09:21.943 [2024-07-12 07:19:55.574886] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=40005e 00:09:21.943 [2024-07-12 07:19:55.575690] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=40005e 00:09:21.943 [2024-07-12 07:19:55.576537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d3, Actual=e3ffa7afacff56d7 00:09:21.943 [2024-07-12 07:19:55.577366] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=2d19b1684f09bf67, Actual=3af3b27082992f70 00:09:21.943 passed 00:09:21.943 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-12 07:19:55.577863] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:09:21.943 [2024-07-12 07:19:55.578123] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=9c9a, Actual=9cda 00:09:21.943 [2024-07-12 07:19:55.578364] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.943 [2024-07-12 07:19:55.578634] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.943 [2024-07-12 07:19:55.578880] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.943 [2024-07-12 07:19:55.579118] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.943 [2024-07-12 07:19:55.579355] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6cce 00:09:21.943 [2024-07-12 07:19:55.579624] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=63c4 00:09:21.943 [2024-07-12 07:19:55.579878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:09:21.943 [2024-07-12 07:19:55.580131] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be4c3d3c, Actual=be0c3d3c 00:09:21.943 [2024-07-12 07:19:55.580388] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.943 
[2024-07-12 07:19:55.580645] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.943 [2024-07-12 07:19:55.580871] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.943 [2024-07-12 07:19:55.581113] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.943 [2024-07-12 07:19:55.581361] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=a04c886d 00:09:21.943 [2024-07-12 07:19:55.581627] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=1d433240 00:09:21.943 [2024-07-12 07:19:55.581877] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728e8c20d3, Actual=a576a7728ecc20d3 00:09:21.943 [2024-07-12 07:19:55.582120] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bc2daf03925f1aa9, Actual=bc2daf03921f1aa9 00:09:21.943 [2024-07-12 07:19:55.582375] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.943 [2024-07-12 07:19:55.582625] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:09:21.943 [2024-07-12 07:19:55.582860] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.943 [2024-07-12 07:19:55.583111] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:09:21.943 [2024-07-12 07:19:55.583352] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=e3ffa7afacff56d7 00:09:21.943 [2024-07-12 07:19:55.583616] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=2f8c29039b4aa0b5 00:09:21.943 passed 00:09:21.943 Test: set_md_interleave_iovs_test ...passed 00:09:21.943 Test: set_md_interleave_iovs_split_test ...passed 00:09:21.943 Test: dif_generate_stream_pi_16_test ...passed 00:09:21.943 Test: dif_generate_stream_test ...passed 00:09:21.943 Test: set_md_interleave_iovs_alignment_test ...[2024-07-12 07:19:55.589457] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:09:21.943 passed 00:09:21.943 Test: dif_generate_split_test ...passed 00:09:21.943 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:09:21.943 Test: dif_verify_split_test ...passed 00:09:21.943 Test: dif_verify_stream_multi_segments_test ...passed 00:09:21.943 Test: update_crc32c_pi_16_test ...passed 00:09:21.943 Test: update_crc32c_test ...passed 00:09:21.943 Test: dif_update_crc32c_split_test ...passed 00:09:21.943 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:09:21.943 Test: get_range_with_md_test ...passed 00:09:21.943 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:09:21.943 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:09:21.943 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:09:21.943 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:09:21.943 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:09:21.943 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:09:21.943 Test: dif_generate_and_verify_unmap_test ...passed 00:09:21.943 00:09:21.943 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.943 suites 1 1 n/a 0 0 00:09:21.943 tests 79 79 79 0 0 00:09:21.943 asserts 3584 3584 3584 0 n/a 00:09:21.943 00:09:21.943 Elapsed time = 0.314 seconds 00:09:21.943 07:19:55 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:09:21.943 00:09:21.943 00:09:21.943 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.943 http://cunit.sourceforge.net/ 00:09:21.943 00:09:21.943 00:09:21.943 Suite: iov 00:09:21.943 Test: test_single_iov ...passed 00:09:21.943 Test: test_simple_iov ...passed 00:09:21.943 Test: test_complex_iov ...passed 00:09:21.943 Test: test_iovs_to_buf ...passed 00:09:21.943 Test: test_buf_to_iovs ...passed 00:09:21.943 Test: test_memset ...passed 00:09:21.943 Test: test_iov_one ...passed 00:09:21.943 Test: test_iov_xfer ...passed 00:09:21.943 00:09:21.943 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.943 suites 1 1 n/a 0 0 00:09:21.943 tests 8 8 8 0 0 00:09:21.943 asserts 156 156 156 0 n/a 00:09:21.943 00:09:21.943 Elapsed time = 0.001 seconds 00:09:21.943 07:19:55 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:09:21.943 00:09:21.943 00:09:21.943 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.943 http://cunit.sourceforge.net/ 00:09:21.943 00:09:21.943 00:09:21.943 Suite: math 00:09:21.943 Test: test_serial_number_arithmetic ...passed 00:09:21.943 Suite: erase 00:09:21.943 Test: test_memset_s ...passed 00:09:21.943 00:09:21.943 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.943 suites 2 2 n/a 0 0 00:09:21.943 tests 2 2 2 0 0 00:09:21.943 asserts 18 18 18 0 n/a 00:09:21.943 00:09:21.943 Elapsed time = 0.000 seconds 00:09:21.943 07:19:55 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:09:21.943 00:09:21.943 00:09:21.943 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.943 http://cunit.sourceforge.net/ 00:09:21.943 00:09:21.943 00:09:21.943 Suite: pipe 00:09:21.943 Test: test_create_destroy ...passed 00:09:21.943 Test: test_write_get_buffer ...passed 00:09:21.943 Test: test_write_advance ...passed 00:09:21.943 Test: test_read_get_buffer ...passed 00:09:21.943 Test: test_read_advance ...passed 00:09:21.943 Test: 
test_data ...passed 00:09:21.943 00:09:21.943 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.943 suites 1 1 n/a 0 0 00:09:21.943 tests 6 6 6 0 0 00:09:21.943 asserts 251 251 251 0 n/a 00:09:21.943 00:09:21.943 Elapsed time = 0.000 seconds 00:09:21.943 07:19:55 unittest.unittest_util -- unit/unittest.sh@146 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:09:21.943 00:09:21.943 00:09:21.943 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.943 http://cunit.sourceforge.net/ 00:09:21.943 00:09:21.943 00:09:21.943 Suite: xor 00:09:21.943 Test: test_xor_gen ...passed 00:09:21.943 00:09:21.943 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.943 suites 1 1 n/a 0 0 00:09:21.943 tests 1 1 1 0 0 00:09:21.943 asserts 17 17 17 0 n/a 00:09:21.943 00:09:21.943 Elapsed time = 0.005 seconds 00:09:21.943 00:09:21.943 real 0m0.838s 00:09:21.943 user 0m0.557s 00:09:21.943 sys 0m0.242s 00:09:21.943 07:19:55 unittest.unittest_util -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:21.943 07:19:55 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:09:21.943 ************************************ 00:09:21.943 END TEST unittest_util 00:09:21.943 ************************************ 00:09:22.202 07:19:55 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:09:22.202 07:19:55 unittest -- unit/unittest.sh@285 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:09:22.202 07:19:55 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:22.202 07:19:55 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:22.202 07:19:55 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:22.202 ************************************ 00:09:22.202 START TEST unittest_vhost 00:09:22.202 ************************************ 00:09:22.202 07:19:55 unittest.unittest_vhost -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:09:22.202 00:09:22.202 00:09:22.202 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.202 http://cunit.sourceforge.net/ 00:09:22.202 00:09:22.202 00:09:22.202 Suite: vhost_suite 00:09:22.202 Test: desc_to_iov_test ...[2024-07-12 07:19:55.890158] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:09:22.202 passed 00:09:22.202 Test: create_controller_test ...[2024-07-12 07:19:55.893575] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:09:22.202 [2024-07-12 07:19:55.893787] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:09:22.202 [2024-07-12 07:19:55.893917] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:09:22.202 [2024-07-12 07:19:55.894054] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:09:22.202 [2024-07-12 07:19:55.894220] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:09:22.202 [2024-07-12 07:19:55.894558] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1781:vhost_user_dev_init: *ERROR*: Resulting socket path for 
controller is too long: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 00:09:22.202 [2024-07-12 07:19:55.895402] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 137:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:09:22.202 passed 00:09:22.202 Test: session_find_by_vid_test ...passed 00:09:22.202 Test: remove_controller_test ...[2024-07-12 07:19:55.897261] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1866:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:09:22.202 passed 00:09:22.202 Test: vq_avail_ring_get_test ...passed 00:09:22.202 Test: vq_packed_ring_test ...passed 00:09:22.202 Test: vhost_blk_construct_test ...passed 00:09:22.202 00:09:22.202 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.202 suites 1 1 n/a 0 0 00:09:22.202 tests 7 7 7 0 0 00:09:22.202 asserts 147 147 147 0 n/a 00:09:22.202 00:09:22.202 Elapsed time = 0.009 seconds 00:09:22.202 00:09:22.202 real 0m0.059s 00:09:22.202 user 0m0.027s 00:09:22.202 sys 0m0.030s 00:09:22.202 07:19:55 unittest.unittest_vhost -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:22.202 07:19:55 unittest.unittest_vhost -- common/autotest_common.sh@10 -- # set +x 00:09:22.202 ************************************ 00:09:22.202 END TEST unittest_vhost 00:09:22.202 ************************************ 00:09:22.202 07:19:55 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:09:22.202 07:19:55 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:22.202 07:19:55 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:22.202 07:19:55 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:22.202 ************************************ 00:09:22.202 START TEST unittest_dma 00:09:22.202 ************************************ 00:09:22.202 07:19:55 unittest.unittest_dma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:09:22.202 00:09:22.202 00:09:22.202 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.202 http://cunit.sourceforge.net/ 00:09:22.202 00:09:22.202 00:09:22.202 Suite: dma_suite 00:09:22.202 Test: test_dma ...[2024-07-12 07:19:56.004561] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:09:22.202 passed 00:09:22.202 00:09:22.202 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.202 suites 1 1 n/a 0 0 00:09:22.202 tests 1 1 1 0 0 00:09:22.202 asserts 54 54 54 0 n/a 00:09:22.202 00:09:22.202 Elapsed time = 0.001 seconds 00:09:22.202 00:09:22.202 real 0m0.036s 00:09:22.202 user 0m0.021s 00:09:22.202 sys 0m0.015s 00:09:22.203 07:19:56 unittest.unittest_dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:22.203 ************************************ 00:09:22.203 END TEST unittest_dma 00:09:22.203 ************************************ 00:09:22.203 07:19:56 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 
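
The unittest_dma run above is the same negative-test pattern in miniature: spdk_memory_domain_create() must refuse a zero-sized user context, producing the "Context size can't be 0" error record before the test reports passed. A hedged CUnit-style sketch of that validate-and-assert shape follows; example_domain_create() and the test name are hypothetical stand-ins under that assumption, not the real SPDK function or its signature.

    #include <errno.h>
    #include <stddef.h>
    #include <CUnit/Basic.h>

    /* Hypothetical stand-in for the checked call: reject a zero-sized user
     * context, as the "Context size can't be 0" record above shows the real
     * library call doing. */
    static int
    example_domain_create(void **domain, size_t ctx_size)
    {
        if (ctx_size == 0) {
            return -EINVAL; /* the library also logs an *ERROR* line here */
        }
        *domain = NULL; /* real allocation elided in this sketch */
        return 0;
    }

    static void
    test_domain_rejects_zero_ctx(void)
    {
        void *domain = NULL;

        /* Invalid input must fail with a distinct error code... */
        CU_ASSERT(example_domain_create(&domain, 0) == -EINVAL);
        /* ...while a sane size succeeds. */
        CU_ASSERT(example_domain_create(&domain, 64) == 0);
    }
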
00:09:22.203 07:19:56 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:09:22.203 07:19:56 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:22.203 07:19:56 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:22.203 07:19:56 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:22.203 ************************************ 00:09:22.203 START TEST unittest_init 00:09:22.203 ************************************ 00:09:22.203 07:19:56 unittest.unittest_init -- common/autotest_common.sh@1121 -- # unittest_init 00:09:22.203 07:19:56 unittest.unittest_init -- unit/unittest.sh@150 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:09:22.460 00:09:22.460 00:09:22.460 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.460 http://cunit.sourceforge.net/ 00:09:22.460 00:09:22.460 00:09:22.460 Suite: subsystem_suite 00:09:22.460 Test: subsystem_sort_test_depends_on_single ...passed 00:09:22.460 Test: subsystem_sort_test_depends_on_multiple ...passed 00:09:22.460 Test: subsystem_sort_test_missing_dependency ...[2024-07-12 07:19:56.103545] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:09:22.460 passed 00:09:22.460 00:09:22.460 [2024-07-12 07:19:56.103930] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:09:22.460 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.460 suites 1 1 n/a 0 0 00:09:22.460 tests 3 3 3 0 0 00:09:22.460 asserts 20 20 20 0 n/a 00:09:22.460 00:09:22.460 Elapsed time = 0.001 seconds 00:09:22.460 00:09:22.460 real 0m0.043s 00:09:22.460 user 0m0.021s 00:09:22.460 sys 0m0.022s 00:09:22.460 07:19:56 unittest.unittest_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:22.461 ************************************ 00:09:22.461 END TEST unittest_init 00:09:22.461 ************************************ 00:09:22.461 07:19:56 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:09:22.461 07:19:56 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:09:22.461 07:19:56 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:22.461 07:19:56 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:22.461 07:19:56 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:22.461 ************************************ 00:09:22.461 START TEST unittest_keyring 00:09:22.461 ************************************ 00:09:22.461 07:19:56 unittest.unittest_keyring -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:09:22.461 00:09:22.461 00:09:22.461 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.461 http://cunit.sourceforge.net/ 00:09:22.461 00:09:22.461 00:09:22.461 Suite: keyring 00:09:22.461 Test: test_keyring_add_remove ...[2024-07-12 07:19:56.201810] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:09:22.461 [2024-07-12 07:19:56.202579] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:09:22.461 [2024-07-12 07:19:56.202793] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:09:22.461 passed 00:09:22.461 Test: 
test_keyring_get_put ...passed 00:09:22.461 00:09:22.461 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.461 suites 1 1 n/a 0 0 00:09:22.461 tests 2 2 2 0 0 00:09:22.461 asserts 44 44 44 0 n/a 00:09:22.461 00:09:22.461 Elapsed time = 0.001 seconds 00:09:22.461 00:09:22.461 real 0m0.038s 00:09:22.461 user 0m0.026s 00:09:22.461 sys 0m0.011s 00:09:22.461 07:19:56 unittest.unittest_keyring -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:22.461 07:19:56 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:09:22.461 ************************************ 00:09:22.461 END TEST unittest_keyring 00:09:22.461 ************************************ 00:09:22.461 07:19:56 unittest -- unit/unittest.sh@292 -- # '[' yes = yes ']' 00:09:22.461 07:19:56 unittest -- unit/unittest.sh@292 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:09:22.461 07:19:56 unittest -- unit/unittest.sh@293 -- # hostname 00:09:22.461 07:19:56 unittest -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:09:22.719 geninfo: WARNING: invalid characters removed from testname! 00:09:54.803 07:20:23 unittest -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:09:54.803 07:20:28 unittest -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:57.337 07:20:30 unittest -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:59.869 07:20:33 unittest -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:02.401 07:20:35 unittest -- unit/unittest.sh@298 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:04.934 07:20:38 
unittest -- unit/unittest.sh@299 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:07.498 07:20:40 unittest -- unit/unittest.sh@300 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:09.401 07:20:43 unittest -- unit/unittest.sh@301 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:10:09.401 07:20:43 unittest -- unit/unittest.sh@302 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:10:09.967 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:10:09.967 Found 321 entries. 00:10:09.967 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:10:09.967 Writing .css and .png files. 00:10:09.967 Generating output. 00:10:09.967 Processing file include/linux/virtio_ring.h 00:10:10.225 Processing file include/spdk/trace.h 00:10:10.225 Processing file include/spdk/nvmf_transport.h 00:10:10.225 Processing file include/spdk/bdev_module.h 00:10:10.225 Processing file include/spdk/mmio.h 00:10:10.225 Processing file include/spdk/thread.h 00:10:10.225 Processing file include/spdk/base64.h 00:10:10.225 Processing file include/spdk/histogram_data.h 00:10:10.225 Processing file include/spdk/endian.h 00:10:10.225 Processing file include/spdk/nvme_spec.h 00:10:10.225 Processing file include/spdk/nvme.h 00:10:10.225 Processing file include/spdk/util.h 00:10:10.225 Processing file include/spdk_internal/nvme_tcp.h 00:10:10.225 Processing file include/spdk_internal/rdma.h 00:10:10.225 Processing file include/spdk_internal/sock.h 00:10:10.225 Processing file include/spdk_internal/sgl.h 00:10:10.225 Processing file include/spdk_internal/virtio.h 00:10:10.225 Processing file include/spdk_internal/utf.h 00:10:10.482 Processing file lib/accel/accel_sw.c 00:10:10.482 Processing file lib/accel/accel_rpc.c 00:10:10.482 Processing file lib/accel/accel.c 00:10:10.740 Processing file lib/bdev/scsi_nvme.c 00:10:10.740 Processing file lib/bdev/bdev_zone.c 00:10:10.740 Processing file lib/bdev/part.c 00:10:10.740 Processing file lib/bdev/bdev.c 00:10:10.740 Processing file lib/bdev/bdev_rpc.c 00:10:10.999 Processing file lib/blob/blobstore.h 00:10:10.999 Processing file lib/blob/request.c 00:10:10.999 Processing file lib/blob/zeroes.c 00:10:10.999 Processing file lib/blob/blob_bs_dev.c 00:10:10.999 Processing file lib/blob/blobstore.c 00:10:10.999 Processing file lib/blobfs/blobfs.c 00:10:10.999 Processing file lib/blobfs/tree.c 00:10:10.999 Processing file lib/conf/conf.c 00:10:10.999 Processing file lib/dma/dma.c 00:10:11.257 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:10:11.257 Processing file lib/env_dpdk/env.c 00:10:11.257 Processing file lib/env_dpdk/sigbus_handler.c 00:10:11.257 Processing file 
lib/env_dpdk/init.c 00:10:11.257 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:10:11.257 Processing file lib/env_dpdk/threads.c 00:10:11.257 Processing file lib/env_dpdk/pci.c 00:10:11.257 Processing file lib/env_dpdk/memory.c 00:10:11.257 Processing file lib/env_dpdk/pci_virtio.c 00:10:11.257 Processing file lib/env_dpdk/pci_vmd.c 00:10:11.257 Processing file lib/env_dpdk/pci_ioat.c 00:10:11.257 Processing file lib/env_dpdk/pci_event.c 00:10:11.257 Processing file lib/env_dpdk/pci_dpdk.c 00:10:11.257 Processing file lib/env_dpdk/pci_idxd.c 00:10:11.516 Processing file lib/event/scheduler_static.c 00:10:11.516 Processing file lib/event/app_rpc.c 00:10:11.516 Processing file lib/event/reactor.c 00:10:11.516 Processing file lib/event/app.c 00:10:11.516 Processing file lib/event/log_rpc.c 00:10:11.775 Processing file lib/ftl/ftl_nv_cache_io.h 00:10:11.775 Processing file lib/ftl/ftl_nv_cache.c 00:10:11.775 Processing file lib/ftl/ftl_writer.h 00:10:11.775 Processing file lib/ftl/ftl_reloc.c 00:10:11.775 Processing file lib/ftl/ftl_core.c 00:10:11.775 Processing file lib/ftl/ftl_p2l.c 00:10:11.775 Processing file lib/ftl/ftl_band.h 00:10:11.775 Processing file lib/ftl/ftl_core.h 00:10:11.775 Processing file lib/ftl/ftl_writer.c 00:10:11.775 Processing file lib/ftl/ftl_l2p_flat.c 00:10:11.775 Processing file lib/ftl/ftl_layout.c 00:10:11.775 Processing file lib/ftl/ftl_l2p.c 00:10:11.775 Processing file lib/ftl/ftl_io.c 00:10:11.775 Processing file lib/ftl/ftl_nv_cache.h 00:10:11.775 Processing file lib/ftl/ftl_io.h 00:10:11.775 Processing file lib/ftl/ftl_debug.c 00:10:11.775 Processing file lib/ftl/ftl_l2p_cache.c 00:10:11.775 Processing file lib/ftl/ftl_init.c 00:10:11.775 Processing file lib/ftl/ftl_band.c 00:10:11.775 Processing file lib/ftl/ftl_debug.h 00:10:11.775 Processing file lib/ftl/ftl_rq.c 00:10:11.775 Processing file lib/ftl/ftl_band_ops.c 00:10:11.775 Processing file lib/ftl/ftl_sb.c 00:10:11.775 Processing file lib/ftl/ftl_trace.c 00:10:11.775 Processing file lib/ftl/base/ftl_base_bdev.c 00:10:11.775 Processing file lib/ftl/base/ftl_base_dev.c 00:10:12.034 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:10:12.034 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:10:12.034 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:10:12.034 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:10:12.034 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:10:12.034 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:10:12.034 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:10:12.034 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:10:12.034 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:10:12.034 Processing file lib/ftl/mngt/ftl_mngt.c 00:10:12.034 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:10:12.034 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:10:12.034 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:10:12.293 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:10:12.293 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:10:12.293 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:10:12.293 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:10:12.293 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:10:12.293 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:10:12.293 Processing file lib/ftl/upgrade/ftl_band_upgrade.c 00:10:12.293 Processing file lib/ftl/upgrade/ftl_p2l_upgrade.c 00:10:12.293 Processing file lib/ftl/upgrade/ftl_chunk_upgrade.c 00:10:12.293 Processing file lib/ftl/upgrade/ftl_trim_upgrade.c 00:10:12.553 Processing file lib/ftl/utils/ftl_property.h 00:10:12.553 Processing 
file lib/ftl/utils/ftl_md.c 00:10:12.553 Processing file lib/ftl/utils/ftl_bitmap.c 00:10:12.553 Processing file lib/ftl/utils/ftl_df.h 00:10:12.553 Processing file lib/ftl/utils/ftl_property.c 00:10:12.553 Processing file lib/ftl/utils/ftl_addr_utils.h 00:10:12.553 Processing file lib/ftl/utils/ftl_mempool.c 00:10:12.553 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:10:12.553 Processing file lib/ftl/utils/ftl_conf.c 00:10:12.553 Processing file lib/idxd/idxd_user.c 00:10:12.553 Processing file lib/idxd/idxd_internal.h 00:10:12.553 Processing file lib/idxd/idxd.c 00:10:12.553 Processing file lib/init/rpc.c 00:10:12.553 Processing file lib/init/subsystem_rpc.c 00:10:12.553 Processing file lib/init/json_config.c 00:10:12.553 Processing file lib/init/subsystem.c 00:10:12.811 Processing file lib/ioat/ioat.c 00:10:12.811 Processing file lib/ioat/ioat_internal.h 00:10:13.070 Processing file lib/iscsi/iscsi_subsystem.c 00:10:13.070 Processing file lib/iscsi/md5.c 00:10:13.070 Processing file lib/iscsi/task.h 00:10:13.070 Processing file lib/iscsi/portal_grp.c 00:10:13.070 Processing file lib/iscsi/param.c 00:10:13.070 Processing file lib/iscsi/iscsi.h 00:10:13.070 Processing file lib/iscsi/task.c 00:10:13.070 Processing file lib/iscsi/init_grp.c 00:10:13.070 Processing file lib/iscsi/iscsi_rpc.c 00:10:13.070 Processing file lib/iscsi/iscsi.c 00:10:13.070 Processing file lib/iscsi/tgt_node.c 00:10:13.070 Processing file lib/iscsi/conn.c 00:10:13.070 Processing file lib/json/json_parse.c 00:10:13.070 Processing file lib/json/json_util.c 00:10:13.070 Processing file lib/json/json_write.c 00:10:13.070 Processing file lib/jsonrpc/jsonrpc_server.c 00:10:13.070 Processing file lib/jsonrpc/jsonrpc_client.c 00:10:13.070 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:10:13.070 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:10:13.070 Processing file lib/keyring/keyring_rpc.c 00:10:13.070 Processing file lib/keyring/keyring.c 00:10:13.329 Processing file lib/log/log.c 00:10:13.329 Processing file lib/log/log_deprecated.c 00:10:13.329 Processing file lib/log/log_flags.c 00:10:13.329 Processing file lib/lvol/lvol.c 00:10:13.329 Processing file lib/nbd/nbd_rpc.c 00:10:13.329 Processing file lib/nbd/nbd.c 00:10:13.587 Processing file lib/notify/notify_rpc.c 00:10:13.587 Processing file lib/notify/notify.c 00:10:14.155 Processing file lib/nvme/nvme_pcie.c 00:10:14.155 Processing file lib/nvme/nvme_discovery.c 00:10:14.155 Processing file lib/nvme/nvme.c 00:10:14.155 Processing file lib/nvme/nvme_poll_group.c 00:10:14.155 Processing file lib/nvme/nvme_rdma.c 00:10:14.155 Processing file lib/nvme/nvme_auth.c 00:10:14.155 Processing file lib/nvme/nvme_pcie_internal.h 00:10:14.155 Processing file lib/nvme/nvme_internal.h 00:10:14.155 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:10:14.155 Processing file lib/nvme/nvme_pcie_common.c 00:10:14.155 Processing file lib/nvme/nvme_qpair.c 00:10:14.155 Processing file lib/nvme/nvme_io_msg.c 00:10:14.155 Processing file lib/nvme/nvme_quirks.c 00:10:14.155 Processing file lib/nvme/nvme_ns.c 00:10:14.155 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:10:14.155 Processing file lib/nvme/nvme_ns_cmd.c 00:10:14.155 Processing file lib/nvme/nvme_opal.c 00:10:14.155 Processing file lib/nvme/nvme_ctrlr.c 00:10:14.155 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:10:14.155 Processing file lib/nvme/nvme_transport.c 00:10:14.155 Processing file lib/nvme/nvme_cuse.c 00:10:14.155 Processing file lib/nvme/nvme_tcp.c 00:10:14.155 Processing file 
lib/nvme/nvme_zns.c 00:10:14.155 Processing file lib/nvme/nvme_fabric.c 00:10:14.413 Processing file lib/nvmf/auth.c 00:10:14.413 Processing file lib/nvmf/nvmf_rpc.c 00:10:14.413 Processing file lib/nvmf/tcp.c 00:10:14.413 Processing file lib/nvmf/ctrlr_discovery.c 00:10:14.413 Processing file lib/nvmf/transport.c 00:10:14.413 Processing file lib/nvmf/nvmf_internal.h 00:10:14.413 Processing file lib/nvmf/nvmf.c 00:10:14.413 Processing file lib/nvmf/ctrlr_bdev.c 00:10:14.413 Processing file lib/nvmf/rdma.c 00:10:14.413 Processing file lib/nvmf/ctrlr.c 00:10:14.413 Processing file lib/nvmf/subsystem.c 00:10:14.413 Processing file lib/rdma/common.c 00:10:14.413 Processing file lib/rdma/rdma_verbs.c 00:10:14.413 Processing file lib/rpc/rpc.c 00:10:14.670 Processing file lib/scsi/dev.c 00:10:14.670 Processing file lib/scsi/lun.c 00:10:14.670 Processing file lib/scsi/scsi.c 00:10:14.670 Processing file lib/scsi/scsi_bdev.c 00:10:14.670 Processing file lib/scsi/port.c 00:10:14.671 Processing file lib/scsi/scsi_pr.c 00:10:14.671 Processing file lib/scsi/task.c 00:10:14.671 Processing file lib/scsi/scsi_rpc.c 00:10:14.671 Processing file lib/sock/sock.c 00:10:14.671 Processing file lib/sock/sock_rpc.c 00:10:14.671 Processing file lib/thread/thread.c 00:10:14.671 Processing file lib/thread/iobuf.c 00:10:14.929 Processing file lib/trace/trace_flags.c 00:10:14.929 Processing file lib/trace/trace_rpc.c 00:10:14.929 Processing file lib/trace/trace.c 00:10:14.929 Processing file lib/trace_parser/trace.cpp 00:10:14.929 Processing file lib/ut/ut.c 00:10:14.929 Processing file lib/ut_mock/mock.c 00:10:15.187 Processing file lib/util/bit_array.c 00:10:15.187 Processing file lib/util/crc32.c 00:10:15.187 Processing file lib/util/string.c 00:10:15.187 Processing file lib/util/hexlify.c 00:10:15.187 Processing file lib/util/strerror_tls.c 00:10:15.187 Processing file lib/util/fd.c 00:10:15.187 Processing file lib/util/iov.c 00:10:15.187 Processing file lib/util/cpuset.c 00:10:15.187 Processing file lib/util/crc64.c 00:10:15.187 Processing file lib/util/crc32_ieee.c 00:10:15.187 Processing file lib/util/uuid.c 00:10:15.187 Processing file lib/util/crc32c.c 00:10:15.187 Processing file lib/util/file.c 00:10:15.187 Processing file lib/util/fd_group.c 00:10:15.187 Processing file lib/util/pipe.c 00:10:15.187 Processing file lib/util/base64.c 00:10:15.187 Processing file lib/util/xor.c 00:10:15.187 Processing file lib/util/zipf.c 00:10:15.187 Processing file lib/util/dif.c 00:10:15.187 Processing file lib/util/math.c 00:10:15.187 Processing file lib/util/crc16.c 00:10:15.445 Processing file lib/vfio_user/host/vfio_user.c 00:10:15.445 Processing file lib/vfio_user/host/vfio_user_pci.c 00:10:15.445 Processing file lib/vhost/vhost_scsi.c 00:10:15.445 Processing file lib/vhost/vhost_rpc.c 00:10:15.445 Processing file lib/vhost/vhost_internal.h 00:10:15.445 Processing file lib/vhost/vhost.c 00:10:15.445 Processing file lib/vhost/rte_vhost_user.c 00:10:15.445 Processing file lib/vhost/vhost_blk.c 00:10:15.704 Processing file lib/virtio/virtio_vfio_user.c 00:10:15.704 Processing file lib/virtio/virtio_pci.c 00:10:15.704 Processing file lib/virtio/virtio.c 00:10:15.704 Processing file lib/virtio/virtio_vhost_user.c 00:10:15.704 Processing file lib/vmd/led.c 00:10:15.704 Processing file lib/vmd/vmd.c 00:10:15.704 Processing file module/accel/dsa/accel_dsa.c 00:10:15.704 Processing file module/accel/dsa/accel_dsa_rpc.c 00:10:15.704 Processing file module/accel/error/accel_error.c 00:10:15.704 Processing file 
module/accel/error/accel_error_rpc.c 00:10:15.963 Processing file module/accel/iaa/accel_iaa_rpc.c 00:10:15.963 Processing file module/accel/iaa/accel_iaa.c 00:10:15.963 Processing file module/accel/ioat/accel_ioat_rpc.c 00:10:15.963 Processing file module/accel/ioat/accel_ioat.c 00:10:15.963 Processing file module/bdev/aio/bdev_aio_rpc.c 00:10:15.963 Processing file module/bdev/aio/bdev_aio.c 00:10:15.963 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:10:15.963 Processing file module/bdev/delay/vbdev_delay.c 00:10:16.223 Processing file module/bdev/error/vbdev_error.c 00:10:16.223 Processing file module/bdev/error/vbdev_error_rpc.c 00:10:16.223 Processing file module/bdev/ftl/bdev_ftl.c 00:10:16.223 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:10:16.223 Processing file module/bdev/gpt/gpt.h 00:10:16.223 Processing file module/bdev/gpt/gpt.c 00:10:16.223 Processing file module/bdev/gpt/vbdev_gpt.c 00:10:16.482 Processing file module/bdev/iscsi/bdev_iscsi.c 00:10:16.482 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:10:16.482 Processing file module/bdev/lvol/vbdev_lvol.c 00:10:16.482 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:10:16.482 Processing file module/bdev/malloc/bdev_malloc.c 00:10:16.482 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:10:16.482 Processing file module/bdev/null/bdev_null_rpc.c 00:10:16.482 Processing file module/bdev/null/bdev_null.c 00:10:16.741 Processing file module/bdev/nvme/bdev_mdns_client.c 00:10:16.741 Processing file module/bdev/nvme/vbdev_opal.c 00:10:16.741 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:10:16.741 Processing file module/bdev/nvme/nvme_rpc.c 00:10:16.741 Processing file module/bdev/nvme/bdev_nvme.c 00:10:16.741 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:10:16.741 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:10:17.001 Processing file module/bdev/passthru/vbdev_passthru.c 00:10:17.001 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:10:17.001 Processing file module/bdev/raid/raid0.c 00:10:17.001 Processing file module/bdev/raid/raid1.c 00:10:17.001 Processing file module/bdev/raid/concat.c 00:10:17.001 Processing file module/bdev/raid/bdev_raid_rpc.c 00:10:17.001 Processing file module/bdev/raid/bdev_raid_sb.c 00:10:17.001 Processing file module/bdev/raid/bdev_raid.c 00:10:17.001 Processing file module/bdev/raid/bdev_raid.h 00:10:17.001 Processing file module/bdev/raid/raid5f.c 00:10:17.261 Processing file module/bdev/split/vbdev_split.c 00:10:17.261 Processing file module/bdev/split/vbdev_split_rpc.c 00:10:17.261 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:10:17.261 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:10:17.261 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:10:17.261 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:10:17.261 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:10:17.261 Processing file module/blob/bdev/blob_bdev.c 00:10:17.520 Processing file module/blobfs/bdev/blobfs_bdev.c 00:10:17.520 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:10:17.520 Processing file module/env_dpdk/env_dpdk_rpc.c 00:10:17.520 Processing file module/event/subsystems/accel/accel.c 00:10:17.520 Processing file module/event/subsystems/bdev/bdev.c 00:10:17.779 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:10:17.779 Processing file module/event/subsystems/iobuf/iobuf.c 00:10:17.779 Processing file module/event/subsystems/iscsi/iscsi.c 00:10:17.779 Processing file 
module/event/subsystems/keyring/keyring.c 00:10:17.779 Processing file module/event/subsystems/nbd/nbd.c 00:10:18.037 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:10:18.037 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:10:18.037 Processing file module/event/subsystems/scheduler/scheduler.c 00:10:18.037 Processing file module/event/subsystems/scsi/scsi.c 00:10:18.037 Processing file module/event/subsystems/sock/sock.c 00:10:18.296 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:10:18.296 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:10:18.296 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:10:18.296 Processing file module/event/subsystems/vmd/vmd.c 00:10:18.296 Processing file module/keyring/file/keyring.c 00:10:18.296 Processing file module/keyring/file/keyring_rpc.c 00:10:18.555 Processing file module/keyring/linux/keyring.c 00:10:18.555 Processing file module/keyring/linux/keyring_rpc.c 00:10:18.555 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:10:18.555 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:10:18.555 Processing file module/scheduler/gscheduler/gscheduler.c 00:10:18.814 Processing file module/sock/sock_kernel.h 00:10:18.814 Processing file module/sock/posix/posix.c 00:10:18.814 Writing directory view page. 00:10:18.814 Overall coverage rate: 00:10:18.814 lines......: 38.7% (40806 of 105395 lines) 00:10:18.814 functions..: 42.4% (3713 of 8766 functions) 00:10:18.814 00:10:18.814 00:10:18.814 ===================== 00:10:18.814 All unit tests passed 00:10:18.814 ===================== 00:10:18.814 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:10:18.814 07:20:52 unittest -- unit/unittest.sh@305 -- # set +x 00:10:18.814 00:10:18.814 00:10:18.814 00:10:18.814 real 3m45.137s 00:10:18.814 user 3m10.913s 00:10:18.814 sys 0m24.895s 00:10:18.814 07:20:52 unittest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:18.814 07:20:52 unittest -- common/autotest_common.sh@10 -- # set +x 00:10:18.814 ************************************ 00:10:18.814 END TEST unittest 00:10:18.814 ************************************ 00:10:18.814 07:20:52 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:10:18.814 07:20:52 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:10:18.814 07:20:52 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:10:18.814 07:20:52 -- spdk/autotest.sh@162 -- # timing_enter lib 00:10:18.814 07:20:52 -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:18.814 07:20:52 -- common/autotest_common.sh@10 -- # set +x 00:10:18.814 07:20:52 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:10:18.814 07:20:52 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:18.814 07:20:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:18.814 07:20:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:18.814 07:20:52 -- common/autotest_common.sh@10 -- # set +x 00:10:19.073 ************************************ 00:10:19.073 START TEST env 00:10:19.073 ************************************ 00:10:19.073 07:20:52 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:19.073 * Looking for test storage... 
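Every *_ut binary run in the unittest stage above follows the same CUnit harness pattern, and CU_basic_run_tests() is what prints the "Run Summary" tables seen throughout this log. A minimal sketch of that pattern, with "example_suite" and test_example as hypothetical names:

#include <CUnit/Basic.h>

/* Minimal CUnit harness sketch; mirrors the suite/test registration the
 * SPDK unit tests above use. CU_basic_run_tests() emits the same
 * "Run Summary: Type Total Ran Passed Failed Inactive" table. */
static void
test_example(void)
{
    CU_ASSERT(1 + 1 == 2);
}

int
main(void)
{
    CU_pSuite suite;
    unsigned int failures;

    if (CU_initialize_registry() != CUE_SUCCESS) {
        return CU_get_error();
    }
    suite = CU_add_suite("example_suite", NULL, NULL);
    CU_add_test(suite, "test_example", test_example);
    CU_basic_set_mode(CU_BRM_VERBOSE);
    CU_basic_run_tests();
    failures = CU_get_number_of_failures();
    CU_cleanup_registry();
    return failures > 0 ? 1 : 0;
}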
00:10:19.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:10:19.074 07:20:52 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:19.074 07:20:52 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:19.074 07:20:52 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:19.074 07:20:52 env -- common/autotest_common.sh@10 -- # set +x 00:10:19.074 ************************************ 00:10:19.074 START TEST env_memory 00:10:19.074 ************************************ 00:10:19.074 07:20:52 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:19.074 00:10:19.074 00:10:19.074 CUnit - A unit testing framework for C - Version 2.1-3 00:10:19.074 http://cunit.sourceforge.net/ 00:10:19.074 00:10:19.074 00:10:19.074 Suite: memory 00:10:19.074 Test: alloc and free memory map ...[2024-07-12 07:20:52.893370] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:10:19.074 passed 00:10:19.074 Test: mem map translation ...[2024-07-12 07:20:52.949067] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:10:19.074 [2024-07-12 07:20:52.949208] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:10:19.074 [2024-07-12 07:20:52.949350] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:10:19.074 [2024-07-12 07:20:52.949440] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:10:19.331 passed 00:10:19.331 Test: mem map registration ...[2024-07-12 07:20:53.042186] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:10:19.331 [2024-07-12 07:20:53.042323] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:10:19.331 passed 00:10:19.331 Test: mem map adjacent registrations ...passed 00:10:19.331 00:10:19.331 Run Summary: Type Total Ran Passed Failed Inactive 00:10:19.331 suites 1 1 n/a 0 0 00:10:19.331 tests 4 4 4 0 0 00:10:19.331 asserts 152 152 152 0 n/a 00:10:19.331 00:10:19.331 Elapsed time = 0.331 seconds 00:10:19.331 00:10:19.331 real 0m0.371s 00:10:19.331 user 0m0.332s 00:10:19.331 sys 0m0.040s 00:10:19.331 07:20:53 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:19.331 07:20:53 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:10:19.331 ************************************ 00:10:19.331 END TEST env_memory 00:10:19.331 ************************************ 00:10:19.588 07:20:53 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:19.588 07:20:53 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:19.588 07:20:53 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:19.588 07:20:53 env -- common/autotest_common.sh@10 -- # set +x 00:10:19.588 ************************************ 00:10:19.588 START TEST env_vtophys 00:10:19.588 ************************************ 00:10:19.588 07:20:53 
env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:19.588 EAL: lib.eal log level changed from notice to debug 00:10:19.588 EAL: Detected lcore 0 as core 0 on socket 0 00:10:19.588 EAL: Detected lcore 1 as core 0 on socket 0 00:10:19.588 EAL: Detected lcore 2 as core 0 on socket 0 00:10:19.588 EAL: Detected lcore 3 as core 0 on socket 0 00:10:19.588 EAL: Detected lcore 4 as core 0 on socket 0 00:10:19.588 EAL: Detected lcore 5 as core 0 on socket 0 00:10:19.588 EAL: Detected lcore 6 as core 0 on socket 0 00:10:19.588 EAL: Detected lcore 7 as core 0 on socket 0 00:10:19.588 EAL: Detected lcore 8 as core 0 on socket 0 00:10:19.588 EAL: Detected lcore 9 as core 0 on socket 0 00:10:19.588 EAL: Maximum logical cores by configuration: 128 00:10:19.588 EAL: Detected CPU lcores: 10 00:10:19.588 EAL: Detected NUMA nodes: 1 00:10:19.588 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:10:19.588 EAL: Checking presence of .so 'librte_eal.so.23' 00:10:19.588 EAL: Checking presence of .so 'librte_eal.so' 00:10:19.588 EAL: Detected static linkage of DPDK 00:10:19.588 EAL: No shared files mode enabled, IPC will be disabled 00:10:19.588 EAL: Selected IOVA mode 'PA' 00:10:19.588 EAL: Probing VFIO support... 00:10:19.588 EAL: IOMMU type 1 (Type 1) is supported 00:10:19.588 EAL: IOMMU type 7 (sPAPR) is not supported 00:10:19.588 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:10:19.588 EAL: VFIO support initialized 00:10:19.588 EAL: Ask a virtual area of 0x2e000 bytes 00:10:19.588 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:10:19.588 EAL: Setting up physically contiguous memory... 00:10:19.588 EAL: Setting maximum number of open files to 1048576 00:10:19.588 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:10:19.588 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:10:19.588 EAL: Ask a virtual area of 0x61000 bytes 00:10:19.588 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:10:19.588 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:19.588 EAL: Ask a virtual area of 0x400000000 bytes 00:10:19.588 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:10:19.588 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:10:19.588 EAL: Ask a virtual area of 0x61000 bytes 00:10:19.588 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:10:19.588 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:19.588 EAL: Ask a virtual area of 0x400000000 bytes 00:10:19.588 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:10:19.588 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:10:19.588 EAL: Ask a virtual area of 0x61000 bytes 00:10:19.588 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:10:19.588 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:19.588 EAL: Ask a virtual area of 0x400000000 bytes 00:10:19.588 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:10:19.588 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:10:19.588 EAL: Ask a virtual area of 0x61000 bytes 00:10:19.588 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:10:19.588 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:19.588 EAL: Ask a virtual area of 0x400000000 bytes 00:10:19.588 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:10:19.588 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 
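The EAL lcore/VFIO/memseg probe lines above are emitted while spdk_env_init() brings DPDK up. A minimal sketch of the consumer side that triggers this output, where "env_demo" is a hypothetical application name:

#include <stdio.h>
#include "spdk/env.h"

int
main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);          /* fill in defaults */
    opts.name = "env_demo";             /* hypothetical app name */

    /* spdk_env_init() runs the EAL detection/probe seen above */
    if (spdk_env_init(&opts) < 0) {
        fprintf(stderr, "failed to initialize SPDK env\n");
        return 1;
    }
    /* ... DMA-safe allocations, vtophys lookups, etc. ... */
    spdk_env_fini();
    return 0;
}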
00:10:19.588 EAL: Hugepages will be freed exactly as allocated. 00:10:19.588 EAL: No shared files mode enabled, IPC is disabled 00:10:19.588 EAL: No shared files mode enabled, IPC is disabled 00:10:19.588 EAL: TSC frequency is ~2100000 KHz 00:10:19.588 EAL: Main lcore 0 is ready (tid=7fa6a9ccda80;cpuset=[0]) 00:10:19.588 EAL: Trying to obtain current memory policy. 00:10:19.588 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:19.588 EAL: Restoring previous memory policy: 0 00:10:19.588 EAL: request: mp_malloc_sync 00:10:19.588 EAL: No shared files mode enabled, IPC is disabled 00:10:19.588 EAL: Heap on socket 0 was expanded by 2MB 00:10:19.588 EAL: No shared files mode enabled, IPC is disabled 00:10:19.588 EAL: Mem event callback 'spdk:(nil)' registered 00:10:19.588 00:10:19.588 00:10:19.588 CUnit - A unit testing framework for C - Version 2.1-3 00:10:19.588 http://cunit.sourceforge.net/ 00:10:19.588 00:10:19.588 00:10:19.588 Suite: components_suite 00:10:20.519 Test: vtophys_malloc_test ...passed 00:10:20.519 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:10:20.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:20.519 EAL: Restoring previous memory policy: 0 00:10:20.519 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.519 EAL: request: mp_malloc_sync 00:10:20.519 EAL: No shared files mode enabled, IPC is disabled 00:10:20.519 EAL: Heap on socket 0 was expanded by 4MB 00:10:20.519 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.519 EAL: request: mp_malloc_sync 00:10:20.519 EAL: No shared files mode enabled, IPC is disabled 00:10:20.519 EAL: Heap on socket 0 was shrunk by 4MB 00:10:20.519 EAL: Trying to obtain current memory policy. 00:10:20.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:20.519 EAL: Restoring previous memory policy: 0 00:10:20.519 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.519 EAL: request: mp_malloc_sync 00:10:20.519 EAL: No shared files mode enabled, IPC is disabled 00:10:20.519 EAL: Heap on socket 0 was expanded by 6MB 00:10:20.519 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.519 EAL: request: mp_malloc_sync 00:10:20.519 EAL: No shared files mode enabled, IPC is disabled 00:10:20.519 EAL: Heap on socket 0 was shrunk by 6MB 00:10:20.519 EAL: Trying to obtain current memory policy. 00:10:20.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:20.519 EAL: Restoring previous memory policy: 0 00:10:20.519 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.519 EAL: request: mp_malloc_sync 00:10:20.519 EAL: No shared files mode enabled, IPC is disabled 00:10:20.519 EAL: Heap on socket 0 was expanded by 10MB 00:10:20.519 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.519 EAL: request: mp_malloc_sync 00:10:20.519 EAL: No shared files mode enabled, IPC is disabled 00:10:20.519 EAL: Heap on socket 0 was shrunk by 10MB 00:10:20.519 EAL: Trying to obtain current memory policy. 00:10:20.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:20.519 EAL: Restoring previous memory policy: 0 00:10:20.519 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.519 EAL: request: mp_malloc_sync 00:10:20.519 EAL: No shared files mode enabled, IPC is disabled 00:10:20.519 EAL: Heap on socket 0 was expanded by 18MB 00:10:20.519 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.519 EAL: request: mp_malloc_sync 00:10:20.519 EAL: No shared files mode enabled, IPC is disabled 00:10:20.519 EAL: Heap on socket 0 was shrunk by 18MB 00:10:20.519 EAL: Trying to obtain current memory policy. 
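Roughly, each expand/shrink round above is an allocation that outgrows the pinned DPDK heap ("Heap on socket 0 was expanded by N MB") followed by its release. A hedged sketch of the kind of call the vtophys suite exercises, pairing a DMA-safe allocation with a virtual-to-physical lookup:

#include <assert.h>
#include "spdk/env.h"

/* Sketch only: allocate 4 MB of pinned, DMA-safe memory (may grow the
 * heap as in the mem event callbacks above), resolve it to a physical
 * or IOVA address, then free it (triggering the matching shrink). */
static void
vtophys_demo(void)
{
    uint64_t size = 4 * 1024 * 1024;
    void *buf = spdk_dma_malloc(size, 0x1000, NULL);   /* 4 KB alignment */

    assert(buf != NULL);
    uint64_t paddr = spdk_vtophys(buf, &size);
    assert(paddr != SPDK_VTOPHYS_ERROR);
    spdk_dma_free(buf);
}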
00:10:20.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:20.519 EAL: Restoring previous memory policy: 0 00:10:20.519 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.519 EAL: request: mp_malloc_sync 00:10:20.519 EAL: No shared files mode enabled, IPC is disabled 00:10:20.519 EAL: Heap on socket 0 was expanded by 34MB 00:10:20.520 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.520 EAL: request: mp_malloc_sync 00:10:20.520 EAL: No shared files mode enabled, IPC is disabled 00:10:20.520 EAL: Heap on socket 0 was shrunk by 34MB 00:10:20.520 EAL: Trying to obtain current memory policy. 00:10:20.520 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:20.520 EAL: Restoring previous memory policy: 0 00:10:20.520 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.520 EAL: request: mp_malloc_sync 00:10:20.520 EAL: No shared files mode enabled, IPC is disabled 00:10:20.520 EAL: Heap on socket 0 was expanded by 66MB 00:10:20.520 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.520 EAL: request: mp_malloc_sync 00:10:20.520 EAL: No shared files mode enabled, IPC is disabled 00:10:20.520 EAL: Heap on socket 0 was shrunk by 66MB 00:10:20.520 EAL: Trying to obtain current memory policy. 00:10:20.520 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:20.520 EAL: Restoring previous memory policy: 0 00:10:20.520 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.520 EAL: request: mp_malloc_sync 00:10:20.520 EAL: No shared files mode enabled, IPC is disabled 00:10:20.520 EAL: Heap on socket 0 was expanded by 130MB 00:10:20.520 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.520 EAL: request: mp_malloc_sync 00:10:20.520 EAL: No shared files mode enabled, IPC is disabled 00:10:20.520 EAL: Heap on socket 0 was shrunk by 130MB 00:10:20.520 EAL: Trying to obtain current memory policy. 00:10:20.520 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:20.520 EAL: Restoring previous memory policy: 0 00:10:20.520 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.520 EAL: request: mp_malloc_sync 00:10:20.520 EAL: No shared files mode enabled, IPC is disabled 00:10:20.520 EAL: Heap on socket 0 was expanded by 258MB 00:10:20.777 EAL: Calling mem event callback 'spdk:(nil)' 00:10:20.777 EAL: request: mp_malloc_sync 00:10:20.777 EAL: No shared files mode enabled, IPC is disabled 00:10:20.777 EAL: Heap on socket 0 was shrunk by 258MB 00:10:20.777 EAL: Trying to obtain current memory policy. 00:10:20.777 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:21.033 EAL: Restoring previous memory policy: 0 00:10:21.033 EAL: Calling mem event callback 'spdk:(nil)' 00:10:21.033 EAL: request: mp_malloc_sync 00:10:21.033 EAL: No shared files mode enabled, IPC is disabled 00:10:21.033 EAL: Heap on socket 0 was expanded by 514MB 00:10:21.291 EAL: Calling mem event callback 'spdk:(nil)' 00:10:21.291 EAL: request: mp_malloc_sync 00:10:21.291 EAL: No shared files mode enabled, IPC is disabled 00:10:21.291 EAL: Heap on socket 0 was shrunk by 514MB 00:10:21.291 EAL: Trying to obtain current memory policy. 
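For reference, the *ERROR* lines in the env_memory run earlier (e.g. "invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234") are that suite's negative-path checks on the spdk_mem_map API, which requires 2 MB hugepage-granularity addresses and lengths. A hedged sketch of the happy path, with 0xdeadbeef as a stand-in translation value:

#include "spdk/env.h"

/* Sketch of the mem map API exercised by env_memory above; all vaddr/size
 * arguments must be 2 MB-aligned, which is exactly what the logged
 * "len=1234" rejections demonstrate. */
static void
mem_map_demo(void)
{
    struct spdk_mem_map *map;
    uint64_t size, translation;

    /* default translation 0, no notify ops for this sketch */
    map = spdk_mem_map_alloc(0, NULL, NULL);
    if (map == NULL) {
        return;
    }
    /* map a 2 MB region at vaddr 0x200000 to a hypothetical value */
    spdk_mem_map_set_translation(map, 0x200000, 0x200000, 0xdeadbeef);
    size = 0x200000;
    translation = spdk_mem_map_translate(map, 0x200000, &size);
    (void)translation;
    spdk_mem_map_free(&map);
}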
00:10:21.291 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:21.855 EAL: Restoring previous memory policy: 0 00:10:21.856 EAL: Calling mem event callback 'spdk:(nil)' 00:10:21.856 EAL: request: mp_malloc_sync 00:10:21.856 EAL: No shared files mode enabled, IPC is disabled 00:10:21.856 EAL: Heap on socket 0 was expanded by 1026MB 00:10:22.113 EAL: Calling mem event callback 'spdk:(nil)' 00:10:22.371 EAL: request: mp_malloc_sync 00:10:22.371 EAL: No shared files mode enabled, IPC is disabled 00:10:22.371 EAL: Heap on socket 0 was shrunk by 1026MB 00:10:22.371 passed 00:10:22.371 00:10:22.371 Run Summary: Type Total Ran Passed Failed Inactive 00:10:22.371 suites 1 1 n/a 0 0 00:10:22.371 tests 2 2 2 0 0 00:10:22.371 asserts 6345 6345 6345 0 n/a 00:10:22.371 00:10:22.371 Elapsed time = 2.576 seconds 00:10:22.371 EAL: Calling mem event callback 'spdk:(nil)' 00:10:22.371 EAL: request: mp_malloc_sync 00:10:22.371 EAL: No shared files mode enabled, IPC is disabled 00:10:22.371 EAL: Heap on socket 0 was shrunk by 2MB 00:10:22.371 EAL: No shared files mode enabled, IPC is disabled 00:10:22.371 EAL: No shared files mode enabled, IPC is disabled 00:10:22.371 EAL: No shared files mode enabled, IPC is disabled 00:10:22.371 00:10:22.371 real 0m2.866s 00:10:22.371 user 0m1.485s 00:10:22.371 sys 0m1.237s 00:10:22.371 07:20:56 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:22.371 07:20:56 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:10:22.371 ************************************ 00:10:22.371 END TEST env_vtophys 00:10:22.371 ************************************ 00:10:22.371 07:20:56 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:22.371 07:20:56 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:22.371 07:20:56 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:22.371 07:20:56 env -- common/autotest_common.sh@10 -- # set +x 00:10:22.371 ************************************ 00:10:22.371 START TEST env_pci 00:10:22.371 ************************************ 00:10:22.371 07:20:56 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:22.371 00:10:22.371 00:10:22.371 CUnit - A unit testing framework for C - Version 2.1-3 00:10:22.371 http://cunit.sourceforge.net/ 00:10:22.371 00:10:22.371 00:10:22.371 Suite: pci 00:10:22.371 Test: pci_hook ...[2024-07-12 07:20:56.223955] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 123086 has claimed it 00:10:22.628 passed 00:10:22.628 00:10:22.628 EAL: Cannot find device (10000:00:01.0) 00:10:22.628 EAL: Failed to attach device on primary process 00:10:22.628 Run Summary: Type Total Ran Passed Failed Inactive 00:10:22.628 suites 1 1 n/a 0 0 00:10:22.628 tests 1 1 1 0 0 00:10:22.628 asserts 25 25 25 0 n/a 00:10:22.628 00:10:22.628 Elapsed time = 0.009 seconds 00:10:22.628 00:10:22.628 real 0m0.084s 00:10:22.628 user 0m0.026s 00:10:22.628 sys 0m0.058s 00:10:22.628 07:20:56 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:22.628 07:20:56 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:10:22.628 ************************************ 00:10:22.628 END TEST env_pci 00:10:22.628 ************************************ 00:10:22.628 07:20:56 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:10:22.628 07:20:56 env -- env/env.sh@15 -- # uname 00:10:22.628 07:20:56 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:10:22.628 07:20:56 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:10:22.628 07:20:56 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:22.628 07:20:56 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:10:22.628 07:20:56 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:22.628 07:20:56 env -- common/autotest_common.sh@10 -- # set +x 00:10:22.628 ************************************ 00:10:22.628 START TEST env_dpdk_post_init 00:10:22.628 ************************************ 00:10:22.628 07:20:56 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:22.628 EAL: Detected CPU lcores: 10 00:10:22.628 EAL: Detected NUMA nodes: 1 00:10:22.628 EAL: Detected static linkage of DPDK 00:10:22.629 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:22.629 EAL: Selected IOVA mode 'PA' 00:10:22.629 EAL: VFIO support initialized 00:10:22.887 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:22.887 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:10:22.887 Starting DPDK initialization... 00:10:22.887 Starting SPDK post initialization... 00:10:22.887 SPDK NVMe probe 00:10:22.887 Attaching to 0000:00:10.0 00:10:22.887 Attached to 0000:00:10.0 00:10:22.887 Cleaning up... 00:10:22.887 00:10:22.887 real 0m0.260s 00:10:22.887 user 0m0.064s 00:10:22.887 sys 0m0.098s 00:10:22.887 07:20:56 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:22.887 07:20:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:10:22.887 ************************************ 00:10:22.887 END TEST env_dpdk_post_init 00:10:22.887 ************************************ 00:10:22.887 07:20:56 env -- env/env.sh@26 -- # uname 00:10:22.887 07:20:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:10:22.887 07:20:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:22.887 07:20:56 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:22.887 07:20:56 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:22.887 07:20:56 env -- common/autotest_common.sh@10 -- # set +x 00:10:22.887 ************************************ 00:10:22.887 START TEST env_mem_callbacks 00:10:22.887 ************************************ 00:10:22.887 07:20:56 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:22.887 EAL: Detected CPU lcores: 10 00:10:22.887 EAL: Detected NUMA nodes: 1 00:10:22.887 EAL: Detected static linkage of DPDK 00:10:22.887 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:22.887 EAL: Selected IOVA mode 'PA' 00:10:22.887 EAL: VFIO support initialized 00:10:23.145 00:10:23.145 00:10:23.145 CUnit - A unit testing framework for C - Version 2.1-3 00:10:23.145 http://cunit.sourceforge.net/ 00:10:23.145 00:10:23.145 00:10:23.145 Suite: memory 00:10:23.145 Test: test ... 
00:10:23.145 register 0x200000200000 2097152 00:10:23.145 malloc 3145728 00:10:23.145 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:23.145 register 0x200000400000 4194304 00:10:23.145 buf 0x200000500000 len 3145728 PASSED 00:10:23.145 malloc 64 00:10:23.145 buf 0x2000004fff40 len 64 PASSED 00:10:23.145 malloc 4194304 00:10:23.145 register 0x200000800000 6291456 00:10:23.145 buf 0x200000a00000 len 4194304 PASSED 00:10:23.145 free 0x200000500000 3145728 00:10:23.145 free 0x2000004fff40 64 00:10:23.145 unregister 0x200000400000 4194304 PASSED 00:10:23.145 free 0x200000a00000 4194304 00:10:23.145 unregister 0x200000800000 6291456 PASSED 00:10:23.145 malloc 8388608 00:10:23.145 register 0x200000400000 10485760 00:10:23.145 buf 0x200000600000 len 8388608 PASSED 00:10:23.145 free 0x200000600000 8388608 00:10:23.145 unregister 0x200000400000 10485760 PASSED 00:10:23.145 passed 00:10:23.145 00:10:23.145 Run Summary: Type Total Ran Passed Failed Inactive 00:10:23.145 suites 1 1 n/a 0 0 00:10:23.145 tests 1 1 1 0 0 00:10:23.145 asserts 15 15 15 0 n/a 00:10:23.145 00:10:23.145 Elapsed time = 0.008 seconds 00:10:23.145 00:10:23.145 real 0m0.222s 00:10:23.145 user 0m0.035s 00:10:23.145 sys 0m0.088s 00:10:23.145 07:20:56 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:23.145 07:20:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:10:23.145 ************************************ 00:10:23.145 END TEST env_mem_callbacks 00:10:23.145 ************************************ 00:10:23.145 00:10:23.145 real 0m4.267s 00:10:23.145 user 0m2.163s 00:10:23.145 sys 0m1.779s 00:10:23.145 07:20:56 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:23.145 07:20:56 env -- common/autotest_common.sh@10 -- # set +x 00:10:23.145 ************************************ 00:10:23.145 END TEST env 00:10:23.145 ************************************ 00:10:23.145 07:20:57 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:23.145 07:20:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:23.145 07:20:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:23.145 07:20:57 -- common/autotest_common.sh@10 -- # set +x 00:10:23.403 ************************************ 00:10:23.403 START TEST rpc 00:10:23.403 ************************************ 00:10:23.403 07:20:57 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:23.403 * Looking for test storage... 00:10:23.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:23.403 07:20:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=123211 00:10:23.403 07:20:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:23.403 07:20:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 123211 00:10:23.403 07:20:57 rpc -- common/autotest_common.sh@827 -- # '[' -z 123211 ']' 00:10:23.403 07:20:57 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:10:23.403 07:20:57 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.403 07:20:57 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:23.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.403 07:20:57 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
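The "register"/"unregister" lines in the mem_callbacks trace above correspond to spdk_mem_register()/spdk_mem_unregister() calls; the per-buffer "buf ... len ... PASSED" lines are printed by the test's own registered mem-event callback when those calls fire it. A hedged sketch of that pairing, assuming the caller supplies a suitably aligned pinned region:

#include "spdk/env.h"

/* Sketch only: register a 2 MB region for DMA, then unregister it.
 * Registration fans out to mem-event callbacks (as seen in the
 * mem_callbacks output above). */
static int
mem_callbacks_demo(void *vaddr)
{
    size_t len = 2 * 1024 * 1024;           /* hugepage-granularity length */
    int rc;

    rc = spdk_mem_register(vaddr, len);     /* -> "register <vaddr> <len>" */
    if (rc != 0) {
        return rc;
    }
    return spdk_mem_unregister(vaddr, len); /* -> matching "unregister" */
}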
00:10:23.403 07:20:57 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:23.403 07:20:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.403 [2024-07-12 07:20:57.260165] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:10:23.403 [2024-07-12 07:20:57.260426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123211 ] 00:10:23.661 [2024-07-12 07:20:57.415285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.661 [2024-07-12 07:20:57.490168] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:10:23.661 [2024-07-12 07:20:57.490498] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 123211' to capture a snapshot of events at runtime. 00:10:23.661 [2024-07-12 07:20:57.490896] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:23.661 [2024-07-12 07:20:57.491040] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:23.661 [2024-07-12 07:20:57.491178] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid123211 for offline analysis/debug. 00:10:23.661 [2024-07-12 07:20:57.491334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.595 07:20:58 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:24.595 07:20:58 rpc -- common/autotest_common.sh@860 -- # return 0 00:10:24.595 07:20:58 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:24.595 07:20:58 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:24.595 07:20:58 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:10:24.595 07:20:58 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:10:24.595 07:20:58 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:24.595 07:20:58 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:24.595 07:20:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.595 ************************************ 00:10:24.595 START TEST rpc_integrity 00:10:24.595 ************************************ 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:10:24.595 07:20:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.595 07:20:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:24.595 07:20:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:24.595 07:20:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:24.595 07:20:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.595 07:20:58 rpc.rpc_integrity 
-- common/autotest_common.sh@10 -- # set +x 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.595 07:20:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:10:24.595 07:20:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.595 07:20:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:24.595 { 00:10:24.595 "name": "Malloc0", 00:10:24.595 "aliases": [ 00:10:24.595 "17852b81-1a6d-4d1e-8b21-83de1156cd9d" 00:10:24.595 ], 00:10:24.595 "product_name": "Malloc disk", 00:10:24.595 "block_size": 512, 00:10:24.595 "num_blocks": 16384, 00:10:24.595 "uuid": "17852b81-1a6d-4d1e-8b21-83de1156cd9d", 00:10:24.595 "assigned_rate_limits": { 00:10:24.595 "rw_ios_per_sec": 0, 00:10:24.595 "rw_mbytes_per_sec": 0, 00:10:24.595 "r_mbytes_per_sec": 0, 00:10:24.595 "w_mbytes_per_sec": 0 00:10:24.595 }, 00:10:24.595 "claimed": false, 00:10:24.595 "zoned": false, 00:10:24.595 "supported_io_types": { 00:10:24.595 "read": true, 00:10:24.595 "write": true, 00:10:24.595 "unmap": true, 00:10:24.595 "write_zeroes": true, 00:10:24.595 "flush": true, 00:10:24.595 "reset": true, 00:10:24.595 "compare": false, 00:10:24.595 "compare_and_write": false, 00:10:24.595 "abort": true, 00:10:24.595 "nvme_admin": false, 00:10:24.595 "nvme_io": false 00:10:24.595 }, 00:10:24.595 "memory_domains": [ 00:10:24.595 { 00:10:24.595 "dma_device_id": "system", 00:10:24.595 "dma_device_type": 1 00:10:24.595 }, 00:10:24.595 { 00:10:24.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.595 "dma_device_type": 2 00:10:24.595 } 00:10:24.595 ], 00:10:24.595 "driver_specific": {} 00:10:24.595 } 00:10:24.595 ]' 00:10:24.595 07:20:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:24.595 07:20:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:24.595 07:20:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:24.595 [2024-07-12 07:20:58.335875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:10:24.595 [2024-07-12 07:20:58.336609] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.595 [2024-07-12 07:20:58.336756] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006080 00:10:24.595 [2024-07-12 07:20:58.336875] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.595 [2024-07-12 07:20:58.340012] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.595 [2024-07-12 07:20:58.340216] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:24.595 Passthru0 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.595 07:20:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:10:24.595 07:20:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:24.595 { 00:10:24.595 "name": "Malloc0", 00:10:24.595 "aliases": [ 00:10:24.595 "17852b81-1a6d-4d1e-8b21-83de1156cd9d" 00:10:24.595 ], 00:10:24.595 "product_name": "Malloc disk", 00:10:24.595 "block_size": 512, 00:10:24.595 "num_blocks": 16384, 00:10:24.595 "uuid": "17852b81-1a6d-4d1e-8b21-83de1156cd9d", 00:10:24.595 "assigned_rate_limits": { 00:10:24.595 "rw_ios_per_sec": 0, 00:10:24.595 "rw_mbytes_per_sec": 0, 00:10:24.595 "r_mbytes_per_sec": 0, 00:10:24.595 "w_mbytes_per_sec": 0 00:10:24.595 }, 00:10:24.595 "claimed": true, 00:10:24.595 "claim_type": "exclusive_write", 00:10:24.595 "zoned": false, 00:10:24.595 "supported_io_types": { 00:10:24.595 "read": true, 00:10:24.595 "write": true, 00:10:24.595 "unmap": true, 00:10:24.595 "write_zeroes": true, 00:10:24.595 "flush": true, 00:10:24.595 "reset": true, 00:10:24.595 "compare": false, 00:10:24.595 "compare_and_write": false, 00:10:24.595 "abort": true, 00:10:24.595 "nvme_admin": false, 00:10:24.595 "nvme_io": false 00:10:24.595 }, 00:10:24.595 "memory_domains": [ 00:10:24.595 { 00:10:24.595 "dma_device_id": "system", 00:10:24.595 "dma_device_type": 1 00:10:24.595 }, 00:10:24.595 { 00:10:24.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.595 "dma_device_type": 2 00:10:24.595 } 00:10:24.595 ], 00:10:24.595 "driver_specific": {} 00:10:24.595 }, 00:10:24.595 { 00:10:24.595 "name": "Passthru0", 00:10:24.595 "aliases": [ 00:10:24.595 "76b820c0-2bf6-5134-9c0b-e68973fcc8c5" 00:10:24.595 ], 00:10:24.595 "product_name": "passthru", 00:10:24.595 "block_size": 512, 00:10:24.595 "num_blocks": 16384, 00:10:24.595 "uuid": "76b820c0-2bf6-5134-9c0b-e68973fcc8c5", 00:10:24.595 "assigned_rate_limits": { 00:10:24.595 "rw_ios_per_sec": 0, 00:10:24.595 "rw_mbytes_per_sec": 0, 00:10:24.595 "r_mbytes_per_sec": 0, 00:10:24.595 "w_mbytes_per_sec": 0 00:10:24.595 }, 00:10:24.595 "claimed": false, 00:10:24.595 "zoned": false, 00:10:24.595 "supported_io_types": { 00:10:24.595 "read": true, 00:10:24.595 "write": true, 00:10:24.595 "unmap": true, 00:10:24.595 "write_zeroes": true, 00:10:24.595 "flush": true, 00:10:24.595 "reset": true, 00:10:24.595 "compare": false, 00:10:24.595 "compare_and_write": false, 00:10:24.595 "abort": true, 00:10:24.595 "nvme_admin": false, 00:10:24.595 "nvme_io": false 00:10:24.595 }, 00:10:24.595 "memory_domains": [ 00:10:24.595 { 00:10:24.595 "dma_device_id": "system", 00:10:24.595 "dma_device_type": 1 00:10:24.595 }, 00:10:24.595 { 00:10:24.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.595 "dma_device_type": 2 00:10:24.595 } 00:10:24.595 ], 00:10:24.595 "driver_specific": { 00:10:24.595 "passthru": { 00:10:24.595 "name": "Passthru0", 00:10:24.595 "base_bdev_name": "Malloc0" 00:10:24.595 } 00:10:24.595 } 00:10:24.595 } 00:10:24.595 ]' 00:10:24.595 07:20:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:24.595 07:20:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:24.595 07:20:58 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.595 07:20:58 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 
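On the server side, every rpc_cmd invocation in this suite (bdev_malloc_create, bdev_get_bdevs, bdev_passthru_create, ...) reaches a handler registered with SPDK_RPC_REGISTER and answered over the /var/tmp/spdk.sock JSON-RPC socket. A hedged sketch of that registration pattern, with "demo_ping" as a hypothetical method name:

#include "spdk/rpc.h"
#include "spdk/jsonrpc.h"

/* Sketch of a minimal JSON-RPC method; real methods like bdev_get_bdevs
 * follow the same request/result shape seen in the JSON dumps above. */
static void
rpc_demo_ping(struct spdk_jsonrpc_request *request,
              const struct spdk_json_val *params)
{
    struct spdk_json_write_ctx *w;

    if (params != NULL) {
        spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INVALID_PARAMS,
                                         "demo_ping requires no parameters");
        return;
    }
    w = spdk_jsonrpc_begin_result(request);
    spdk_json_write_string(w, "pong");
    spdk_jsonrpc_end_result(request, w);
}
SPDK_RPC_REGISTER("demo_ping", rpc_demo_ping, SPDK_RPC_RUNTIME)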
00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.595 07:20:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:24.595 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.595 07:20:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:24.595 07:20:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:24.855 07:20:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:24.855 00:10:24.855 real 0m0.293s 00:10:24.855 user 0m0.187s 00:10:24.855 sys 0m0.038s 00:10:24.855 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:24.855 ************************************ 00:10:24.855 END TEST rpc_integrity 00:10:24.855 ************************************ 00:10:24.855 07:20:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:24.855 07:20:58 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:10:24.855 07:20:58 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:24.855 07:20:58 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:24.855 07:20:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.855 ************************************ 00:10:24.855 START TEST rpc_plugins 00:10:24.855 ************************************ 00:10:24.855 07:20:58 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:10:24.855 07:20:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:10:24.855 07:20:58 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.855 07:20:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:24.855 07:20:58 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.855 07:20:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:10:24.855 07:20:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:10:24.855 07:20:58 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.855 07:20:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:24.855 07:20:58 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.855 07:20:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:10:24.855 { 00:10:24.855 "name": "Malloc1", 00:10:24.855 "aliases": [ 00:10:24.855 "25259a95-1970-4d05-8b4d-2638cb105c74" 00:10:24.855 ], 00:10:24.855 "product_name": "Malloc disk", 00:10:24.855 "block_size": 4096, 00:10:24.855 "num_blocks": 256, 00:10:24.855 "uuid": "25259a95-1970-4d05-8b4d-2638cb105c74", 00:10:24.855 "assigned_rate_limits": { 00:10:24.855 "rw_ios_per_sec": 0, 00:10:24.855 "rw_mbytes_per_sec": 0, 00:10:24.855 "r_mbytes_per_sec": 0, 00:10:24.855 "w_mbytes_per_sec": 0 00:10:24.855 }, 00:10:24.855 "claimed": false, 00:10:24.855 "zoned": false, 00:10:24.855 "supported_io_types": { 00:10:24.855 "read": true, 00:10:24.855 "write": true, 00:10:24.855 "unmap": true, 00:10:24.855 "write_zeroes": true, 00:10:24.855 "flush": true, 00:10:24.855 "reset": true, 00:10:24.855 "compare": false, 00:10:24.855 "compare_and_write": false, 00:10:24.855 "abort": true, 00:10:24.855 "nvme_admin": false, 00:10:24.855 "nvme_io": false 00:10:24.855 }, 00:10:24.855 "memory_domains": [ 00:10:24.855 { 
00:10:24.855 "dma_device_id": "system", 00:10:24.855 "dma_device_type": 1 00:10:24.855 }, 00:10:24.855 { 00:10:24.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.855 "dma_device_type": 2 00:10:24.855 } 00:10:24.855 ], 00:10:24.855 "driver_specific": {} 00:10:24.855 } 00:10:24.855 ]' 00:10:24.855 07:20:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:10:24.855 07:20:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:10:24.855 07:20:58 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:10:24.855 07:20:58 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.855 07:20:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:24.855 07:20:58 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.855 07:20:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:10:24.855 07:20:58 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.855 07:20:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:24.855 07:20:58 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.855 07:20:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:10:24.855 07:20:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:10:24.855 07:20:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:10:24.855 00:10:24.855 real 0m0.143s 00:10:24.855 user 0m0.093s 00:10:24.855 sys 0m0.016s 00:10:24.855 07:20:58 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:24.855 07:20:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:24.855 ************************************ 00:10:24.855 END TEST rpc_plugins 00:10:24.855 ************************************ 00:10:25.114 07:20:58 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:10:25.114 07:20:58 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:25.114 07:20:58 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:25.114 07:20:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.114 ************************************ 00:10:25.114 START TEST rpc_trace_cmd_test 00:10:25.114 ************************************ 00:10:25.114 07:20:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:10:25.114 07:20:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:10:25.114 07:20:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:10:25.114 07:20:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.114 07:20:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.114 07:20:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.114 07:20:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:10:25.114 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid123211", 00:10:25.114 "tpoint_group_mask": "0x8", 00:10:25.114 "iscsi_conn": { 00:10:25.114 "mask": "0x2", 00:10:25.114 "tpoint_mask": "0x0" 00:10:25.114 }, 00:10:25.114 "scsi": { 00:10:25.114 "mask": "0x4", 00:10:25.114 "tpoint_mask": "0x0" 00:10:25.114 }, 00:10:25.114 "bdev": { 00:10:25.114 "mask": "0x8", 00:10:25.114 "tpoint_mask": "0xffffffffffffffff" 00:10:25.114 }, 00:10:25.114 "nvmf_rdma": { 00:10:25.114 "mask": "0x10", 00:10:25.114 "tpoint_mask": "0x0" 00:10:25.114 }, 00:10:25.114 "nvmf_tcp": { 00:10:25.114 "mask": "0x20", 00:10:25.114 "tpoint_mask": "0x0" 00:10:25.114 }, 00:10:25.114 "ftl": { 00:10:25.114 
"mask": "0x40", 00:10:25.114 "tpoint_mask": "0x0" 00:10:25.114 }, 00:10:25.114 "blobfs": { 00:10:25.114 "mask": "0x80", 00:10:25.114 "tpoint_mask": "0x0" 00:10:25.114 }, 00:10:25.114 "dsa": { 00:10:25.114 "mask": "0x200", 00:10:25.114 "tpoint_mask": "0x0" 00:10:25.114 }, 00:10:25.114 "thread": { 00:10:25.114 "mask": "0x400", 00:10:25.114 "tpoint_mask": "0x0" 00:10:25.114 }, 00:10:25.114 "nvme_pcie": { 00:10:25.114 "mask": "0x800", 00:10:25.114 "tpoint_mask": "0x0" 00:10:25.114 }, 00:10:25.114 "iaa": { 00:10:25.114 "mask": "0x1000", 00:10:25.114 "tpoint_mask": "0x0" 00:10:25.114 }, 00:10:25.114 "nvme_tcp": { 00:10:25.114 "mask": "0x2000", 00:10:25.114 "tpoint_mask": "0x0" 00:10:25.114 }, 00:10:25.114 "bdev_nvme": { 00:10:25.114 "mask": "0x4000", 00:10:25.114 "tpoint_mask": "0x0" 00:10:25.114 }, 00:10:25.114 "sock": { 00:10:25.114 "mask": "0x8000", 00:10:25.114 "tpoint_mask": "0x0" 00:10:25.114 } 00:10:25.114 }' 00:10:25.114 07:20:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:10:25.114 07:20:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:10:25.114 07:20:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:10:25.114 07:20:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:10:25.114 07:20:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:10:25.114 07:20:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:10:25.114 07:20:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:10:25.114 07:20:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:10:25.114 07:20:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:10:25.373 07:20:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:10:25.373 00:10:25.373 real 0m0.263s 00:10:25.373 user 0m0.226s 00:10:25.373 sys 0m0.028s 00:10:25.373 07:20:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:25.373 ************************************ 00:10:25.373 END TEST rpc_trace_cmd_test 00:10:25.373 ************************************ 00:10:25.373 07:20:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.373 07:20:59 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:10:25.373 07:20:59 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:10:25.373 07:20:59 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:10:25.373 07:20:59 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:25.373 07:20:59 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:25.373 07:20:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.373 ************************************ 00:10:25.373 START TEST rpc_daemon_integrity 00:10:25.373 ************************************ 00:10:25.373 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:10:25.373 07:20:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:25.373 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.373 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:25.373 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.373 07:20:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:25.373 07:20:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:25.373 07:20:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:10:25.373 07:20:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:25.373 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.373 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:25.373 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.373 07:20:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:10:25.373 07:20:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:25.373 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.373 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:25.373 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.373 07:20:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:25.373 { 00:10:25.373 "name": "Malloc2", 00:10:25.373 "aliases": [ 00:10:25.373 "e179b3cd-3431-401e-9c81-286efbb29f22" 00:10:25.373 ], 00:10:25.373 "product_name": "Malloc disk", 00:10:25.373 "block_size": 512, 00:10:25.373 "num_blocks": 16384, 00:10:25.373 "uuid": "e179b3cd-3431-401e-9c81-286efbb29f22", 00:10:25.373 "assigned_rate_limits": { 00:10:25.373 "rw_ios_per_sec": 0, 00:10:25.373 "rw_mbytes_per_sec": 0, 00:10:25.373 "r_mbytes_per_sec": 0, 00:10:25.373 "w_mbytes_per_sec": 0 00:10:25.373 }, 00:10:25.373 "claimed": false, 00:10:25.373 "zoned": false, 00:10:25.373 "supported_io_types": { 00:10:25.373 "read": true, 00:10:25.373 "write": true, 00:10:25.373 "unmap": true, 00:10:25.374 "write_zeroes": true, 00:10:25.374 "flush": true, 00:10:25.374 "reset": true, 00:10:25.374 "compare": false, 00:10:25.374 "compare_and_write": false, 00:10:25.374 "abort": true, 00:10:25.374 "nvme_admin": false, 00:10:25.374 "nvme_io": false 00:10:25.374 }, 00:10:25.374 "memory_domains": [ 00:10:25.374 { 00:10:25.374 "dma_device_id": "system", 00:10:25.374 "dma_device_type": 1 00:10:25.374 }, 00:10:25.374 { 00:10:25.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.374 "dma_device_type": 2 00:10:25.374 } 00:10:25.374 ], 00:10:25.374 "driver_specific": {} 00:10:25.374 } 00:10:25.374 ]' 00:10:25.374 07:20:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:25.374 07:20:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:25.374 07:20:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:10:25.374 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.374 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:25.374 [2024-07-12 07:20:59.245361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:10:25.374 [2024-07-12 07:20:59.245698] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.374 [2024-07-12 07:20:59.245849] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:25.374 [2024-07-12 07:20:59.245960] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.374 [2024-07-12 07:20:59.248806] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.374 [2024-07-12 07:20:59.248951] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:25.374 Passthru0 00:10:25.374 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:10:25.374 07:20:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:25.374 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.374 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:25.632 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.632 07:20:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:25.632 { 00:10:25.632 "name": "Malloc2", 00:10:25.632 "aliases": [ 00:10:25.632 "e179b3cd-3431-401e-9c81-286efbb29f22" 00:10:25.632 ], 00:10:25.632 "product_name": "Malloc disk", 00:10:25.632 "block_size": 512, 00:10:25.632 "num_blocks": 16384, 00:10:25.632 "uuid": "e179b3cd-3431-401e-9c81-286efbb29f22", 00:10:25.632 "assigned_rate_limits": { 00:10:25.632 "rw_ios_per_sec": 0, 00:10:25.632 "rw_mbytes_per_sec": 0, 00:10:25.632 "r_mbytes_per_sec": 0, 00:10:25.632 "w_mbytes_per_sec": 0 00:10:25.632 }, 00:10:25.632 "claimed": true, 00:10:25.632 "claim_type": "exclusive_write", 00:10:25.632 "zoned": false, 00:10:25.632 "supported_io_types": { 00:10:25.632 "read": true, 00:10:25.632 "write": true, 00:10:25.633 "unmap": true, 00:10:25.633 "write_zeroes": true, 00:10:25.633 "flush": true, 00:10:25.633 "reset": true, 00:10:25.633 "compare": false, 00:10:25.633 "compare_and_write": false, 00:10:25.633 "abort": true, 00:10:25.633 "nvme_admin": false, 00:10:25.633 "nvme_io": false 00:10:25.633 }, 00:10:25.633 "memory_domains": [ 00:10:25.633 { 00:10:25.633 "dma_device_id": "system", 00:10:25.633 "dma_device_type": 1 00:10:25.633 }, 00:10:25.633 { 00:10:25.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.633 "dma_device_type": 2 00:10:25.633 } 00:10:25.633 ], 00:10:25.633 "driver_specific": {} 00:10:25.633 }, 00:10:25.633 { 00:10:25.633 "name": "Passthru0", 00:10:25.633 "aliases": [ 00:10:25.633 "c0ea1c17-6e6a-5ca2-b413-555f9e77506e" 00:10:25.633 ], 00:10:25.633 "product_name": "passthru", 00:10:25.633 "block_size": 512, 00:10:25.633 "num_blocks": 16384, 00:10:25.633 "uuid": "c0ea1c17-6e6a-5ca2-b413-555f9e77506e", 00:10:25.633 "assigned_rate_limits": { 00:10:25.633 "rw_ios_per_sec": 0, 00:10:25.633 "rw_mbytes_per_sec": 0, 00:10:25.633 "r_mbytes_per_sec": 0, 00:10:25.633 "w_mbytes_per_sec": 0 00:10:25.633 }, 00:10:25.633 "claimed": false, 00:10:25.633 "zoned": false, 00:10:25.633 "supported_io_types": { 00:10:25.633 "read": true, 00:10:25.633 "write": true, 00:10:25.633 "unmap": true, 00:10:25.633 "write_zeroes": true, 00:10:25.633 "flush": true, 00:10:25.633 "reset": true, 00:10:25.633 "compare": false, 00:10:25.633 "compare_and_write": false, 00:10:25.633 "abort": true, 00:10:25.633 "nvme_admin": false, 00:10:25.633 "nvme_io": false 00:10:25.633 }, 00:10:25.633 "memory_domains": [ 00:10:25.633 { 00:10:25.633 "dma_device_id": "system", 00:10:25.633 "dma_device_type": 1 00:10:25.633 }, 00:10:25.633 { 00:10:25.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.633 "dma_device_type": 2 00:10:25.633 } 00:10:25.633 ], 00:10:25.633 "driver_specific": { 00:10:25.633 "passthru": { 00:10:25.633 "name": "Passthru0", 00:10:25.633 "base_bdev_name": "Malloc2" 00:10:25.633 } 00:10:25.633 } 00:10:25.633 } 00:10:25.633 ]' 00:10:25.633 07:20:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:25.633 07:20:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:25.633 07:20:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:25.633 07:20:59 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.633 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:25.633 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.633 07:20:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:10:25.633 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.633 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:25.633 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.633 07:20:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:25.633 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.633 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:25.633 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.633 07:20:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:25.633 07:20:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:25.633 07:20:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:25.633 00:10:25.633 real 0m0.315s 00:10:25.633 user 0m0.202s 00:10:25.633 sys 0m0.049s 00:10:25.633 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:25.633 07:20:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:25.633 ************************************ 00:10:25.633 END TEST rpc_daemon_integrity 00:10:25.633 ************************************ 00:10:25.633 07:20:59 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:10:25.633 07:20:59 rpc -- rpc/rpc.sh@84 -- # killprocess 123211 00:10:25.633 07:20:59 rpc -- common/autotest_common.sh@946 -- # '[' -z 123211 ']' 00:10:25.633 07:20:59 rpc -- common/autotest_common.sh@950 -- # kill -0 123211 00:10:25.633 07:20:59 rpc -- common/autotest_common.sh@951 -- # uname 00:10:25.633 07:20:59 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:25.633 07:20:59 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 123211 00:10:25.633 07:20:59 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:25.633 killing process with pid 123211 00:10:25.633 07:20:59 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:25.633 07:20:59 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 123211' 00:10:25.633 07:20:59 rpc -- common/autotest_common.sh@965 -- # kill 123211 00:10:25.633 07:20:59 rpc -- common/autotest_common.sh@970 -- # wait 123211 00:10:26.568 00:10:26.568 real 0m3.146s 00:10:26.568 user 0m3.757s 00:10:26.568 sys 0m0.955s 00:10:26.568 07:21:00 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:26.568 07:21:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.568 ************************************ 00:10:26.568 END TEST rpc 00:10:26.568 ************************************ 00:10:26.568 07:21:00 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:26.568 07:21:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:26.568 07:21:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:26.568 07:21:00 -- common/autotest_common.sh@10 -- # set +x 00:10:26.568 ************************************ 00:10:26.568 START TEST skip_rpc 00:10:26.568 ************************************ 
00:10:26.569 07:21:00 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:26.569 * Looking for test storage... 00:10:26.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:26.569 07:21:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:26.569 07:21:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:26.569 07:21:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:10:26.569 07:21:00 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:26.569 07:21:00 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:26.569 07:21:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.569 ************************************ 00:10:26.569 START TEST skip_rpc 00:10:26.569 ************************************ 00:10:26.569 07:21:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:10:26.569 07:21:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=123448 00:10:26.569 07:21:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:26.569 07:21:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:10:26.569 07:21:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:10:26.827 [2024-07-12 07:21:00.459852] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:10:26.827 [2024-07-12 07:21:00.460129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123448 ] 00:10:26.827 [2024-07-12 07:21:00.613459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.827 [2024-07-12 07:21:00.684526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:32.109 07:21:05 skip_rpc.skip_rpc 
-- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 123448 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 123448 ']' 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 123448 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 123448 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 123448' 00:10:32.109 killing process with pid 123448 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 123448 00:10:32.109 07:21:05 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 123448 00:10:32.368 00:10:32.368 real 0m5.719s 00:10:32.368 user 0m5.139s 00:10:32.368 sys 0m0.501s 00:10:32.368 07:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:32.368 07:21:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.368 ************************************ 00:10:32.368 END TEST skip_rpc 00:10:32.368 ************************************ 00:10:32.368 07:21:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:32.368 07:21:06 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:32.368 07:21:06 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:32.368 07:21:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.368 ************************************ 00:10:32.368 START TEST skip_rpc_with_json 00:10:32.368 ************************************ 00:10:32.368 07:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:10:32.368 07:21:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:32.368 07:21:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=123541 00:10:32.368 07:21:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:32.368 07:21:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 123541 00:10:32.368 07:21:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:32.368 07:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 123541 ']' 00:10:32.368 07:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.368 07:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:32.368 07:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:32.369 07:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:32.369 07:21:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:32.369 [2024-07-12 07:21:06.249797] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:10:32.369 [2024-07-12 07:21:06.250066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123541 ] 00:10:32.628 [2024-07-12 07:21:06.406456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.628 [2024-07-12 07:21:06.488618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.565 07:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:33.566 07:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:10:33.566 07:21:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:33.566 07:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.566 07:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:33.566 [2024-07-12 07:21:07.214580] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:33.566 request: 00:10:33.566 { 00:10:33.566 "trtype": "tcp", 00:10:33.566 "method": "nvmf_get_transports", 00:10:33.566 "req_id": 1 00:10:33.566 } 00:10:33.566 Got JSON-RPC error response 00:10:33.566 response: 00:10:33.566 { 00:10:33.566 "code": -19, 00:10:33.566 "message": "No such device" 00:10:33.566 } 00:10:33.566 07:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:33.566 07:21:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:33.566 07:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.566 07:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:33.566 [2024-07-12 07:21:07.230731] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.566 07:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.566 07:21:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:33.566 07:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.566 07:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:33.566 07:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.566 07:21:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:33.566 { 00:10:33.566 "subsystems": [ 00:10:33.566 { 00:10:33.566 "subsystem": "scheduler", 00:10:33.566 "config": [ 00:10:33.566 { 00:10:33.566 "method": "framework_set_scheduler", 00:10:33.566 "params": { 00:10:33.566 "name": "static" 00:10:33.566 } 00:10:33.566 } 00:10:33.566 ] 00:10:33.566 }, 00:10:33.566 { 00:10:33.566 "subsystem": "vmd", 00:10:33.566 "config": [] 00:10:33.566 }, 00:10:33.566 { 00:10:33.566 "subsystem": "sock", 00:10:33.566 "config": [ 00:10:33.566 { 00:10:33.566 "method": "sock_set_default_impl", 00:10:33.566 "params": { 00:10:33.566 "impl_name": "posix" 00:10:33.566 } 00:10:33.566 }, 00:10:33.566 { 
00:10:33.566 "method": "sock_impl_set_options", 00:10:33.566 "params": { 00:10:33.566 "impl_name": "ssl", 00:10:33.566 "recv_buf_size": 4096, 00:10:33.566 "send_buf_size": 4096, 00:10:33.566 "enable_recv_pipe": true, 00:10:33.566 "enable_quickack": false, 00:10:33.566 "enable_placement_id": 0, 00:10:33.566 "enable_zerocopy_send_server": true, 00:10:33.566 "enable_zerocopy_send_client": false, 00:10:33.566 "zerocopy_threshold": 0, 00:10:33.566 "tls_version": 0, 00:10:33.566 "enable_ktls": false 00:10:33.566 } 00:10:33.566 }, 00:10:33.566 { 00:10:33.566 "method": "sock_impl_set_options", 00:10:33.566 "params": { 00:10:33.566 "impl_name": "posix", 00:10:33.566 "recv_buf_size": 2097152, 00:10:33.566 "send_buf_size": 2097152, 00:10:33.566 "enable_recv_pipe": true, 00:10:33.566 "enable_quickack": false, 00:10:33.566 "enable_placement_id": 0, 00:10:33.566 "enable_zerocopy_send_server": true, 00:10:33.566 "enable_zerocopy_send_client": false, 00:10:33.566 "zerocopy_threshold": 0, 00:10:33.566 "tls_version": 0, 00:10:33.566 "enable_ktls": false 00:10:33.566 } 00:10:33.566 } 00:10:33.566 ] 00:10:33.566 }, 00:10:33.566 { 00:10:33.566 "subsystem": "iobuf", 00:10:33.566 "config": [ 00:10:33.566 { 00:10:33.566 "method": "iobuf_set_options", 00:10:33.566 "params": { 00:10:33.566 "small_pool_count": 8192, 00:10:33.566 "large_pool_count": 1024, 00:10:33.566 "small_bufsize": 8192, 00:10:33.566 "large_bufsize": 135168 00:10:33.566 } 00:10:33.566 } 00:10:33.566 ] 00:10:33.566 }, 00:10:33.566 { 00:10:33.566 "subsystem": "keyring", 00:10:33.566 "config": [] 00:10:33.566 }, 00:10:33.566 { 00:10:33.566 "subsystem": "accel", 00:10:33.566 "config": [ 00:10:33.566 { 00:10:33.566 "method": "accel_set_options", 00:10:33.566 "params": { 00:10:33.566 "small_cache_size": 128, 00:10:33.566 "large_cache_size": 16, 00:10:33.566 "task_count": 2048, 00:10:33.566 "sequence_count": 2048, 00:10:33.566 "buf_count": 2048 00:10:33.566 } 00:10:33.566 } 00:10:33.566 ] 00:10:33.566 }, 00:10:33.566 { 00:10:33.566 "subsystem": "bdev", 00:10:33.566 "config": [ 00:10:33.566 { 00:10:33.566 "method": "bdev_set_options", 00:10:33.566 "params": { 00:10:33.566 "bdev_io_pool_size": 65535, 00:10:33.566 "bdev_io_cache_size": 256, 00:10:33.566 "bdev_auto_examine": true, 00:10:33.566 "iobuf_small_cache_size": 128, 00:10:33.566 "iobuf_large_cache_size": 16 00:10:33.566 } 00:10:33.566 }, 00:10:33.566 { 00:10:33.566 "method": "bdev_raid_set_options", 00:10:33.566 "params": { 00:10:33.566 "process_window_size_kb": 1024 00:10:33.566 } 00:10:33.566 }, 00:10:33.566 { 00:10:33.566 "method": "bdev_nvme_set_options", 00:10:33.566 "params": { 00:10:33.566 "action_on_timeout": "none", 00:10:33.566 "timeout_us": 0, 00:10:33.566 "timeout_admin_us": 0, 00:10:33.566 "keep_alive_timeout_ms": 10000, 00:10:33.566 "arbitration_burst": 0, 00:10:33.566 "low_priority_weight": 0, 00:10:33.566 "medium_priority_weight": 0, 00:10:33.566 "high_priority_weight": 0, 00:10:33.566 "nvme_adminq_poll_period_us": 10000, 00:10:33.566 "nvme_ioq_poll_period_us": 0, 00:10:33.566 "io_queue_requests": 0, 00:10:33.566 "delay_cmd_submit": true, 00:10:33.566 "transport_retry_count": 4, 00:10:33.566 "bdev_retry_count": 3, 00:10:33.566 "transport_ack_timeout": 0, 00:10:33.566 "ctrlr_loss_timeout_sec": 0, 00:10:33.566 "reconnect_delay_sec": 0, 00:10:33.566 "fast_io_fail_timeout_sec": 0, 00:10:33.566 "disable_auto_failback": false, 00:10:33.566 "generate_uuids": false, 00:10:33.566 "transport_tos": 0, 00:10:33.566 "nvme_error_stat": false, 00:10:33.566 "rdma_srq_size": 0, 00:10:33.566 
"io_path_stat": false, 00:10:33.566 "allow_accel_sequence": false, 00:10:33.566 "rdma_max_cq_size": 0, 00:10:33.566 "rdma_cm_event_timeout_ms": 0, 00:10:33.566 "dhchap_digests": [ 00:10:33.566 "sha256", 00:10:33.566 "sha384", 00:10:33.566 "sha512" 00:10:33.566 ], 00:10:33.566 "dhchap_dhgroups": [ 00:10:33.566 "null", 00:10:33.566 "ffdhe2048", 00:10:33.566 "ffdhe3072", 00:10:33.566 "ffdhe4096", 00:10:33.566 "ffdhe6144", 00:10:33.566 "ffdhe8192" 00:10:33.566 ] 00:10:33.566 } 00:10:33.566 }, 00:10:33.566 { 00:10:33.566 "method": "bdev_nvme_set_hotplug", 00:10:33.566 "params": { 00:10:33.566 "period_us": 100000, 00:10:33.566 "enable": false 00:10:33.566 } 00:10:33.566 }, 00:10:33.566 { 00:10:33.566 "method": "bdev_iscsi_set_options", 00:10:33.566 "params": { 00:10:33.566 "timeout_sec": 30 00:10:33.566 } 00:10:33.566 }, 00:10:33.566 { 00:10:33.566 "method": "bdev_wait_for_examine" 00:10:33.566 } 00:10:33.566 ] 00:10:33.566 }, 00:10:33.566 { 00:10:33.566 "subsystem": "nvmf", 00:10:33.566 "config": [ 00:10:33.566 { 00:10:33.566 "method": "nvmf_set_config", 00:10:33.566 "params": { 00:10:33.566 "discovery_filter": "match_any", 00:10:33.566 "admin_cmd_passthru": { 00:10:33.566 "identify_ctrlr": false 00:10:33.566 } 00:10:33.566 } 00:10:33.566 }, 00:10:33.566 { 00:10:33.566 "method": "nvmf_set_max_subsystems", 00:10:33.566 "params": { 00:10:33.566 "max_subsystems": 1024 00:10:33.566 } 00:10:33.566 }, 00:10:33.566 { 00:10:33.566 "method": "nvmf_set_crdt", 00:10:33.566 "params": { 00:10:33.566 "crdt1": 0, 00:10:33.566 "crdt2": 0, 00:10:33.566 "crdt3": 0 00:10:33.566 } 00:10:33.566 }, 00:10:33.566 { 00:10:33.566 "method": "nvmf_create_transport", 00:10:33.566 "params": { 00:10:33.566 "trtype": "TCP", 00:10:33.566 "max_queue_depth": 128, 00:10:33.566 "max_io_qpairs_per_ctrlr": 127, 00:10:33.566 "in_capsule_data_size": 4096, 00:10:33.566 "max_io_size": 131072, 00:10:33.566 "io_unit_size": 131072, 00:10:33.566 "max_aq_depth": 128, 00:10:33.566 "num_shared_buffers": 511, 00:10:33.566 "buf_cache_size": 4294967295, 00:10:33.566 "dif_insert_or_strip": false, 00:10:33.566 "zcopy": false, 00:10:33.566 "c2h_success": true, 00:10:33.566 "sock_priority": 0, 00:10:33.566 "abort_timeout_sec": 1, 00:10:33.566 "ack_timeout": 0, 00:10:33.566 "data_wr_pool_size": 0 00:10:33.566 } 00:10:33.566 } 00:10:33.566 ] 00:10:33.566 }, 00:10:33.566 { 00:10:33.566 "subsystem": "nbd", 00:10:33.566 "config": [] 00:10:33.566 }, 00:10:33.566 { 00:10:33.566 "subsystem": "vhost_blk", 00:10:33.566 "config": [] 00:10:33.566 }, 00:10:33.566 { 00:10:33.566 "subsystem": "scsi", 00:10:33.566 "config": null 00:10:33.566 }, 00:10:33.566 { 00:10:33.566 "subsystem": "iscsi", 00:10:33.566 "config": [ 00:10:33.566 { 00:10:33.566 "method": "iscsi_set_options", 00:10:33.566 "params": { 00:10:33.566 "node_base": "iqn.2016-06.io.spdk", 00:10:33.566 "max_sessions": 128, 00:10:33.566 "max_connections_per_session": 2, 00:10:33.567 "max_queue_depth": 64, 00:10:33.567 "default_time2wait": 2, 00:10:33.567 "default_time2retain": 20, 00:10:33.567 "first_burst_length": 8192, 00:10:33.567 "immediate_data": true, 00:10:33.567 "allow_duplicated_isid": false, 00:10:33.567 "error_recovery_level": 0, 00:10:33.567 "nop_timeout": 60, 00:10:33.567 "nop_in_interval": 30, 00:10:33.567 "disable_chap": false, 00:10:33.567 "require_chap": false, 00:10:33.567 "mutual_chap": false, 00:10:33.567 "chap_group": 0, 00:10:33.567 "max_large_datain_per_connection": 64, 00:10:33.567 "max_r2t_per_connection": 4, 00:10:33.567 "pdu_pool_size": 36864, 00:10:33.567 
"immediate_data_pool_size": 16384, 00:10:33.567 "data_out_pool_size": 2048 00:10:33.567 } 00:10:33.567 } 00:10:33.567 ] 00:10:33.567 }, 00:10:33.567 { 00:10:33.567 "subsystem": "vhost_scsi", 00:10:33.567 "config": [] 00:10:33.567 } 00:10:33.567 ] 00:10:33.567 } 00:10:33.567 07:21:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:33.567 07:21:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 123541 00:10:33.567 07:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 123541 ']' 00:10:33.567 07:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 123541 00:10:33.567 07:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:10:33.567 07:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:33.567 07:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 123541 00:10:33.567 07:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:33.567 07:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:33.567 07:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 123541' 00:10:33.567 killing process with pid 123541 00:10:33.567 07:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 123541 00:10:33.567 07:21:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 123541 00:10:34.504 07:21:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=123583 00:10:34.504 07:21:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:34.504 07:21:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:10:39.776 07:21:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 123583 00:10:39.776 07:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 123583 ']' 00:10:39.776 07:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 123583 00:10:39.776 07:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:10:39.776 07:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:39.776 07:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 123583 00:10:39.776 killing process with pid 123583 00:10:39.776 07:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:39.776 07:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:39.776 07:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 123583' 00:10:39.776 07:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 123583 00:10:39.776 07:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 123583 00:10:40.035 07:21:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:40.035 07:21:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:40.035 00:10:40.035 real 0m7.621s 00:10:40.035 user 0m6.943s 
00:10:40.035 sys 0m1.071s 00:10:40.035 07:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:40.035 07:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:40.035 ************************************ 00:10:40.035 END TEST skip_rpc_with_json 00:10:40.035 ************************************ 00:10:40.035 07:21:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:10:40.035 07:21:13 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:40.035 07:21:13 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:40.035 07:21:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.035 ************************************ 00:10:40.035 START TEST skip_rpc_with_delay 00:10:40.035 ************************************ 00:10:40.035 07:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:10:40.035 07:21:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:40.035 07:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:10:40.035 07:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:40.035 07:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:40.035 07:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:40.035 07:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:40.035 07:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:40.035 07:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:40.035 07:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:40.035 07:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:40.035 07:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:40.035 07:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:10:40.294 [2024-07-12 07:21:13.925423] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:10:40.294 [2024-07-12 07:21:13.926309] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:10:40.294 07:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:10:40.294 07:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:40.294 07:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:40.294 07:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:40.294 00:10:40.294 real 0m0.144s 00:10:40.294 user 0m0.079s 00:10:40.294 sys 0m0.063s 00:10:40.294 07:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:40.294 07:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:10:40.294 ************************************ 00:10:40.294 END TEST skip_rpc_with_delay 00:10:40.294 ************************************ 00:10:40.294 07:21:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:10:40.294 07:21:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:10:40.294 07:21:14 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:10:40.294 07:21:14 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:40.294 07:21:14 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:40.294 07:21:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.294 ************************************ 00:10:40.294 START TEST exit_on_failed_rpc_init 00:10:40.294 ************************************ 00:10:40.294 07:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:10:40.294 07:21:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=123705 00:10:40.294 07:21:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 123705 00:10:40.294 07:21:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:40.294 07:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 123705 ']' 00:10:40.294 07:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.294 07:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:40.294 07:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.294 07:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:40.294 07:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:40.294 [2024-07-12 07:21:14.130645] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:10:40.294 [2024-07-12 07:21:14.131311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123705 ] 00:10:40.552 [2024-07-12 07:21:14.275070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.552 [2024-07-12 07:21:14.355068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.487 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:41.487 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:10:41.487 07:21:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:41.487 07:21:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:41.487 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:10:41.487 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:41.487 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:41.487 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:41.487 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:41.487 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:41.487 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:41.487 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:41.487 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:41.487 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:41.487 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:41.487 [2024-07-12 07:21:15.176207] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:10:41.488 [2024-07-12 07:21:15.176399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123728 ] 00:10:41.488 [2024-07-12 07:21:15.325948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.746 [2024-07-12 07:21:15.418431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.746 [2024-07-12 07:21:15.418618] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:10:41.746 [2024-07-12 07:21:15.418673] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:41.746 [2024-07-12 07:21:15.418724] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:42.053 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:10:42.053 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:42.053 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:10:42.053 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:10:42.053 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:10:42.053 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:42.053 07:21:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:42.053 07:21:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 123705 00:10:42.053 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 123705 ']' 00:10:42.053 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 123705 00:10:42.053 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:10:42.053 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:42.053 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 123705 00:10:42.053 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:42.053 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:42.053 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 123705' 00:10:42.053 killing process with pid 123705 00:10:42.053 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 123705 00:10:42.053 07:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 123705 00:10:42.644 00:10:42.644 real 0m2.268s 00:10:42.644 user 0m2.472s 00:10:42.644 sys 0m0.674s 00:10:42.644 07:21:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:42.644 07:21:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:42.644 ************************************ 00:10:42.644 END TEST exit_on_failed_rpc_init 00:10:42.644 ************************************ 00:10:42.644 07:21:16 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:42.644 00:10:42.644 real 0m16.141s 00:10:42.644 user 0m14.803s 00:10:42.644 sys 0m2.536s 00:10:42.644 07:21:16 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:42.644 07:21:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.644 ************************************ 00:10:42.644 END TEST skip_rpc 00:10:42.644 ************************************ 00:10:42.644 07:21:16 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:42.644 07:21:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:42.644 07:21:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:42.644 07:21:16 -- common/autotest_common.sh@10 -- # set +x 
00:10:42.644 ************************************ 00:10:42.644 START TEST rpc_client 00:10:42.644 ************************************ 00:10:42.644 07:21:16 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:42.903 * Looking for test storage... 00:10:42.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:10:42.903 07:21:16 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:10:42.903 OK 00:10:42.903 07:21:16 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:42.903 00:10:42.903 real 0m0.170s 00:10:42.903 user 0m0.065s 00:10:42.903 sys 0m0.119s 00:10:42.903 07:21:16 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:42.903 07:21:16 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:42.903 ************************************ 00:10:42.903 END TEST rpc_client 00:10:42.903 ************************************ 00:10:42.903 07:21:16 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:42.903 07:21:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:42.903 07:21:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:42.903 07:21:16 -- common/autotest_common.sh@10 -- # set +x 00:10:42.903 ************************************ 00:10:42.903 START TEST json_config 00:10:42.903 ************************************ 00:10:42.903 07:21:16 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:42.903 07:21:16 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b33d595d-669e-4881-a8e2-cda56cb90807 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b33d595d-669e-4881-a8e2-cda56cb90807 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:42.903 07:21:16 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.903 07:21:16 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.903 07:21:16 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.903 07:21:16 json_config -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:42.903 07:21:16 json_config -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:42.903 07:21:16 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:42.903 07:21:16 json_config -- paths/export.sh@5 -- # export PATH 00:10:42.903 07:21:16 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@47 -- # : 0 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:42.903 07:21:16 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.904 07:21:16 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.904 07:21:16 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.904 07:21:16 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:42.904 07:21:16 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:42.904 07:21:16 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:42.904 07:21:16 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:42.904 07:21:16 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:42.904 07:21:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:42.904 07:21:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:42.904 07:21:16 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:42.904 07:21:16 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:10:42.904 07:21:16 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:10:42.904 07:21:16 
json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:10:42.904 07:21:16 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:10:42.904 07:21:16 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:10:42.904 07:21:16 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:10:42.904 07:21:16 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:10:42.904 07:21:16 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:10:42.904 07:21:16 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:10:42.904 07:21:16 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:42.904 INFO: JSON configuration test init 00:10:42.904 07:21:16 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:10:42.904 07:21:16 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:10:42.904 07:21:16 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:10:42.904 07:21:16 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:42.904 07:21:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:42.904 07:21:16 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:10:42.904 07:21:16 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:42.904 07:21:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:42.904 07:21:16 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:10:42.904 07:21:16 json_config -- json_config/common.sh@9 -- # local app=target 00:10:42.904 07:21:16 json_config -- json_config/common.sh@10 -- # shift 00:10:42.904 07:21:16 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:42.904 07:21:16 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:42.904 07:21:16 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:42.904 07:21:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:42.904 07:21:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:42.904 07:21:16 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=123874 00:10:42.904 07:21:16 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:42.904 Waiting for target to run... 00:10:42.904 07:21:16 json_config -- json_config/common.sh@25 -- # waitforlisten 123874 /var/tmp/spdk_tgt.sock 00:10:42.904 07:21:16 json_config -- common/autotest_common.sh@827 -- # '[' -z 123874 ']' 00:10:42.904 07:21:16 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:42.904 07:21:16 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:42.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:42.904 07:21:16 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
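The app_params and app_socket tables declared above drive a generic launch helper: json_config_test_start_app starts spdk_tgt with the target's parameters and then blocks until the RPC socket answers. A minimal sketch of that pattern — the polling loop is an assumption about what waitforlisten does, not a copy of autotest_common.sh:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  tgt_pid=$!   # 123874 in the run above
  # Poll until the socket accepts a trivial RPC, then proceed.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
        rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done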
00:10:42.904 07:21:16 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:42.904 07:21:16 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:10:42.904 07:21:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:43.163 [2024-07-12 07:21:16.856049] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:10:43.163 [2024-07-12 07:21:16.856315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123874 ] 00:10:43.731 [2024-07-12 07:21:17.439819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.731 [2024-07-12 07:21:17.493721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.989 07:21:17 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:43.989 07:21:17 json_config -- common/autotest_common.sh@860 -- # return 0 00:10:43.989 00:10:43.989 07:21:17 json_config -- json_config/common.sh@26 -- # echo '' 00:10:43.989 07:21:17 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:10:43.989 07:21:17 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:10:43.989 07:21:17 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:43.989 07:21:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:43.989 07:21:17 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:10:43.989 07:21:17 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:10:43.989 07:21:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:43.989 07:21:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:43.989 07:21:17 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:10:43.989 07:21:17 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:10:43.989 07:21:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:10:44.554 07:21:18 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:10:44.554 07:21:18 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:10:44.554 07:21:18 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:44.554 07:21:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:44.554 07:21:18 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:10:44.554 07:21:18 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:10:44.554 07:21:18 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:10:44.554 07:21:18 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:10:44.554 07:21:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:10:44.554 07:21:18 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:10:44.811 07:21:18 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:10:44.811 07:21:18 json_config -- json_config/json_config.sh@48 -- # local get_types 
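Every tgt_rpc line in this trace expands to the same rpc.py invocation against the target socket (json_config/common.sh@57), so the notification-type check above reduces to two plain commands:

  # tgt_rpc is a thin wrapper around rpc.py pointed at the target socket:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
      notify_get_types | jq -r '.[]'
  # expected: bdev_register and bdev_unregister, as asserted above

  # Individual events are then drained starting from id 0 and flattened with jq:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
      notify_get_notifications -i 0 | jq -r '.[] | "\(.type):\(.ctx):\(.id)"'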
00:10:44.811 07:21:18 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:10:44.811 07:21:18 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:10:44.811 07:21:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:44.811 07:21:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:44.811 07:21:18 json_config -- json_config/json_config.sh@55 -- # return 0 00:10:44.811 07:21:18 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:10:44.811 07:21:18 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:10:44.811 07:21:18 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:10:44.811 07:21:18 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:44.811 07:21:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:44.811 07:21:18 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:10:44.811 07:21:18 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:10:44.811 07:21:18 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:10:44.811 07:21:18 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:10:44.811 07:21:18 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:10:44.811 07:21:18 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:44.811 07:21:18 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:44.811 07:21:18 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:10:44.811 07:21:18 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:10:44.811 07:21:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:10:45.068 07:21:18 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:10:45.068 07:21:18 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:45.068 07:21:18 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:45.068 07:21:18 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:10:45.068 07:21:18 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:10:45.068 07:21:18 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:10:45.068 07:21:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:10:45.325 Nvme0n1p0 Nvme0n1p1 00:10:45.325 07:21:18 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:10:45.325 07:21:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:10:45.584 [2024-07-12 07:21:19.253140] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:45.584 [2024-07-12 07:21:19.254001] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:45.584 00:10:45.584 07:21:19 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:10:45.584 07:21:19 
json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:10:45.584 Malloc3 00:10:45.584 07:21:19 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:10:45.584 07:21:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:10:45.842 [2024-07-12 07:21:19.621263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:45.842 [2024-07-12 07:21:19.621687] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.842 [2024-07-12 07:21:19.621850] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:10:45.842 [2024-07-12 07:21:19.621966] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.842 [2024-07-12 07:21:19.625020] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.842 [2024-07-12 07:21:19.625176] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:10:45.842 PTBdevFromMalloc3 00:10:45.842 07:21:19 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:10:45.842 07:21:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:10:46.100 Null0 00:10:46.100 07:21:19 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:10:46.100 07:21:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:10:46.358 Malloc0 00:10:46.358 07:21:20 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:10:46.358 07:21:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:10:46.358 Malloc1 00:10:46.358 07:21:20 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:10:46.358 07:21:20 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:10:46.923 102400+0 records in 00:10:46.923 102400+0 records out 00:10:46.923 104857600 bytes (105 MB, 100 MiB) copied, 0.355811 s, 295 MB/s 00:10:46.923 07:21:20 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:10:46.923 07:21:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:10:46.923 aio_disk 00:10:46.923 07:21:20 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:10:46.923 07:21:20 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:10:46.923 07:21:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:10:47.181 dd7f5fd0-2e88-4821-b78b-1e25481bee71 00:10:47.181 07:21:20 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:10:47.181 07:21:20 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:10:47.181 07:21:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:10:47.439 07:21:21 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:10:47.439 07:21:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:10:47.698 07:21:21 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:10:47.698 07:21:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:10:47.698 07:21:21 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:47.698 07:21:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:48.264 07:21:21 json_config -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:10:48.264 07:21:21 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:10:48.264 07:21:21 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:f3fb3a3a-2591-432f-a62e-d6860ae1127b bdev_register:e3e874a1-c438-4cf3-aa44-f40b16826ee7 bdev_register:abb60dff-3b61-4b4b-a1f7-1828fbb4617d bdev_register:b2bcc44d-69ac-4904-b77f-654714ecfa12 00:10:48.264 07:21:21 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:10:48.264 07:21:21 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:10:48.264 07:21:21 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:10:48.264 07:21:21 json_config -- json_config/json_config.sh@71 -- # sort 00:10:48.264 07:21:21 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:f3fb3a3a-2591-432f-a62e-d6860ae1127b bdev_register:e3e874a1-c438-4cf3-aa44-f40b16826ee7 bdev_register:abb60dff-3b61-4b4b-a1f7-1828fbb4617d bdev_register:b2bcc44d-69ac-4904-b77f-654714ecfa12 00:10:48.264 07:21:21 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:10:48.265 
07:21:21 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:10:48.265 07:21:21 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:10:48.265 07:21:21 json_config -- json_config/json_config.sh@72 -- # sort 00:10:48.265 07:21:21 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:48.265 07:21:21 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:48.265 07:21:21 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:10:48.265 07:21:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:10:48.265 07:21:21 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # IFS=: 
00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:f3fb3a3a-2591-432f-a62e-d6860ae1127b 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:e3e874a1-c438-4cf3-aa44-f40b16826ee7 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:abb60dff-3b61-4b4b-a1f7-1828fbb4617d 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:b2bcc44d-69ac-4904-b77f-654714ecfa12 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:abb60dff-3b61-4b4b-a1f7-1828fbb4617d bdev_register:aio_disk bdev_register:b2bcc44d-69ac-4904-b77f-654714ecfa12 bdev_register:e3e874a1-c438-4cf3-aa44-f40b16826ee7 bdev_register:f3fb3a3a-2591-432f-a62e-d6860ae1127b != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\b\b\6\0\d\f\f\-\3\b\6\1\-\4\b\4\b\-\a\1\f\7\-\1\8\2\8\f\b\b\4\6\1\7\d\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\b\2\b\c\c\4\4\d\-\6\9\a\c\-\4\9\0\4\-\b\7\7\f\-\6\5\4\7\1\4\e\c\f\a\1\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\3\e\8\7\4\a\1\-\c\4\3\8\-\4\c\f\3\-\a\a\4\4\-\f\4\0\b\1\6\8\2\6\e\e\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\3\f\b\3\a\3\a\-\2\5\9\1\-\4\3\2\f\-\a\6\2\e\-\d\6\8\6\0\a\e\1\1\2\7\b ]] 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@86 -- # cat 00:10:48.265 07:21:22 
json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:abb60dff-3b61-4b4b-a1f7-1828fbb4617d bdev_register:aio_disk bdev_register:b2bcc44d-69ac-4904-b77f-654714ecfa12 bdev_register:e3e874a1-c438-4cf3-aa44-f40b16826ee7 bdev_register:f3fb3a3a-2591-432f-a62e-d6860ae1127b 00:10:48.265 Expected events matched: 00:10:48.265 bdev_register:Malloc0 00:10:48.265 bdev_register:Malloc0p0 00:10:48.265 bdev_register:Malloc0p1 00:10:48.265 bdev_register:Malloc0p2 00:10:48.265 bdev_register:Malloc1 00:10:48.265 bdev_register:Malloc3 00:10:48.265 bdev_register:Null0 00:10:48.265 bdev_register:Nvme0n1 00:10:48.265 bdev_register:Nvme0n1p0 00:10:48.265 bdev_register:Nvme0n1p1 00:10:48.265 bdev_register:PTBdevFromMalloc3 00:10:48.265 bdev_register:abb60dff-3b61-4b4b-a1f7-1828fbb4617d 00:10:48.265 bdev_register:aio_disk 00:10:48.265 bdev_register:b2bcc44d-69ac-4904-b77f-654714ecfa12 00:10:48.265 bdev_register:e3e874a1-c438-4cf3-aa44-f40b16826ee7 00:10:48.265 bdev_register:f3fb3a3a-2591-432f-a62e-d6860ae1127b 00:10:48.265 07:21:22 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:10:48.265 07:21:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:48.265 07:21:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:48.523 07:21:22 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:10:48.523 07:21:22 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:10:48.523 07:21:22 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:10:48.524 07:21:22 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:10:48.524 07:21:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:48.524 07:21:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:48.524 07:21:22 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:10:48.524 07:21:22 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:48.524 07:21:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:48.782 MallocBdevForConfigChangeCheck 00:10:48.782 07:21:22 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:10:48.782 07:21:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:48.782 07:21:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:48.782 07:21:22 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:10:48.782 07:21:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:49.039 INFO: shutting down applications... 00:10:49.039 07:21:22 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
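Stripped of the xtrace plumbing, the bdev topology that create_bdev_subsystem_config assembled above is this sequence of RPCs; each call appears verbatim in the trace, with rpc standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock:

  rpc bdev_split_create Nvme0n1 2                      # -> Nvme0n1p0 Nvme0n1p1
  rpc bdev_split_create Malloc0 3                      # registered before Malloc0 exists, hence the two NOTICEs above
  rpc bdev_malloc_create 8 4096 --name Malloc3
  rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
  rpc bdev_null_create Null0 32 512
  rpc bdev_malloc_create 32 512 --name Malloc0
  rpc bdev_malloc_create 16 4096 --name Malloc1
  dd if=/dev/zero of=/sample_aio bs=1024 count=102400  # backing file for the AIO bdev
  rpc bdev_aio_create /sample_aio aio_disk 1024
  rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
  rpc bdev_lvol_create -l lvs_test lvol0 32
  rpc bdev_lvol_create -l lvs_test -t lvol1 32         # thin-provisioned
  rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0
  rpc bdev_lvol_clone lvs_test/snapshot0 clone0

The four UUIDs in the expected-events list (f3fb3a3a-…, e3e874a1-…, abb60dff-…, b2bcc44d-…) are the bdev names returned by the last four lvol calls; the sorted expected and recorded lists are then compared with a single [[ … != … ]] test, which is why the trace pipes both sides through sort first.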
00:10:49.039 07:21:22 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:10:49.040 07:21:22 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:10:49.040 07:21:22 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:10:49.040 07:21:22 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:10:49.305 [2024-07-12 07:21:23.089183] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:10:49.567 Calling clear_vhost_scsi_subsystem 00:10:49.567 Calling clear_iscsi_subsystem 00:10:49.567 Calling clear_vhost_blk_subsystem 00:10:49.567 Calling clear_nbd_subsystem 00:10:49.567 Calling clear_nvmf_subsystem 00:10:49.567 Calling clear_bdev_subsystem 00:10:49.567 07:21:23 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:10:49.567 07:21:23 json_config -- json_config/json_config.sh@343 -- # count=100 00:10:49.567 07:21:23 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:10:49.567 07:21:23 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:49.567 07:21:23 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:10:49.567 07:21:23 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:10:49.826 07:21:23 json_config -- json_config/json_config.sh@345 -- # break 00:10:49.826 07:21:23 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:10:49.826 07:21:23 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:10:49.826 07:21:23 json_config -- json_config/common.sh@31 -- # local app=target 00:10:49.826 07:21:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:49.826 07:21:23 json_config -- json_config/common.sh@35 -- # [[ -n 123874 ]] 00:10:49.826 07:21:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 123874 00:10:49.826 07:21:23 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:49.826 07:21:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:49.826 07:21:23 json_config -- json_config/common.sh@41 -- # kill -0 123874 00:10:49.826 07:21:23 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:10:50.393 07:21:24 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:10:50.393 07:21:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:50.393 07:21:24 json_config -- json_config/common.sh@41 -- # kill -0 123874 00:10:50.393 07:21:24 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:50.393 07:21:24 json_config -- json_config/common.sh@43 -- # break 00:10:50.393 07:21:24 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:50.393 SPDK target shutdown done 00:10:50.393 07:21:24 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:50.393 INFO: relaunching applications... 00:10:50.393 07:21:24 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
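The shutdown that json_config_test_shutdown_app performs above is a SIGINT followed by a bounded liveness poll — the pattern is lifted directly from the kill -0 / sleep 0.5 trace, and 30 iterations at 0.5 s gives the target roughly 15 s to exit cleanly:

  kill -SIGINT "$tgt_pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$tgt_pid" 2>/dev/null || break   # kill -0 fails once the pid is gone
      sleep 0.5
  done
  echo 'SPDK target shutdown done'

The relaunch announced above then restarts the same binary, this time booting straight from the saved configuration via --json spdk_tgt_config.json instead of --wait-for-rpc.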
00:10:50.393 07:21:24 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:50.393 07:21:24 json_config -- json_config/common.sh@9 -- # local app=target 00:10:50.393 07:21:24 json_config -- json_config/common.sh@10 -- # shift 00:10:50.393 07:21:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:50.393 07:21:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:50.393 07:21:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:50.393 07:21:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:50.393 07:21:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:50.393 07:21:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=124118 00:10:50.393 Waiting for target to run... 00:10:50.394 07:21:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:50.394 07:21:24 json_config -- json_config/common.sh@25 -- # waitforlisten 124118 /var/tmp/spdk_tgt.sock 00:10:50.394 07:21:24 json_config -- common/autotest_common.sh@827 -- # '[' -z 124118 ']' 00:10:50.394 07:21:24 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:50.394 07:21:24 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:50.394 07:21:24 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:50.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:50.394 07:21:24 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:50.394 07:21:24 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:50.394 07:21:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:50.394 [2024-07-12 07:21:24.190329] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:10:50.394 [2024-07-12 07:21:24.190603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124118 ] 00:10:50.959 [2024-07-12 07:21:24.768917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.959 [2024-07-12 07:21:24.810997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.217 [2024-07-12 07:21:24.968558] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:51.217 [2024-07-12 07:21:24.968891] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:51.217 [2024-07-12 07:21:24.976504] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:51.217 [2024-07-12 07:21:24.976702] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:51.217 [2024-07-12 07:21:24.984538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:51.217 [2024-07-12 07:21:24.984695] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:51.217 [2024-07-12 07:21:24.984805] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:51.217 [2024-07-12 07:21:25.071060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:51.217 [2024-07-12 07:21:25.071320] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.217 [2024-07-12 07:21:25.071452] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:51.217 [2024-07-12 07:21:25.071566] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.217 [2024-07-12 07:21:25.072177] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.217 [2024-07-12 07:21:25.072322] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:10:51.475 07:21:25 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:51.475 07:21:25 json_config -- common/autotest_common.sh@860 -- # return 0 00:10:51.475 00:10:51.475 INFO: Checking if target configuration is the same... 00:10:51.475 07:21:25 json_config -- json_config/common.sh@26 -- # echo '' 00:10:51.475 07:21:25 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:10:51.475 07:21:25 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:10:51.475 07:21:25 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:51.475 07:21:25 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:10:51.475 07:21:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:51.475 + '[' 2 -ne 2 ']' 00:10:51.475 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:51.475 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:10:51.475 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:51.475 +++ basename /dev/fd/62 00:10:51.475 ++ mktemp /tmp/62.XXX 00:10:51.475 + tmp_file_1=/tmp/62.6tp 00:10:51.475 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:51.475 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:51.475 + tmp_file_2=/tmp/spdk_tgt_config.json.ol6 00:10:51.475 + ret=0 00:10:51.475 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:51.734 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:51.993 + diff -u /tmp/62.6tp /tmp/spdk_tgt_config.json.ol6 00:10:51.993 INFO: JSON config files are the same 00:10:51.993 + echo 'INFO: JSON config files are the same' 00:10:51.993 + rm /tmp/62.6tp /tmp/spdk_tgt_config.json.ol6 00:10:51.993 + exit 0 00:10:51.993 07:21:25 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:10:51.993 INFO: changing configuration and checking if this can be detected... 00:10:51.993 07:21:25 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:10:51.993 07:21:25 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:51.993 07:21:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:51.993 07:21:25 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:51.993 07:21:25 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:10:51.993 07:21:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:51.993 + '[' 2 -ne 2 ']' 00:10:51.993 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:51.993 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:10:51.993 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:51.993 +++ basename /dev/fd/62 00:10:51.993 ++ mktemp /tmp/62.XXX 00:10:51.993 + tmp_file_1=/tmp/62.6iU 00:10:51.993 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:51.993 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:51.993 + tmp_file_2=/tmp/spdk_tgt_config.json.IeF 00:10:51.993 + ret=0 00:10:51.993 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:52.561 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:52.561 + diff -u /tmp/62.6iU /tmp/spdk_tgt_config.json.IeF 00:10:52.561 + ret=1 00:10:52.561 + echo '=== Start of file: /tmp/62.6iU ===' 00:10:52.561 + cat /tmp/62.6iU 00:10:52.561 + echo '=== End of file: /tmp/62.6iU ===' 00:10:52.561 + echo '' 00:10:52.561 + echo '=== Start of file: /tmp/spdk_tgt_config.json.IeF ===' 00:10:52.561 + cat /tmp/spdk_tgt_config.json.IeF 00:10:52.561 + echo '=== End of file: /tmp/spdk_tgt_config.json.IeF ===' 00:10:52.561 + echo '' 00:10:52.561 + rm /tmp/62.6iU /tmp/spdk_tgt_config.json.IeF 00:10:52.561 + exit 1 00:10:52.561 INFO: configuration change detected. 00:10:52.561 07:21:26 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 
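json_diff.sh, exercised twice above, normalizes both configurations with config_filter.py -method sort and compares them with diff -u; deleting MallocBdevForConfigChangeCheck is what flips the result from exit 0 to ret=1. In outline — the file plumbing is simplified here, since the script actually works through /dev/fd and mktemp, as the +++ lines show:

  rpc.py -s /var/tmp/spdk_tgt.sock save_config | config_filter.py -method sort > live.json
  config_filter.py -method sort < spdk_tgt_config.json > disk.json
  if diff -u live.json disk.json; then
      echo 'INFO: JSON config files are the same'
  else
      echo 'INFO: configuration change detected.'
  fi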
00:10:52.561 07:21:26 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:10:52.561 07:21:26 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:10:52.561 07:21:26 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:52.561 07:21:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:52.561 07:21:26 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:10:52.561 07:21:26 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:10:52.561 07:21:26 json_config -- json_config/json_config.sh@317 -- # [[ -n 124118 ]] 00:10:52.561 07:21:26 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:10:52.561 07:21:26 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:10:52.561 07:21:26 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:52.561 07:21:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:52.561 07:21:26 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:10:52.561 07:21:26 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:10:52.561 07:21:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:10:52.820 07:21:26 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:10:52.820 07:21:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:10:53.080 07:21:26 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:10:53.080 07:21:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:10:53.348 07:21:27 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:10:53.348 07:21:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:10:53.348 07:21:27 json_config -- json_config/json_config.sh@193 -- # uname -s 00:10:53.348 07:21:27 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:10:53.348 07:21:27 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:10:53.348 07:21:27 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:10:53.348 07:21:27 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:10:53.348 07:21:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:53.348 07:21:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:53.614 07:21:27 json_config -- json_config/json_config.sh@323 -- # killprocess 124118 00:10:53.614 07:21:27 json_config -- common/autotest_common.sh@946 -- # '[' -z 124118 ']' 00:10:53.614 07:21:27 json_config -- common/autotest_common.sh@950 -- # kill -0 124118 00:10:53.614 07:21:27 json_config -- common/autotest_common.sh@951 -- # uname 00:10:53.614 07:21:27 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:53.614 07:21:27 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 124118 00:10:53.614 07:21:27 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:53.614 killing process with 
pid 124118 00:10:53.614 07:21:27 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:53.614 07:21:27 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 124118' 00:10:53.614 07:21:27 json_config -- common/autotest_common.sh@965 -- # kill 124118 00:10:53.614 07:21:27 json_config -- common/autotest_common.sh@970 -- # wait 124118 00:10:54.181 07:21:27 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:54.182 07:21:27 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:10:54.182 07:21:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:54.182 07:21:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:54.182 07:21:27 json_config -- json_config/json_config.sh@328 -- # return 0 00:10:54.182 INFO: Success 00:10:54.182 07:21:27 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:10:54.182 00:10:54.182 real 0m11.151s 00:10:54.182 user 0m16.100s 00:10:54.182 sys 0m3.119s 00:10:54.182 07:21:27 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:54.182 07:21:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:54.182 ************************************ 00:10:54.182 END TEST json_config 00:10:54.182 ************************************ 00:10:54.182 07:21:27 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:54.182 07:21:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:54.182 07:21:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:54.182 07:21:27 -- common/autotest_common.sh@10 -- # set +x 00:10:54.182 ************************************ 00:10:54.182 START TEST json_config_extra_key 00:10:54.182 ************************************ 00:10:54.182 07:21:27 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:54.182 07:21:27 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f1bfa900-e46b-4200-91d2-e0c142572023 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f1bfa900-e46b-4200-91d2-e0c142572023 00:10:54.182 
07:21:27 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:54.182 07:21:27 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.182 07:21:27 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.182 07:21:27 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.182 07:21:27 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:54.182 07:21:27 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:54.182 07:21:27 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:54.182 07:21:27 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:54.182 07:21:27 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:54.182 07:21:27 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:54.182 07:21:27 json_config_extra_key -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:10:54.182 07:21:27 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:54.182 07:21:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:54.182 07:21:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:54.182 07:21:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:54.182 07:21:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:54.182 07:21:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:54.182 07:21:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:54.182 07:21:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:54.182 07:21:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:54.182 07:21:27 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:54.182 INFO: launching applications... 00:10:54.182 07:21:27 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:10:54.182 07:21:27 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:54.182 07:21:27 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:54.182 07:21:27 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:54.182 07:21:27 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:54.182 07:21:27 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:54.182 07:21:27 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:54.182 07:21:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:54.182 07:21:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:54.182 07:21:27 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=124280 00:10:54.182 Waiting for target to run... 00:10:54.182 07:21:27 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:54.182 07:21:27 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 124280 /var/tmp/spdk_tgt.sock 00:10:54.182 07:21:27 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 124280 ']' 00:10:54.182 07:21:27 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:54.182 07:21:27 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:54.182 07:21:27 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:54.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
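json_config_extra_key reuses the same common.sh start/stop helpers sourced above; the only difference is that the target boots directly from a canned configuration file rather than waiting for RPCs, as the launch traced just below shows:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json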
00:10:54.182 07:21:27 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:54.182 07:21:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:54.182 07:21:27 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:54.441 [2024-07-12 07:21:28.091644] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:10:54.441 [2024-07-12 07:21:28.091995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124280 ] 00:10:55.008 [2024-07-12 07:21:28.671296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.008 [2024-07-12 07:21:28.712680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.267 07:21:29 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:55.267 07:21:29 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:10:55.267 00:10:55.267 INFO: shutting down applications... 00:10:55.267 07:21:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:55.267 07:21:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:10:55.267 07:21:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:55.267 07:21:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:55.267 07:21:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:55.267 07:21:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 124280 ]] 00:10:55.267 07:21:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 124280 00:10:55.267 07:21:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:55.267 07:21:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:55.267 07:21:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 124280 00:10:55.267 07:21:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:55.835 07:21:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:55.835 07:21:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:55.835 07:21:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 124280 00:10:55.835 07:21:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:56.403 07:21:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:56.403 07:21:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:56.403 07:21:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 124280 00:10:56.403 07:21:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:56.403 07:21:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:56.403 07:21:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:56.403 SPDK target shutdown done 00:10:56.403 07:21:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:56.403 Success 00:10:56.403 07:21:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:56.403 
00:10:56.403 real 0m2.215s 00:10:56.403 user 0m1.744s 00:10:56.403 sys 0m0.679s 00:10:56.403 07:21:30 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:56.403 07:21:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:56.403 ************************************ 00:10:56.403 END TEST json_config_extra_key 00:10:56.403 ************************************ 00:10:56.403 07:21:30 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:56.403 07:21:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:56.403 07:21:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:56.403 07:21:30 -- common/autotest_common.sh@10 -- # set +x 00:10:56.403 ************************************ 00:10:56.403 START TEST alias_rpc 00:10:56.403 ************************************ 00:10:56.403 07:21:30 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:56.403 * Looking for test storage... 00:10:56.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:56.403 07:21:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:56.403 07:21:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=124368 00:10:56.403 07:21:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 124368 00:10:56.403 07:21:30 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 124368 ']' 00:10:56.403 07:21:30 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.403 07:21:30 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:56.403 07:21:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:56.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.403 07:21:30 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.403 07:21:30 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:56.403 07:21:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.662 [2024-07-12 07:21:30.365008] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:10:56.662 [2024-07-12 07:21:30.365292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124368 ] 00:10:56.662 [2024-07-12 07:21:30.524639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.921 [2024-07-12 07:21:30.607645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.488 07:21:31 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:57.488 07:21:31 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:10:57.488 07:21:31 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:57.747 07:21:31 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 124368 00:10:57.747 07:21:31 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 124368 ']' 00:10:57.747 07:21:31 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 124368 00:10:57.747 07:21:31 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:10:57.747 07:21:31 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:57.747 07:21:31 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 124368 00:10:57.747 07:21:31 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:57.747 07:21:31 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:57.747 07:21:31 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 124368' 00:10:57.747 killing process with pid 124368 00:10:57.747 07:21:31 alias_rpc -- common/autotest_common.sh@965 -- # kill 124368 00:10:57.747 07:21:31 alias_rpc -- common/autotest_common.sh@970 -- # wait 124368 00:10:58.682 00:10:58.682 real 0m2.130s 00:10:58.682 user 0m2.139s 00:10:58.682 sys 0m0.681s 00:10:58.682 07:21:32 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:58.682 07:21:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.682 ************************************ 00:10:58.682 END TEST alias_rpc 00:10:58.682 ************************************ 00:10:58.682 07:21:32 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:10:58.682 07:21:32 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:58.682 07:21:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:58.682 07:21:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:58.682 07:21:32 -- common/autotest_common.sh@10 -- # set +x 00:10:58.682 ************************************ 00:10:58.682 START TEST spdkcli_tcp 00:10:58.682 ************************************ 00:10:58.682 07:21:32 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:58.682 * Looking for test storage... 
00:10:58.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:58.682 07:21:32 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:58.682 07:21:32 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:58.682 07:21:32 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:58.682 07:21:32 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:58.682 07:21:32 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:58.682 07:21:32 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:58.683 07:21:32 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:58.683 07:21:32 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:58.683 07:21:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:58.683 07:21:32 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=124455 00:10:58.683 07:21:32 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:58.683 07:21:32 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 124455 00:10:58.683 07:21:32 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 124455 ']' 00:10:58.683 07:21:32 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.683 07:21:32 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:58.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.683 07:21:32 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.683 07:21:32 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:58.683 07:21:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:58.683 [2024-07-12 07:21:32.561604] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:10:58.683 [2024-07-12 07:21:32.561851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124455 ] 00:10:58.940 [2024-07-12 07:21:32.718351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:58.940 [2024-07-12 07:21:32.802262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.940 [2024-07-12 07:21:32.802262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.874 07:21:33 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:59.874 07:21:33 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:10:59.874 07:21:33 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=124477 00:10:59.874 07:21:33 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:59.874 07:21:33 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:59.874 [ 00:10:59.874 "spdk_get_version", 00:10:59.874 "rpc_get_methods", 00:10:59.874 "keyring_get_keys", 00:10:59.874 "trace_get_info", 00:10:59.874 "trace_get_tpoint_group_mask", 00:10:59.874 "trace_disable_tpoint_group", 00:10:59.874 "trace_enable_tpoint_group", 00:10:59.874 "trace_clear_tpoint_mask", 00:10:59.874 "trace_set_tpoint_mask", 00:10:59.874 "framework_get_pci_devices", 00:10:59.874 "framework_get_config", 00:10:59.874 "framework_get_subsystems", 00:10:59.874 "iobuf_get_stats", 00:10:59.874 "iobuf_set_options", 00:10:59.874 "sock_get_default_impl", 00:10:59.874 "sock_set_default_impl", 00:10:59.874 "sock_impl_set_options", 00:10:59.874 "sock_impl_get_options", 00:10:59.874 "vmd_rescan", 00:10:59.874 "vmd_remove_device", 00:10:59.874 "vmd_enable", 00:10:59.874 "accel_get_stats", 00:10:59.874 "accel_set_options", 00:10:59.874 "accel_set_driver", 00:10:59.874 "accel_crypto_key_destroy", 00:10:59.874 "accel_crypto_keys_get", 00:10:59.874 "accel_crypto_key_create", 00:10:59.874 "accel_assign_opc", 00:10:59.874 "accel_get_module_info", 00:10:59.874 "accel_get_opc_assignments", 00:10:59.874 "notify_get_notifications", 00:10:59.874 "notify_get_types", 00:10:59.874 "bdev_get_histogram", 00:10:59.874 "bdev_enable_histogram", 00:10:59.874 "bdev_set_qos_limit", 00:10:59.874 "bdev_set_qd_sampling_period", 00:10:59.874 "bdev_get_bdevs", 00:10:59.874 "bdev_reset_iostat", 00:10:59.874 "bdev_get_iostat", 00:10:59.874 "bdev_examine", 00:10:59.874 "bdev_wait_for_examine", 00:10:59.874 "bdev_set_options", 00:10:59.874 "scsi_get_devices", 00:10:59.874 "thread_set_cpumask", 00:10:59.874 "framework_get_scheduler", 00:10:59.874 "framework_set_scheduler", 00:10:59.874 "framework_get_reactors", 00:10:59.874 "thread_get_io_channels", 00:10:59.874 "thread_get_pollers", 00:10:59.874 "thread_get_stats", 00:10:59.874 "framework_monitor_context_switch", 00:10:59.874 "spdk_kill_instance", 00:10:59.874 "log_enable_timestamps", 00:10:59.874 "log_get_flags", 00:10:59.874 "log_clear_flag", 00:10:59.874 "log_set_flag", 00:10:59.874 "log_get_level", 00:10:59.874 "log_set_level", 00:10:59.874 "log_get_print_level", 00:10:59.874 "log_set_print_level", 00:10:59.874 "framework_enable_cpumask_locks", 00:10:59.874 "framework_disable_cpumask_locks", 00:10:59.874 "framework_wait_init", 00:10:59.874 "framework_start_init", 00:10:59.874 "virtio_blk_create_transport", 00:10:59.874 "virtio_blk_get_transports", 00:10:59.874 
"vhost_controller_set_coalescing", 00:10:59.874 "vhost_get_controllers", 00:10:59.874 "vhost_delete_controller", 00:10:59.874 "vhost_create_blk_controller", 00:10:59.874 "vhost_scsi_controller_remove_target", 00:10:59.874 "vhost_scsi_controller_add_target", 00:10:59.874 "vhost_start_scsi_controller", 00:10:59.874 "vhost_create_scsi_controller", 00:10:59.874 "nbd_get_disks", 00:10:59.874 "nbd_stop_disk", 00:10:59.874 "nbd_start_disk", 00:10:59.874 "env_dpdk_get_mem_stats", 00:10:59.874 "nvmf_stop_mdns_prr", 00:10:59.874 "nvmf_publish_mdns_prr", 00:10:59.874 "nvmf_subsystem_get_listeners", 00:10:59.874 "nvmf_subsystem_get_qpairs", 00:10:59.874 "nvmf_subsystem_get_controllers", 00:10:59.874 "nvmf_get_stats", 00:10:59.874 "nvmf_get_transports", 00:10:59.874 "nvmf_create_transport", 00:10:59.874 "nvmf_get_targets", 00:10:59.874 "nvmf_delete_target", 00:10:59.874 "nvmf_create_target", 00:10:59.874 "nvmf_subsystem_allow_any_host", 00:10:59.874 "nvmf_subsystem_remove_host", 00:10:59.874 "nvmf_subsystem_add_host", 00:10:59.874 "nvmf_ns_remove_host", 00:10:59.874 "nvmf_ns_add_host", 00:10:59.874 "nvmf_subsystem_remove_ns", 00:10:59.874 "nvmf_subsystem_add_ns", 00:10:59.874 "nvmf_subsystem_listener_set_ana_state", 00:10:59.874 "nvmf_discovery_get_referrals", 00:10:59.874 "nvmf_discovery_remove_referral", 00:10:59.874 "nvmf_discovery_add_referral", 00:10:59.874 "nvmf_subsystem_remove_listener", 00:10:59.874 "nvmf_subsystem_add_listener", 00:10:59.874 "nvmf_delete_subsystem", 00:10:59.874 "nvmf_create_subsystem", 00:10:59.874 "nvmf_get_subsystems", 00:10:59.874 "nvmf_set_crdt", 00:10:59.874 "nvmf_set_config", 00:10:59.874 "nvmf_set_max_subsystems", 00:10:59.874 "iscsi_get_histogram", 00:10:59.874 "iscsi_enable_histogram", 00:10:59.874 "iscsi_set_options", 00:10:59.874 "iscsi_get_auth_groups", 00:10:59.874 "iscsi_auth_group_remove_secret", 00:10:59.874 "iscsi_auth_group_add_secret", 00:10:59.874 "iscsi_delete_auth_group", 00:10:59.874 "iscsi_create_auth_group", 00:10:59.874 "iscsi_set_discovery_auth", 00:10:59.874 "iscsi_get_options", 00:10:59.874 "iscsi_target_node_request_logout", 00:10:59.874 "iscsi_target_node_set_redirect", 00:10:59.874 "iscsi_target_node_set_auth", 00:10:59.874 "iscsi_target_node_add_lun", 00:10:59.874 "iscsi_get_stats", 00:10:59.874 "iscsi_get_connections", 00:10:59.874 "iscsi_portal_group_set_auth", 00:10:59.874 "iscsi_start_portal_group", 00:10:59.874 "iscsi_delete_portal_group", 00:10:59.874 "iscsi_create_portal_group", 00:10:59.874 "iscsi_get_portal_groups", 00:10:59.874 "iscsi_delete_target_node", 00:10:59.874 "iscsi_target_node_remove_pg_ig_maps", 00:10:59.874 "iscsi_target_node_add_pg_ig_maps", 00:10:59.874 "iscsi_create_target_node", 00:10:59.874 "iscsi_get_target_nodes", 00:10:59.874 "iscsi_delete_initiator_group", 00:10:59.874 "iscsi_initiator_group_remove_initiators", 00:10:59.874 "iscsi_initiator_group_add_initiators", 00:10:59.874 "iscsi_create_initiator_group", 00:10:59.874 "iscsi_get_initiator_groups", 00:10:59.874 "keyring_linux_set_options", 00:10:59.874 "keyring_file_remove_key", 00:10:59.874 "keyring_file_add_key", 00:10:59.874 "iaa_scan_accel_module", 00:10:59.874 "dsa_scan_accel_module", 00:10:59.874 "ioat_scan_accel_module", 00:10:59.874 "accel_error_inject_error", 00:10:59.874 "bdev_iscsi_delete", 00:10:59.874 "bdev_iscsi_create", 00:10:59.874 "bdev_iscsi_set_options", 00:10:59.874 "bdev_virtio_attach_controller", 00:10:59.874 "bdev_virtio_scsi_get_devices", 00:10:59.874 "bdev_virtio_detach_controller", 00:10:59.874 "bdev_virtio_blk_set_hotplug", 
00:10:59.874 "bdev_ftl_set_property", 00:10:59.874 "bdev_ftl_get_properties", 00:10:59.874 "bdev_ftl_get_stats", 00:10:59.874 "bdev_ftl_unmap", 00:10:59.875 "bdev_ftl_unload", 00:10:59.875 "bdev_ftl_delete", 00:10:59.875 "bdev_ftl_load", 00:10:59.875 "bdev_ftl_create", 00:10:59.875 "bdev_aio_delete", 00:10:59.875 "bdev_aio_rescan", 00:10:59.875 "bdev_aio_create", 00:10:59.875 "blobfs_create", 00:10:59.875 "blobfs_detect", 00:10:59.875 "blobfs_set_cache_size", 00:10:59.875 "bdev_zone_block_delete", 00:10:59.875 "bdev_zone_block_create", 00:10:59.875 "bdev_delay_delete", 00:10:59.875 "bdev_delay_create", 00:10:59.875 "bdev_delay_update_latency", 00:10:59.875 "bdev_split_delete", 00:10:59.875 "bdev_split_create", 00:10:59.875 "bdev_error_inject_error", 00:10:59.875 "bdev_error_delete", 00:10:59.875 "bdev_error_create", 00:10:59.875 "bdev_raid_set_options", 00:10:59.875 "bdev_raid_remove_base_bdev", 00:10:59.875 "bdev_raid_add_base_bdev", 00:10:59.875 "bdev_raid_delete", 00:10:59.875 "bdev_raid_create", 00:10:59.875 "bdev_raid_get_bdevs", 00:10:59.875 "bdev_lvol_set_parent_bdev", 00:10:59.875 "bdev_lvol_set_parent", 00:10:59.875 "bdev_lvol_check_shallow_copy", 00:10:59.875 "bdev_lvol_start_shallow_copy", 00:10:59.875 "bdev_lvol_grow_lvstore", 00:10:59.875 "bdev_lvol_get_lvols", 00:10:59.875 "bdev_lvol_get_lvstores", 00:10:59.875 "bdev_lvol_delete", 00:10:59.875 "bdev_lvol_set_read_only", 00:10:59.875 "bdev_lvol_resize", 00:10:59.875 "bdev_lvol_decouple_parent", 00:10:59.875 "bdev_lvol_inflate", 00:10:59.875 "bdev_lvol_rename", 00:10:59.875 "bdev_lvol_clone_bdev", 00:10:59.875 "bdev_lvol_clone", 00:10:59.875 "bdev_lvol_snapshot", 00:10:59.875 "bdev_lvol_create", 00:10:59.875 "bdev_lvol_delete_lvstore", 00:10:59.875 "bdev_lvol_rename_lvstore", 00:10:59.875 "bdev_lvol_create_lvstore", 00:10:59.875 "bdev_passthru_delete", 00:10:59.875 "bdev_passthru_create", 00:10:59.875 "bdev_nvme_cuse_unregister", 00:10:59.875 "bdev_nvme_cuse_register", 00:10:59.875 "bdev_opal_new_user", 00:10:59.875 "bdev_opal_set_lock_state", 00:10:59.875 "bdev_opal_delete", 00:10:59.875 "bdev_opal_get_info", 00:10:59.875 "bdev_opal_create", 00:10:59.875 "bdev_nvme_opal_revert", 00:10:59.875 "bdev_nvme_opal_init", 00:10:59.875 "bdev_nvme_send_cmd", 00:10:59.875 "bdev_nvme_get_path_iostat", 00:10:59.875 "bdev_nvme_get_mdns_discovery_info", 00:10:59.875 "bdev_nvme_stop_mdns_discovery", 00:10:59.875 "bdev_nvme_start_mdns_discovery", 00:10:59.875 "bdev_nvme_set_multipath_policy", 00:10:59.875 "bdev_nvme_set_preferred_path", 00:10:59.875 "bdev_nvme_get_io_paths", 00:10:59.875 "bdev_nvme_remove_error_injection", 00:10:59.875 "bdev_nvme_add_error_injection", 00:10:59.875 "bdev_nvme_get_discovery_info", 00:10:59.875 "bdev_nvme_stop_discovery", 00:10:59.875 "bdev_nvme_start_discovery", 00:10:59.875 "bdev_nvme_get_controller_health_info", 00:10:59.875 "bdev_nvme_disable_controller", 00:10:59.875 "bdev_nvme_enable_controller", 00:10:59.875 "bdev_nvme_reset_controller", 00:10:59.875 "bdev_nvme_get_transport_statistics", 00:10:59.875 "bdev_nvme_apply_firmware", 00:10:59.875 "bdev_nvme_detach_controller", 00:10:59.875 "bdev_nvme_get_controllers", 00:10:59.875 "bdev_nvme_attach_controller", 00:10:59.875 "bdev_nvme_set_hotplug", 00:10:59.875 "bdev_nvme_set_options", 00:10:59.875 "bdev_null_resize", 00:10:59.875 "bdev_null_delete", 00:10:59.875 "bdev_null_create", 00:10:59.875 "bdev_malloc_delete", 00:10:59.875 "bdev_malloc_create" 00:10:59.875 ] 00:11:00.134 07:21:33 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 
00:11:00.134 07:21:33 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:00.134 07:21:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:00.134 07:21:33 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:00.134 07:21:33 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 124455 00:11:00.134 07:21:33 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 124455 ']' 00:11:00.134 07:21:33 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 124455 00:11:00.134 07:21:33 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:11:00.134 07:21:33 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:00.134 07:21:33 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 124455 00:11:00.134 07:21:33 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:00.134 07:21:33 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:00.134 07:21:33 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 124455' 00:11:00.134 killing process with pid 124455 00:11:00.134 07:21:33 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 124455 00:11:00.134 07:21:33 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 124455 00:11:00.700 00:11:00.700 real 0m2.151s 00:11:00.700 user 0m3.675s 00:11:00.700 sys 0m0.731s 00:11:00.700 07:21:34 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:00.700 07:21:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:00.700 ************************************ 00:11:00.700 END TEST spdkcli_tcp 00:11:00.700 ************************************ 00:11:00.700 07:21:34 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:00.700 07:21:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:00.700 07:21:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:00.700 07:21:34 -- common/autotest_common.sh@10 -- # set +x 00:11:00.700 ************************************ 00:11:00.700 START TEST dpdk_mem_utility 00:11:00.700 ************************************ 00:11:00.700 07:21:34 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:00.959 * Looking for test storage... 00:11:00.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:11:00.959 07:21:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:00.959 07:21:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=124557 00:11:00.959 07:21:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:00.959 07:21:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 124557 00:11:00.959 07:21:34 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 124557 ']' 00:11:00.959 07:21:34 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.959 07:21:34 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:00.959 07:21:34 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:00.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.959 07:21:34 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:00.959 07:21:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:00.959 [2024-07-12 07:21:34.765472] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:11:00.959 [2024-07-12 07:21:34.765725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124557 ] 00:11:01.217 [2024-07-12 07:21:34.920834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.217 [2024-07-12 07:21:34.995911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.784 07:21:35 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:01.784 07:21:35 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:11:01.784 07:21:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:11:01.784 07:21:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:11:01.784 07:21:35 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.784 07:21:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:01.784 { 00:11:01.784 "filename": "/tmp/spdk_mem_dump.txt" 00:11:01.784 } 00:11:01.784 07:21:35 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.784 07:21:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:02.045 DPDK memory size 814.000000 MiB in 1 heap(s) 00:11:02.045 1 heaps totaling size 814.000000 MiB 00:11:02.045 size: 814.000000 MiB heap id: 0 00:11:02.045 end heaps---------- 00:11:02.045 8 mempools totaling size 598.116089 MiB 00:11:02.045 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:11:02.045 size: 158.602051 MiB name: PDU_data_out_Pool 00:11:02.045 size: 84.521057 MiB name: bdev_io_124557 00:11:02.045 size: 51.011292 MiB name: evtpool_124557 00:11:02.045 size: 50.003479 MiB name: msgpool_124557 00:11:02.045 size: 21.763794 MiB name: PDU_Pool 00:11:02.045 size: 19.513306 MiB name: SCSI_TASK_Pool 00:11:02.045 size: 0.026123 MiB name: Session_Pool 00:11:02.045 end mempools------- 00:11:02.045 6 memzones totaling size 4.142822 MiB 00:11:02.045 size: 1.000366 MiB name: RG_ring_0_124557 00:11:02.045 size: 1.000366 MiB name: RG_ring_1_124557 00:11:02.045 size: 1.000366 MiB name: RG_ring_4_124557 00:11:02.045 size: 1.000366 MiB name: RG_ring_5_124557 00:11:02.045 size: 0.125366 MiB name: RG_ring_2_124557 00:11:02.045 size: 0.015991 MiB name: RG_ring_3_124557 00:11:02.045 end memzones------- 00:11:02.045 07:21:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:11:02.045 heap id: 0 total size: 814.000000 MiB number of busy elements: 222 number of free elements: 15 00:11:02.045 list of free elements. 
size: 12.486206 MiB 00:11:02.045 element at address: 0x200000400000 with size: 1.999512 MiB 00:11:02.045 element at address: 0x200018e00000 with size: 0.999878 MiB 00:11:02.045 element at address: 0x200019000000 with size: 0.999878 MiB 00:11:02.045 element at address: 0x200003e00000 with size: 0.996277 MiB 00:11:02.045 element at address: 0x200031c00000 with size: 0.994446 MiB 00:11:02.045 element at address: 0x200013800000 with size: 0.978699 MiB 00:11:02.045 element at address: 0x200007000000 with size: 0.959839 MiB 00:11:02.045 element at address: 0x200019200000 with size: 0.936584 MiB 00:11:02.045 element at address: 0x200000200000 with size: 0.836853 MiB 00:11:02.045 element at address: 0x20001aa00000 with size: 0.568420 MiB 00:11:02.045 element at address: 0x20000b200000 with size: 0.489807 MiB 00:11:02.045 element at address: 0x200000800000 with size: 0.487061 MiB 00:11:02.045 element at address: 0x200019400000 with size: 0.485657 MiB 00:11:02.045 element at address: 0x200027e00000 with size: 0.402161 MiB 00:11:02.045 element at address: 0x200003a00000 with size: 0.351135 MiB 00:11:02.045 list of standard malloc elements. size: 199.251221 MiB 00:11:02.045 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:11:02.045 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:11:02.045 element at address: 0x200018efff80 with size: 1.000122 MiB 00:11:02.045 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:11:02.045 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:11:02.045 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:11:02.045 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:11:02.045 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:11:02.045 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:11:02.045 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d7640 with size: 0.000183 MiB 
00:11:02.045 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:11:02.045 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:11:02.045 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:11:02.045 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:11:02.045 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:11:02.045 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:11:02.045 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:11:02.045 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:11:02.045 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:11:02.045 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:11:02.045 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:11:02.045 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:11:02.045 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:11:02.045 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:11:02.045 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:11:02.045 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:11:02.045 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:11:02.045 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:11:02.045 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:11:02.045 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:11:02.045 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:11:02.045 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200003adb300 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200003adb500 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200003affa80 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200003affb40 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:11:02.046 element at 
address: 0x20000b27d700 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:11:02.046 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:11:02.046 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:11:02.046 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa934c0 
with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:11:02.046 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e66f40 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e67000 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6dc00 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6e040 with size: 0.000183 MiB 
00:11:02.046 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:11:02.046 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:11:02.046 list of memzone associated elements. 
size: 602.262573 MiB 00:11:02.047 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:11:02.047 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:11:02.047 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:11:02.047 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:11:02.047 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:11:02.047 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_124557_0 00:11:02.047 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:11:02.047 associated memzone info: size: 48.002930 MiB name: MP_evtpool_124557_0 00:11:02.047 element at address: 0x200003fff380 with size: 48.003052 MiB 00:11:02.047 associated memzone info: size: 48.002930 MiB name: MP_msgpool_124557_0 00:11:02.047 element at address: 0x2000195be940 with size: 20.255554 MiB 00:11:02.047 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:11:02.047 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:11:02.047 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:11:02.047 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:11:02.047 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_124557 00:11:02.047 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:11:02.047 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_124557 00:11:02.047 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:11:02.047 associated memzone info: size: 1.007996 MiB name: MP_evtpool_124557 00:11:02.047 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:11:02.047 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:11:02.047 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:11:02.047 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:11:02.047 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:11:02.047 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:11:02.047 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:11:02.047 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:11:02.047 element at address: 0x200003eff180 with size: 1.000488 MiB 00:11:02.047 associated memzone info: size: 1.000366 MiB name: RG_ring_0_124557 00:11:02.047 element at address: 0x200003affc00 with size: 1.000488 MiB 00:11:02.047 associated memzone info: size: 1.000366 MiB name: RG_ring_1_124557 00:11:02.047 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:11:02.047 associated memzone info: size: 1.000366 MiB name: RG_ring_4_124557 00:11:02.047 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:11:02.047 associated memzone info: size: 1.000366 MiB name: RG_ring_5_124557 00:11:02.047 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:11:02.047 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_124557 00:11:02.047 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:11:02.047 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:11:02.047 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:11:02.047 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:11:02.047 element at address: 0x20001947c540 with size: 0.250488 MiB 00:11:02.047 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:11:02.047 element at address: 0x200003adf880 with size: 0.125488 MiB 00:11:02.047 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_124557 00:11:02.047 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:11:02.047 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:11:02.047 element at address: 0x200027e670c0 with size: 0.023743 MiB 00:11:02.047 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:11:02.047 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:11:02.047 associated memzone info: size: 0.015991 MiB name: RG_ring_3_124557 00:11:02.047 element at address: 0x200027e6d200 with size: 0.002441 MiB 00:11:02.047 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:11:02.047 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:11:02.047 associated memzone info: size: 0.000183 MiB name: MP_msgpool_124557 00:11:02.047 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:11:02.047 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_124557 00:11:02.047 element at address: 0x200027e6dcc0 with size: 0.000305 MiB 00:11:02.047 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:11:02.047 07:21:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:11:02.047 07:21:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 124557 00:11:02.047 07:21:35 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 124557 ']' 00:11:02.047 07:21:35 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 124557 00:11:02.047 07:21:35 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:11:02.047 07:21:35 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:02.047 07:21:35 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 124557 00:11:02.047 07:21:35 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:02.047 07:21:35 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:02.047 07:21:35 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 124557' 00:11:02.047 killing process with pid 124557 00:11:02.047 07:21:35 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 124557 00:11:02.047 07:21:35 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 124557 00:11:02.615 ************************************ 00:11:02.615 END TEST dpdk_mem_utility 00:11:02.615 ************************************ 00:11:02.615 00:11:02.615 real 0m1.871s 00:11:02.615 user 0m1.720s 00:11:02.615 sys 0m0.615s 00:11:02.615 07:21:36 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:02.615 07:21:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:02.873 07:21:36 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:02.873 07:21:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:02.873 07:21:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:02.873 07:21:36 -- common/autotest_common.sh@10 -- # set +x 00:11:02.873 ************************************ 00:11:02.873 START TEST event 00:11:02.873 ************************************ 00:11:02.873 07:21:36 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:02.873 * Looking for test storage... 
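Before the event tests begin, it is worth noting what dpdk_mem_utility just exercised: the env_dpdk_get_mem_stats RPC asks the running target to write its DPDK heap state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then renders that dump, once as the heap/mempool/memzone summary and once (with -m 0) as the raw busy/free element listing shown above. A sketch of the flow against a live target, using only invocations that appear in the trace:

    # have the target dump its DPDK memory state (the reply names the output file)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats

    # summarize heaps, mempools and memzones from the dump
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

    # print the per-element breakdown of heap 0, as seen in the log above
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0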
00:11:02.873 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:02.873 07:21:36 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:02.873 07:21:36 event -- bdev/nbd_common.sh@6 -- # set -e 00:11:02.873 07:21:36 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:02.873 07:21:36 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:11:02.873 07:21:36 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:02.873 07:21:36 event -- common/autotest_common.sh@10 -- # set +x 00:11:02.873 ************************************ 00:11:02.873 START TEST event_perf 00:11:02.873 ************************************ 00:11:02.873 07:21:36 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:02.873 Running I/O for 1 seconds...[2024-07-12 07:21:36.674692] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:11:02.873 [2024-07-12 07:21:36.675022] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124647 ] 00:11:03.134 [2024-07-12 07:21:36.856366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.134 [2024-07-12 07:21:36.939646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.134 [2024-07-12 07:21:36.939792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.134 [2024-07-12 07:21:36.939933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.135 [2024-07-12 07:21:36.939938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.531 Running I/O for 1 seconds... 00:11:04.532 lcore 0: 190878 00:11:04.532 lcore 1: 190878 00:11:04.532 lcore 2: 190877 00:11:04.532 lcore 3: 190878 00:11:04.532 done. 00:11:04.532 00:11:04.532 real 0m1.487s 00:11:04.532 user 0m4.233s 00:11:04.532 sys 0m0.144s 00:11:04.532 07:21:38 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:04.532 ************************************ 00:11:04.532 END TEST event_perf 00:11:04.532 ************************************ 00:11:04.532 07:21:38 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:11:04.532 07:21:38 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:04.532 07:21:38 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:04.532 07:21:38 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:04.532 07:21:38 event -- common/autotest_common.sh@10 -- # set +x 00:11:04.532 ************************************ 00:11:04.532 START TEST event_reactor 00:11:04.532 ************************************ 00:11:04.532 07:21:38 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:04.532 [2024-07-12 07:21:38.231696] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:11:04.532 [2024-07-12 07:21:38.232164] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124697 ] 00:11:04.532 [2024-07-12 07:21:38.390326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.789 [2024-07-12 07:21:38.496249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.166 test_start 00:11:06.166 oneshot 00:11:06.166 tick 100 00:11:06.166 tick 100 00:11:06.166 tick 250 00:11:06.166 tick 100 00:11:06.166 tick 100 00:11:06.166 tick 100 00:11:06.166 tick 250 00:11:06.166 tick 500 00:11:06.166 tick 100 00:11:06.166 tick 100 00:11:06.166 tick 250 00:11:06.166 tick 100 00:11:06.166 tick 100 00:11:06.166 test_end 00:11:06.166 00:11:06.166 real 0m1.494s 00:11:06.166 user 0m1.256s 00:11:06.166 sys 0m0.136s 00:11:06.166 07:21:39 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:06.166 07:21:39 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:11:06.166 ************************************ 00:11:06.166 END TEST event_reactor 00:11:06.166 ************************************ 00:11:06.166 07:21:39 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:06.166 07:21:39 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:06.166 07:21:39 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:06.166 07:21:39 event -- common/autotest_common.sh@10 -- # set +x 00:11:06.166 ************************************ 00:11:06.166 START TEST event_reactor_perf 00:11:06.166 ************************************ 00:11:06.166 07:21:39 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:06.166 [2024-07-12 07:21:39.788527] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:11:06.166 [2024-07-12 07:21:39.788805] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124741 ] 00:11:06.166 [2024-07-12 07:21:39.944188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.166 [2024-07-12 07:21:40.025013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.542 test_start 00:11:07.542 test_end 00:11:07.542 Performance: 390598 events per second 00:11:07.542 00:11:07.542 real 0m1.459s 00:11:07.542 user 0m1.258s 00:11:07.542 sys 0m0.101s 00:11:07.542 07:21:41 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:07.542 07:21:41 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:11:07.542 ************************************ 00:11:07.542 END TEST event_reactor_perf 00:11:07.542 ************************************ 00:11:07.542 07:21:41 event -- event/event.sh@49 -- # uname -s 00:11:07.542 07:21:41 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:11:07.542 07:21:41 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:07.542 07:21:41 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:07.542 07:21:41 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:07.542 07:21:41 event -- common/autotest_common.sh@10 -- # set +x 00:11:07.542 ************************************ 00:11:07.542 START TEST event_scheduler 00:11:07.542 ************************************ 00:11:07.542 07:21:41 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:07.542 * Looking for test storage... 00:11:07.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:11:07.542 07:21:41 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:11:07.542 07:21:41 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=124816 00:11:07.542 07:21:41 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:11:07.542 07:21:41 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:11:07.542 07:21:41 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 124816 00:11:07.542 07:21:41 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 124816 ']' 00:11:07.542 07:21:41 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.542 07:21:41 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:07.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.542 07:21:41 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.542 07:21:41 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:07.542 07:21:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:07.800 [2024-07-12 07:21:41.486675] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:11:07.800 [2024-07-12 07:21:41.486954] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124816 ] 00:11:07.800 [2024-07-12 07:21:41.666898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.058 [2024-07-12 07:21:41.766447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.058 [2024-07-12 07:21:41.766603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.058 [2024-07-12 07:21:41.766785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.058 [2024-07-12 07:21:41.766790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.624 07:21:42 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:08.624 07:21:42 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:11:08.625 07:21:42 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:11:08.625 07:21:42 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.625 07:21:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:08.625 POWER: Env isn't set yet! 00:11:08.625 POWER: Attempting to initialise ACPI cpufreq power management... 00:11:08.625 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:08.625 POWER: Cannot set governor of lcore 0 to userspace 00:11:08.625 POWER: Attempting to initialise PSTAT power management... 00:11:08.625 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:08.625 POWER: Cannot set governor of lcore 0 to performance 00:11:08.625 POWER: Attempting to initialise CPPC power management... 00:11:08.625 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:08.625 POWER: Cannot set governor of lcore 0 to userspace 00:11:08.625 POWER: Attempting to initialise VM power management... 00:11:08.625 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:11:08.625 POWER: Unable to set Power Management Environment for lcore 0 00:11:08.625 [2024-07-12 07:21:42.366051] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:11:08.625 [2024-07-12 07:21:42.366109] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:11:08.625 [2024-07-12 07:21:42.366150] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:11:08.625 [2024-07-12 07:21:42.366208] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:11:08.625 [2024-07-12 07:21:42.366257] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:11:08.625 [2024-07-12 07:21:42.366289] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:11:08.625 07:21:42 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.625 07:21:42 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:11:08.625 07:21:42 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.625 07:21:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:08.625 [2024-07-12 07:21:42.491921] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
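The startup traced above is the standard --wait-for-rpc handshake: the scheduler app boots with subsystem initialization deferred, the test selects the dynamic scheduler over the RPC socket, and framework_start_init then releases initialization. The POWER and GUEST_CHANNEL failures only mean no cpufreq governor is reachable in this VM, so scheduler_dynamic falls back to its defaults, as the NOTICE lines show. A minimal sketch of the same handshake, assuming a generic spdk_tgt binary and the default RPC socket rather than this test's dedicated scheduler app:

  # Hedged sketch of the --wait-for-rpc flow traced above; paths are assumptions.
  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/build/bin/spdk_tgt --wait-for-rpc &              # boot with subsystem init deferred
  sleep 1                                                # crude stand-in for the harness's waitforlisten
  $SPDK/scripts/rpc.py framework_set_scheduler dynamic   # must be issued before init completes
  $SPDK/scripts/rpc.py framework_start_init              # release the deferred initialization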
00:11:08.625 07:21:42 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.625 07:21:42 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:11:08.625 07:21:42 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:08.625 07:21:42 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:08.625 07:21:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:08.883 ************************************ 00:11:08.883 START TEST scheduler_create_thread 00:11:08.883 ************************************ 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:08.883 2 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:08.883 3 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:08.883 4 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:08.883 5 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:08.883 6 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:08.883 7 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:08.883 8 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:08.883 9 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:08.883 10 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.883 07:21:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:10.256 07:21:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.256 07:21:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:11:10.256 07:21:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:11:10.256 07:21:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.256 07:21:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:11.188 07:21:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.188 07:21:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:11:11.188 07:21:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.188 07:21:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:12.119 07:21:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.119 07:21:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:11:12.119 07:21:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:11:12.119 07:21:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.119 07:21:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:12.684 07:21:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.684 00:11:12.684 real 0m3.900s 00:11:12.684 user 0m0.025s 00:11:12.684 sys 0m0.016s 00:11:12.684 07:21:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:12.684 07:21:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:12.684 ************************************ 00:11:12.684 END TEST scheduler_create_thread 00:11:12.684 ************************************ 00:11:12.684 07:21:46 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:11:12.684 07:21:46 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 124816 00:11:12.684 07:21:46 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 124816 ']' 00:11:12.684 07:21:46 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 124816 00:11:12.684 07:21:46 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:11:12.684 07:21:46 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:12.684 07:21:46 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 124816 00:11:12.684 07:21:46 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:11:12.684 07:21:46 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:11:12.684 killing process with pid 124816 00:11:12.684 07:21:46 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 124816' 00:11:12.684 07:21:46 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 124816 00:11:12.684 07:21:46 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 124816 00:11:12.941 [2024-07-12 07:21:46.791038] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
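For reference, the plugin RPCs this test just exercised follow the pattern below: the create call prints the new thread ID (the 11 and 12 captured above), and the cpumask and active-percent arguments mirror the trace. A condensed sketch of the thread lifecycle against the same RPC script, assuming the scheduler_plugin module is importable as the harness arranges:

  # Hedged sketch of the scheduler_plugin RPC lifecycle from the trace above.
  # Assumes test/event/scheduler's scheduler_plugin is on PYTHONPATH.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
  $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50   # drop the thread to 50% busy
  $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"          # retire the thread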
00:11:13.508 ************************************ 00:11:13.508 END TEST event_scheduler 00:11:13.508 ************************************ 00:11:13.508 00:11:13.508 real 0m6.000s 00:11:13.508 user 0m12.636s 00:11:13.508 sys 0m0.563s 00:11:13.508 07:21:47 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:13.508 07:21:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:13.508 07:21:47 event -- event/event.sh@51 -- # modprobe -n nbd 00:11:13.508 07:21:47 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:11:13.508 07:21:47 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:13.508 07:21:47 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:13.508 07:21:47 event -- common/autotest_common.sh@10 -- # set +x 00:11:13.508 ************************************ 00:11:13.508 START TEST app_repeat 00:11:13.508 ************************************ 00:11:13.508 07:21:47 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:11:13.508 07:21:47 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:13.508 07:21:47 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:13.508 07:21:47 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:11:13.508 07:21:47 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:13.508 07:21:47 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:11:13.508 07:21:47 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:11:13.508 07:21:47 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:11:13.508 07:21:47 event.app_repeat -- event/event.sh@19 -- # repeat_pid=124942 00:11:13.508 07:21:47 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:11:13.508 07:21:47 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 124942' 00:11:13.508 Process app_repeat pid: 124942 00:11:13.508 07:21:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:13.508 07:21:47 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:11:13.508 07:21:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:11:13.508 spdk_app_start Round 0 00:11:13.508 07:21:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 124942 /var/tmp/spdk-nbd.sock 00:11:13.508 07:21:47 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 124942 ']' 00:11:13.508 07:21:47 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:13.508 07:21:47 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:13.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:13.508 07:21:47 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:13.508 07:21:47 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:13.508 07:21:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:13.766 [2024-07-12 07:21:47.413042] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:11:13.766 [2024-07-12 07:21:47.413364] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124942 ] 00:11:13.766 [2024-07-12 07:21:47.572773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:14.026 [2024-07-12 07:21:47.652134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.026 [2024-07-12 07:21:47.652135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.593 07:21:48 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:14.593 07:21:48 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:11:14.593 07:21:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:14.852 Malloc0 00:11:14.852 07:21:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:15.111 Malloc1 00:11:15.111 07:21:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:15.111 07:21:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:15.111 07:21:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:15.111 07:21:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:15.111 07:21:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:15.111 07:21:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:15.111 07:21:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:15.111 07:21:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:15.111 07:21:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:15.111 07:21:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:15.111 07:21:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:15.111 07:21:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:15.111 07:21:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:15.111 07:21:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:15.111 07:21:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:15.111 07:21:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:15.370 /dev/nbd0 00:11:15.370 07:21:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:15.370 07:21:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:15.370 07:21:49 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:11:15.370 07:21:49 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:11:15.370 07:21:49 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:11:15.370 07:21:49 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:11:15.370 07:21:49 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:11:15.370 07:21:49 event.app_repeat -- 
common/autotest_common.sh@869 -- # break 00:11:15.370 07:21:49 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:11:15.370 07:21:49 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:11:15.370 07:21:49 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:15.370 1+0 records in 00:11:15.370 1+0 records out 00:11:15.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338336 s, 12.1 MB/s 00:11:15.370 07:21:49 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:15.370 07:21:49 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:11:15.370 07:21:49 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:15.370 07:21:49 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:11:15.370 07:21:49 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:11:15.370 07:21:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:15.370 07:21:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:15.370 07:21:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:15.629 /dev/nbd1 00:11:15.629 07:21:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:15.629 07:21:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:15.629 07:21:49 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:11:15.629 07:21:49 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:11:15.629 07:21:49 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:11:15.629 07:21:49 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:11:15.629 07:21:49 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:11:15.629 07:21:49 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:11:15.629 07:21:49 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:11:15.629 07:21:49 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:11:15.629 07:21:49 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:15.629 1+0 records in 00:11:15.629 1+0 records out 00:11:15.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034479 s, 11.9 MB/s 00:11:15.629 07:21:49 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:15.629 07:21:49 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:11:15.629 07:21:49 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:15.629 07:21:49 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:11:15.629 07:21:49 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:11:15.629 07:21:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:15.629 07:21:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:15.629 07:21:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:15.629 07:21:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:15.629 
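Both malloc bdevs are now exported as kernel nbd nodes; each round builds this plumbing over the app's private RPC socket and polls /proc/partitions until the nodes appear (the harness also dd's one block back from each node to confirm it is readable). The setup half of a round reduces to the following sketch, with the waitfornbd retry loop simplified:

  # Hedged sketch of the per-round nbd setup traced above.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create 64 4096        # 64 MiB bdev, 4096-byte blocks -> prints Malloc0
  $rpc bdev_malloc_create 64 4096        # second bdev -> prints Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1
  for i in $(seq 1 20); do               # simplified waitfornbd: poll up to 20 times
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1
  done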
07:21:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:15.905 { 00:11:15.905 "nbd_device": "/dev/nbd0", 00:11:15.905 "bdev_name": "Malloc0" 00:11:15.905 }, 00:11:15.905 { 00:11:15.905 "nbd_device": "/dev/nbd1", 00:11:15.905 "bdev_name": "Malloc1" 00:11:15.905 } 00:11:15.905 ]' 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:15.905 { 00:11:15.905 "nbd_device": "/dev/nbd0", 00:11:15.905 "bdev_name": "Malloc0" 00:11:15.905 }, 00:11:15.905 { 00:11:15.905 "nbd_device": "/dev/nbd1", 00:11:15.905 "bdev_name": "Malloc1" 00:11:15.905 } 00:11:15.905 ]' 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:15.905 /dev/nbd1' 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:15.905 /dev/nbd1' 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:15.905 256+0 records in 00:11:15.905 256+0 records out 00:11:15.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00810302 s, 129 MB/s 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:15.905 256+0 records in 00:11:15.905 256+0 records out 00:11:15.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303132 s, 34.6 MB/s 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:15.905 256+0 records in 00:11:15.905 256+0 records out 00:11:15.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285685 s, 36.7 MB/s 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:15.905 07:21:49 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.905 07:21:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:16.164 07:21:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:16.164 07:21:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:16.164 07:21:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:16.164 07:21:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:16.164 07:21:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:16.164 07:21:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:16.164 07:21:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:16.164 07:21:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:16.164 07:21:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:16.164 07:21:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:16.424 07:21:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:16.424 07:21:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:16.424 07:21:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:16.424 07:21:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:16.424 07:21:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:16.424 07:21:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:16.424 07:21:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:16.424 07:21:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:16.424 07:21:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:16.424 07:21:50 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:16.424 07:21:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:16.683 07:21:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:16.683 07:21:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:16.683 07:21:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:16.683 07:21:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:16.683 07:21:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:16.683 07:21:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:16.683 07:21:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:16.683 07:21:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:16.683 07:21:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:16.683 07:21:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:16.683 07:21:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:16.683 07:21:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:16.683 07:21:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:16.942 07:21:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:17.513 [2024-07-12 07:21:51.134709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:17.513 [2024-07-12 07:21:51.214189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.513 [2024-07-12 07:21:51.214189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.513 [2024-07-12 07:21:51.292435] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:17.513 [2024-07-12 07:21:51.292838] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:20.045 07:21:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:20.045 07:21:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:20.045 spdk_app_start Round 1 00:11:20.045 07:21:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 124942 /var/tmp/spdk-nbd.sock 00:11:20.045 07:21:53 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 124942 ']' 00:11:20.045 07:21:53 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:20.045 07:21:53 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:20.045 07:21:53 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:20.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
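The verify-and-teardown half of each round is visible in the dd and cmp calls above: a 1 MiB random file is pushed through both nbd nodes with O_DIRECT, compared back byte for byte, and the nodes are detached before the app is told to exit so the next round starts clean. Condensed, with the per-device loop unrolled and $rpc defined as in the setup sketch:

  # Hedged sketch of the verify/teardown path; $rpc as in the setup sketch above.
  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  dd if=/dev/urandom of=$tmp bs=4096 count=256            # 1 MiB of random data
  dd if=$tmp of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write through each nbd node
  dd if=$tmp of=/dev/nbd1 bs=4096 count=256 oflag=direct
  cmp -b -n 1M $tmp /dev/nbd0                             # byte-for-byte readback check
  cmp -b -n 1M $tmp /dev/nbd1
  rm $tmp
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
  $rpc spdk_kill_instance SIGTERM                         # end the round; the app restarts next round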
00:11:20.045 07:21:53 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:20.045 07:21:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:20.303 07:21:54 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:20.303 07:21:54 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:11:20.303 07:21:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:20.561 Malloc0 00:11:20.561 07:21:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:20.820 Malloc1 00:11:20.820 07:21:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:20.820 07:21:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:20.820 07:21:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:20.820 07:21:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:20.820 07:21:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:20.820 07:21:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:20.820 07:21:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:20.820 07:21:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:20.820 07:21:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:20.820 07:21:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:20.820 07:21:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:20.820 07:21:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:20.820 07:21:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:20.820 07:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:20.820 07:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:20.820 07:21:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:21.079 /dev/nbd0 00:11:21.079 07:21:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:21.079 07:21:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:21.079 07:21:54 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:11:21.079 07:21:54 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:11:21.079 07:21:54 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:11:21.079 07:21:54 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:11:21.079 07:21:54 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:11:21.079 07:21:54 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:11:21.079 07:21:54 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:11:21.079 07:21:54 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:11:21.079 07:21:54 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:21.079 1+0 records in 00:11:21.079 1+0 records out 
00:11:21.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393624 s, 10.4 MB/s 00:11:21.079 07:21:54 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:21.079 07:21:54 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:11:21.079 07:21:54 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:21.079 07:21:54 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:11:21.079 07:21:54 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:11:21.079 07:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:21.079 07:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:21.079 07:21:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:21.338 /dev/nbd1 00:11:21.338 07:21:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:21.338 07:21:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:21.338 07:21:55 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:11:21.338 07:21:55 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:11:21.338 07:21:55 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:11:21.338 07:21:55 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:11:21.338 07:21:55 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:11:21.338 07:21:55 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:11:21.338 07:21:55 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:11:21.338 07:21:55 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:11:21.338 07:21:55 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:21.338 1+0 records in 00:11:21.338 1+0 records out 00:11:21.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506023 s, 8.1 MB/s 00:11:21.338 07:21:55 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:21.338 07:21:55 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:11:21.338 07:21:55 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:21.338 07:21:55 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:11:21.338 07:21:55 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:11:21.338 07:21:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:21.338 07:21:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:21.338 07:21:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:21.338 07:21:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:21.338 07:21:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:21.597 07:21:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:21.597 { 00:11:21.597 "nbd_device": "/dev/nbd0", 00:11:21.597 "bdev_name": "Malloc0" 00:11:21.597 }, 00:11:21.597 { 00:11:21.597 "nbd_device": "/dev/nbd1", 00:11:21.597 "bdev_name": "Malloc1" 00:11:21.597 } 
00:11:21.597 ]' 00:11:21.597 07:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:21.597 { 00:11:21.597 "nbd_device": "/dev/nbd0", 00:11:21.597 "bdev_name": "Malloc0" 00:11:21.597 }, 00:11:21.597 { 00:11:21.597 "nbd_device": "/dev/nbd1", 00:11:21.597 "bdev_name": "Malloc1" 00:11:21.597 } 00:11:21.597 ]' 00:11:21.597 07:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:21.597 07:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:21.597 /dev/nbd1' 00:11:21.597 07:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:21.597 /dev/nbd1' 00:11:21.597 07:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:21.597 07:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:21.597 07:21:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:21.597 07:21:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:21.597 07:21:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:21.597 07:21:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:21.597 07:21:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:21.597 07:21:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:21.597 07:21:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:21.597 07:21:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:21.597 07:21:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:21.597 07:21:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:21.597 256+0 records in 00:11:21.597 256+0 records out 00:11:21.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120009 s, 87.4 MB/s 00:11:21.597 07:21:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:21.597 07:21:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:21.597 256+0 records in 00:11:21.597 256+0 records out 00:11:21.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259885 s, 40.3 MB/s 00:11:21.597 07:21:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:21.597 07:21:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:21.857 256+0 records in 00:11:21.857 256+0 records out 00:11:21.857 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284118 s, 36.9 MB/s 00:11:21.857 07:21:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:21.857 07:21:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:21.857 07:21:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:21.857 07:21:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:21.857 07:21:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:21.857 07:21:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:21.857 07:21:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:21.857 07:21:55 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:21.857 07:21:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:21.857 07:21:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:21.857 07:21:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:21.857 07:21:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:21.857 07:21:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:21.857 07:21:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:21.857 07:21:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:21.857 07:21:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:21.857 07:21:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:21.857 07:21:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.857 07:21:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:22.117 07:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:22.117 07:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:22.117 07:21:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:22.117 07:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.117 07:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.117 07:21:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:22.117 07:21:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:22.117 07:21:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.117 07:21:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.117 07:21:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:22.376 07:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:22.376 07:21:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:22.376 07:21:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:22.376 07:21:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.376 07:21:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.376 07:21:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:22.376 07:21:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:22.376 07:21:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.376 07:21:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:22.376 07:21:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:22.376 07:21:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:22.376 07:21:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:22.376 07:21:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:22.376 07:21:56 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:11:22.376 07:21:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:22.376 07:21:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:22.376 07:21:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:22.376 07:21:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:22.376 07:21:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:22.376 07:21:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:22.376 07:21:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:22.376 07:21:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:22.376 07:21:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:22.376 07:21:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:22.944 07:21:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:23.204 [2024-07-12 07:21:56.872685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:23.204 [2024-07-12 07:21:56.955834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.204 [2024-07-12 07:21:56.955835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.204 [2024-07-12 07:21:57.033514] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:23.204 [2024-07-12 07:21:57.033858] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:25.739 07:21:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:25.739 07:21:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:25.739 spdk_app_start Round 2 00:11:25.739 07:21:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 124942 /var/tmp/spdk-nbd.sock 00:11:25.739 07:21:59 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 124942 ']' 00:11:25.739 07:21:59 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:25.739 07:21:59 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:25.739 07:21:59 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:25.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:11:25.739 07:21:59 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:25.739 07:21:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:26.000 07:21:59 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:26.000 07:21:59 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:11:26.000 07:21:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:26.274 Malloc0 00:11:26.274 07:21:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:26.533 Malloc1 00:11:26.533 07:22:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:26.533 07:22:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:26.533 07:22:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:26.533 07:22:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:26.533 07:22:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:26.533 07:22:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:26.533 07:22:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:26.533 07:22:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:26.533 07:22:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:26.533 07:22:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:26.533 07:22:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:26.533 07:22:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:26.533 07:22:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:26.533 07:22:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:26.533 07:22:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:26.533 07:22:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:26.792 /dev/nbd0 00:11:26.792 07:22:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:26.792 07:22:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:26.792 07:22:00 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:11:26.792 07:22:00 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:11:26.792 07:22:00 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:11:26.792 07:22:00 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:11:26.792 07:22:00 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:11:26.792 07:22:00 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:11:26.792 07:22:00 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:11:26.792 07:22:00 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:11:26.792 07:22:00 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:26.792 1+0 records in 00:11:26.792 1+0 records out 
00:11:26.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390913 s, 10.5 MB/s 00:11:26.792 07:22:00 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:26.792 07:22:00 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:11:26.792 07:22:00 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:26.792 07:22:00 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:11:26.792 07:22:00 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:11:26.792 07:22:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.792 07:22:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:26.792 07:22:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:27.051 /dev/nbd1 00:11:27.051 07:22:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:27.051 07:22:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:27.051 07:22:00 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:11:27.051 07:22:00 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:11:27.051 07:22:00 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:11:27.051 07:22:00 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:11:27.051 07:22:00 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:11:27.051 07:22:00 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:11:27.051 07:22:00 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:11:27.051 07:22:00 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:11:27.051 07:22:00 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:27.051 1+0 records in 00:11:27.051 1+0 records out 00:11:27.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296992 s, 13.8 MB/s 00:11:27.051 07:22:00 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:27.051 07:22:00 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:11:27.051 07:22:00 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:27.051 07:22:00 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:11:27.051 07:22:00 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:11:27.051 07:22:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.051 07:22:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:27.051 07:22:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:27.051 07:22:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:27.051 07:22:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:27.323 07:22:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:27.323 { 00:11:27.323 "nbd_device": "/dev/nbd0", 00:11:27.323 "bdev_name": "Malloc0" 00:11:27.323 }, 00:11:27.323 { 00:11:27.323 "nbd_device": "/dev/nbd1", 00:11:27.323 "bdev_name": "Malloc1" 00:11:27.323 } 
00:11:27.323 ]' 00:11:27.323 07:22:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:27.323 07:22:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:27.323 { 00:11:27.323 "nbd_device": "/dev/nbd0", 00:11:27.323 "bdev_name": "Malloc0" 00:11:27.323 }, 00:11:27.323 { 00:11:27.323 "nbd_device": "/dev/nbd1", 00:11:27.323 "bdev_name": "Malloc1" 00:11:27.323 } 00:11:27.323 ]' 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:27.323 /dev/nbd1' 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:27.323 /dev/nbd1' 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:27.323 256+0 records in 00:11:27.323 256+0 records out 00:11:27.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00942467 s, 111 MB/s 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:27.323 256+0 records in 00:11:27.323 256+0 records out 00:11:27.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256584 s, 40.9 MB/s 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:27.323 256+0 records in 00:11:27.323 256+0 records out 00:11:27.323 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280793 s, 37.3 MB/s 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:27.323 07:22:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:27.324 07:22:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:27.324 07:22:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:27.324 07:22:01 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:27.324 07:22:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:27.324 07:22:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:27.324 07:22:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:27.324 07:22:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:27.324 07:22:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:27.324 07:22:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:27.324 07:22:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:27.324 07:22:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:27.324 07:22:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:27.324 07:22:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:27.324 07:22:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:27.582 07:22:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:27.582 07:22:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:27.582 07:22:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:27.582 07:22:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:27.582 07:22:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:27.582 07:22:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:27.582 07:22:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:27.582 07:22:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:27.582 07:22:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:27.582 07:22:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:27.840 07:22:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:27.840 07:22:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:27.840 07:22:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:27.840 07:22:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:27.840 07:22:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:27.840 07:22:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:27.840 07:22:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:27.840 07:22:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:28.098 07:22:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:28.098 07:22:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:28.098 07:22:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:28.357 07:22:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:28.357 07:22:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:28.357 07:22:02 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:11:28.357 07:22:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:28.357 07:22:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:28.357 07:22:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:28.357 07:22:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:28.357 07:22:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:28.357 07:22:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:28.357 07:22:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:28.357 07:22:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:28.357 07:22:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:28.357 07:22:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:28.616 07:22:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:28.875 [2024-07-12 07:22:02.611959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:28.875 [2024-07-12 07:22:02.688011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.875 [2024-07-12 07:22:02.688011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.133 [2024-07-12 07:22:02.766499] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:29.133 [2024-07-12 07:22:02.766606] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:31.665 07:22:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 124942 /var/tmp/spdk-nbd.sock 00:11:31.665 07:22:05 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 124942 ']' 00:11:31.665 07:22:05 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:31.665 07:22:05 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:31.665 07:22:05 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:31.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
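Editor's note: the app_repeat rounds traced above drive the full nbd data path: create two malloc bdevs, attach them as /dev/nbd0 and /dev/nbd1, write 1 MiB of random data to each, byte-compare it back, detach, and confirm the target reports zero devices. A condensed sketch of that round trip, using the rpc.py path, socket, dd/cmp invocations, and 20-try budget shown in the trace; the 0.1 s poll interval and /tmp scratch paths are assumptions (the real helpers live in nbd_common.sh and autotest_common.sh):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
tmp=/tmp/nbdrandtest

# Attach a bdev, wait until the kernel lists the nbd device in
# /proc/partitions, then prove it serves I/O with one O_DIRECT read.
start_and_probe() {
    local bdev=$1 nbd=$2 i
    $rpc nbd_start_disk "$bdev" "$nbd"
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$(basename "$nbd")" /proc/partitions && break
        sleep 0.1
    done
    dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /tmp/nbdtest)" != 0 ]
}

start_and_probe Malloc0 /dev/nbd0
start_and_probe Malloc1 /dev/nbd1

# Ask the target for its disk map and count /dev/nbd entries in the
# JSON; '|| true' keeps the pipeline alive when grep -c finds nothing.
count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 2 ]

# Write pass: stage 1 MiB of random data once, stream it to each device.
# oflag=direct bypasses the page cache so the bdev really sees the data.
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
done

# Verify pass: byte-compare the first 1 MiB of each device, then detach
# and wait for the kernel to drop the entry from /proc/partitions.
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$nbd"
    $rpc nbd_stop_disk "$nbd"
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$(basename "$nbd")" /proc/partitions || break
        sleep 0.1
    done
done
rm "$tmp"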
00:11:31.665 07:22:05 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:31.665 07:22:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:31.665 07:22:05 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:31.665 07:22:05 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:11:31.665 07:22:05 event.app_repeat -- event/event.sh@39 -- # killprocess 124942 00:11:31.665 07:22:05 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 124942 ']' 00:11:31.665 07:22:05 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 124942 00:11:31.665 07:22:05 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:11:31.665 07:22:05 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:31.665 07:22:05 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 124942 00:11:31.665 07:22:05 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:31.665 07:22:05 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:31.665 07:22:05 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 124942' 00:11:31.665 killing process with pid 124942 00:11:31.665 07:22:05 event.app_repeat -- common/autotest_common.sh@965 -- # kill 124942 00:11:31.665 07:22:05 event.app_repeat -- common/autotest_common.sh@970 -- # wait 124942 00:11:32.233 spdk_app_start is called in Round 0. 00:11:32.233 Shutdown signal received, stop current app iteration 00:11:32.233 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:11:32.233 spdk_app_start is called in Round 1. 00:11:32.233 Shutdown signal received, stop current app iteration 00:11:32.233 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:11:32.234 spdk_app_start is called in Round 2. 00:11:32.234 Shutdown signal received, stop current app iteration 00:11:32.234 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:11:32.234 spdk_app_start is called in Round 3. 00:11:32.234 Shutdown signal received, stop current app iteration 00:11:32.234 07:22:05 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:32.234 07:22:05 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:32.234 00:11:32.234 real 0m18.553s 00:11:32.234 user 0m39.894s 00:11:32.234 sys 0m3.725s 00:11:32.234 07:22:05 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:32.234 07:22:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:32.234 ************************************ 00:11:32.234 END TEST app_repeat 00:11:32.234 ************************************ 00:11:32.234 07:22:05 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:32.234 07:22:05 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:32.234 07:22:05 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:32.234 07:22:05 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:32.234 07:22:05 event -- common/autotest_common.sh@10 -- # set +x 00:11:32.234 ************************************ 00:11:32.234 START TEST cpu_locks 00:11:32.234 ************************************ 00:11:32.234 07:22:05 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:32.234 * Looking for test storage... 
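Editor's note: killprocess, as traced above for pid 124942, refuses to signal anything it does not own. A minimal sketch of the guard sequence seen in the trace (kill -0 liveness probe, uname gate, comm lookup, sudo check); the real helper's handling of sudo-wrapped processes is more involved than this early return:

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1            # the '[' -z ... ']' guard from the trace
    kill -0 "$pid" || return 1           # is the process still alive?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # proceed only when the target is not a sudo wrapper
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                          # reap the reactor and collect its status
}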
00:11:32.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:32.234 07:22:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:32.234 07:22:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:32.234 07:22:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:32.234 07:22:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:32.234 07:22:06 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:32.234 07:22:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:32.234 07:22:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:32.234 ************************************ 00:11:32.234 START TEST default_locks 00:11:32.234 ************************************ 00:11:32.234 07:22:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:11:32.234 07:22:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=125454 00:11:32.234 07:22:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 125454 00:11:32.234 07:22:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:32.234 07:22:06 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 125454 ']' 00:11:32.234 07:22:06 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.234 07:22:06 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:32.234 07:22:06 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.234 07:22:06 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:32.234 07:22:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:32.493 [2024-07-12 07:22:06.169002] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
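Editor's note: every cpu_locks subtest asserts lock ownership the same way, as the trace below shows for pid 125454: list the file locks held by the target and look for the per-core lock name. A one-liner sketch of that check:

locks_exist() {
    local pid=$1
    # spdk_tgt flocks a /var/tmp/spdk_cpu_lock_NNN file per claimed core
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}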
00:11:32.493 [2024-07-12 07:22:06.169201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125454 ] 00:11:32.493 [2024-07-12 07:22:06.308528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.752 [2024-07-12 07:22:06.382091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.319 07:22:07 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:33.319 07:22:07 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:11:33.319 07:22:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 125454 00:11:33.319 07:22:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 125454 00:11:33.319 07:22:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:33.578 07:22:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 125454 00:11:33.578 07:22:07 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 125454 ']' 00:11:33.578 07:22:07 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 125454 00:11:33.578 07:22:07 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:11:33.579 07:22:07 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:33.579 07:22:07 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 125454 00:11:33.579 07:22:07 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:33.579 07:22:07 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:33.579 07:22:07 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 125454' 00:11:33.579 killing process with pid 125454 00:11:33.579 07:22:07 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 125454 00:11:33.579 07:22:07 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 125454 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 125454 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 125454 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 125454 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 125454 ']' 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:34.515 07:22:08 
event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:34.515 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (125454) - No such process 00:11:34.515 ERROR: process (pid: 125454) is no longer running 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:34.515 00:11:34.515 real 0m2.023s 00:11:34.515 user 0m1.969s 00:11:34.515 sys 0m0.737s 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:34.515 07:22:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:34.515 ************************************ 00:11:34.515 END TEST default_locks 00:11:34.515 ************************************ 00:11:34.516 07:22:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:34.516 07:22:08 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:34.516 07:22:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:34.516 07:22:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:34.516 ************************************ 00:11:34.516 START TEST default_locks_via_rpc 00:11:34.516 ************************************ 00:11:34.516 07:22:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:11:34.516 07:22:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=125515 00:11:34.516 07:22:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:34.516 07:22:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 125515 00:11:34.516 07:22:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 125515 ']' 00:11:34.516 07:22:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.516 07:22:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:34.516 07:22:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.516 07:22:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:34.516 07:22:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.516 [2024-07-12 07:22:08.277392] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:11:34.516 [2024-07-12 07:22:08.277659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125515 ] 00:11:34.775 [2024-07-12 07:22:08.431982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.775 [2024-07-12 07:22:08.514213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.343 07:22:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:35.343 07:22:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:11:35.343 07:22:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:35.343 07:22:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.343 07:22:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.343 07:22:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.343 07:22:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:11:35.343 07:22:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:35.343 07:22:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:11:35.343 07:22:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:35.343 07:22:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:35.343 07:22:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.343 07:22:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.343 07:22:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.343 07:22:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 125515 00:11:35.343 07:22:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 125515 00:11:35.343 07:22:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:35.910 07:22:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 125515 00:11:35.910 07:22:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 125515 ']' 00:11:35.910 07:22:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 125515 00:11:35.910 07:22:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:11:35.910 07:22:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:35.910 07:22:09 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 125515 00:11:35.910 07:22:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:35.910 07:22:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:35.910 07:22:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 125515' 00:11:35.910 killing process with pid 125515 00:11:35.910 07:22:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 125515 00:11:35.910 07:22:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 125515 00:11:36.845 ************************************ 00:11:36.845 END TEST default_locks_via_rpc 00:11:36.845 ************************************ 00:11:36.845 00:11:36.845 real 0m2.171s 00:11:36.845 user 0m2.101s 00:11:36.845 sys 0m0.805s 00:11:36.845 07:22:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:36.845 07:22:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.845 07:22:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:36.845 07:22:10 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:36.845 07:22:10 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:36.845 07:22:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:36.845 ************************************ 00:11:36.845 START TEST non_locking_app_on_locked_coremask 00:11:36.845 ************************************ 00:11:36.845 07:22:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:11:36.845 07:22:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=125577 00:11:36.845 07:22:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:36.845 07:22:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 125577 /var/tmp/spdk.sock 00:11:36.845 07:22:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 125577 ']' 00:11:36.845 07:22:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.845 07:22:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:36.845 07:22:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.845 07:22:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:36.845 07:22:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:36.845 [2024-07-12 07:22:10.503945] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
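Editor's note: the default_locks_via_rpc run traced above exercises the same lock from the RPC side. The target starts with --disable-cpumask-locks, so no lock files exist, then locking is switched on at runtime with the framework_enable_cpumask_locks RPC seen in the trace. A sketch of the toggle, assuming the default /var/tmp/spdk.sock socket and a pid argument:

toggle_locks() {
    local pid=$1
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_disable_cpumask_locks          # release the per-core locks
    ! lslocks -p "$pid" | grep -q spdk_cpu_lock   # none may remain
    $rpc framework_enable_cpumask_locks           # take them again
    lslocks -p "$pid" | grep -q spdk_cpu_lock     # and now they must exist
}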
00:11:36.845 [2024-07-12 07:22:10.504145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125577 ] 00:11:36.845 [2024-07-12 07:22:10.646063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.845 [2024-07-12 07:22:10.725306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:37.806 07:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:37.806 07:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:11:37.806 07:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=125598 00:11:37.806 07:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:37.806 07:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 125598 /var/tmp/spdk2.sock 00:11:37.806 07:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 125598 ']' 00:11:37.806 07:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:37.806 07:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:37.806 07:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:37.806 07:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:37.806 07:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:37.806 [2024-07-12 07:22:11.575854] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:11:37.806 [2024-07-12 07:22:11.576602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125598 ] 00:11:38.064 [2024-07-12 07:22:11.729443] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
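Editor's note: what just happened in non_locking_app_on_locked_coremask, in miniature: pid 125577 holds the core-0 lock, yet pid 125598 starts on the same mask because --disable-cpumask-locks makes it skip acquisition entirely ("CPU core locks deactivated." above). A sketch of the launch pair, with the binary path and sockets from the trace; running it for real also needs the usual hugepage setup:

bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
$bin -m 0x1 &                  # takes the core-0 lock file
pid1=$!
$bin -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!                        # same core, no lock taken: starts fine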
00:11:38.064 [2024-07-12 07:22:11.729538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.064 [2024-07-12 07:22:11.885049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.631 07:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:38.631 07:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:11:38.631 07:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 125577 00:11:38.631 07:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 125577 00:11:38.631 07:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:39.207 07:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 125577 00:11:39.207 07:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 125577 ']' 00:11:39.207 07:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 125577 00:11:39.207 07:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:11:39.207 07:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:39.207 07:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 125577 00:11:39.207 07:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:39.207 07:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:39.207 killing process with pid 125577 00:11:39.207 07:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 125577' 00:11:39.207 07:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 125577 00:11:39.207 07:22:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 125577 00:11:40.579 07:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 125598 00:11:40.579 07:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 125598 ']' 00:11:40.579 07:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 125598 00:11:40.579 07:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:11:40.579 07:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:40.579 07:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 125598 00:11:40.579 07:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:40.579 07:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:40.579 killing process with pid 125598 00:11:40.579 07:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 125598' 00:11:40.579 
07:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 125598 00:11:40.579 07:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 125598 00:11:41.144 00:11:41.144 real 0m4.557s 00:11:41.144 user 0m4.616s 00:11:41.144 sys 0m1.459s 00:11:41.144 07:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:41.144 07:22:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:41.144 ************************************ 00:11:41.144 END TEST non_locking_app_on_locked_coremask 00:11:41.144 ************************************ 00:11:41.401 07:22:15 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:41.402 07:22:15 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:41.402 07:22:15 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:41.402 07:22:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:41.402 ************************************ 00:11:41.402 START TEST locking_app_on_unlocked_coremask 00:11:41.402 ************************************ 00:11:41.402 07:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:11:41.402 07:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=125672 00:11:41.402 07:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 125672 /var/tmp/spdk.sock 00:11:41.402 07:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 125672 ']' 00:11:41.402 07:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.402 07:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:41.402 07:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:41.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.402 07:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.402 07:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:41.402 07:22:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:41.402 [2024-07-12 07:22:15.140775] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:11:41.402 [2024-07-12 07:22:15.141038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125672 ] 00:11:41.661 [2024-07-12 07:22:15.298228] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
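Editor's note: locking_app_on_unlocked_coremask, starting here, flips the roles: the first target opts out of locking, so an ordinary second target on the same core can claim the lock for itself. Sketch of the expectation, under the same assumptions as the previous snippet:

bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
$bin -m 0x1 --disable-cpumask-locks & pid1=$!   # leaves core 0 unlocked
$bin -m 0x1 -r /var/tmp/spdk2.sock &  pid2=$!   # free to take the core-0 lock
# lslocks -p "$pid2" | grep -q spdk_cpu_lock    # the *second* pid owns it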
00:11:41.661 [2024-07-12 07:22:15.298316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.661 [2024-07-12 07:22:15.376968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.230 07:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:42.230 07:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:11:42.230 07:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=125693 00:11:42.230 07:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:42.230 07:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 125693 /var/tmp/spdk2.sock 00:11:42.230 07:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 125693 ']' 00:11:42.230 07:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:42.230 07:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:42.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:42.230 07:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:42.230 07:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:42.230 07:22:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:42.489 [2024-07-12 07:22:16.146293] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:11:42.489 [2024-07-12 07:22:16.146628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125693 ] 00:11:42.489 [2024-07-12 07:22:16.297712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.748 [2024-07-12 07:22:16.495570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.316 07:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:43.317 07:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:11:43.317 07:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 125693 00:11:43.317 07:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 125693 00:11:43.317 07:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:43.884 07:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 125672 00:11:43.884 07:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 125672 ']' 00:11:43.884 07:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 125672 00:11:43.884 07:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:11:43.884 07:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:43.884 07:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 125672 00:11:43.884 07:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:43.884 killing process with pid 125672 00:11:43.884 07:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:43.884 07:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 125672' 00:11:43.884 07:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 125672 00:11:43.884 07:22:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 125672 00:11:45.262 07:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 125693 00:11:45.262 07:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 125693 ']' 00:11:45.262 07:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 125693 00:11:45.262 07:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:11:45.262 07:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:45.262 07:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 125693 00:11:45.262 07:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:45.262 07:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:45.262 killing process with pid 125693 00:11:45.262 07:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 125693' 00:11:45.262 07:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 125693 00:11:45.262 07:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 125693 00:11:45.830 00:11:45.830 real 0m4.539s 00:11:45.830 user 0m4.598s 00:11:45.830 sys 0m1.397s 00:11:45.830 07:22:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:45.830 07:22:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:45.830 ************************************ 00:11:45.830 END TEST locking_app_on_unlocked_coremask 00:11:45.830 ************************************ 00:11:45.830 07:22:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:45.830 07:22:19 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:45.830 07:22:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:45.830 07:22:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:45.831 ************************************ 00:11:45.831 START TEST locking_app_on_locked_coremask 00:11:45.831 ************************************ 00:11:45.831 07:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:11:45.831 07:22:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=125776 00:11:45.831 07:22:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 125776 /var/tmp/spdk.sock 00:11:45.831 07:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 125776 ']' 00:11:45.831 07:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.831 07:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:45.831 07:22:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:45.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.831 07:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.831 07:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:45.831 07:22:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:46.089 [2024-07-12 07:22:19.742700] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
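Editor's note: locking_app_on_locked_coremask, beginning here, is the negative case: with both targets taking locks, the second must die during startup, and the suite wraps that expectation in its NOT helper, which succeeds exactly when the wrapped command fails. A stripped-down sketch of the helper (the real one also validates its argument via type -t, as the trace shows) and how this test uses it:

NOT() {
    # succeed exactly when the wrapped command fails
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}

NOT false   # passes: false exited non-zero
# In the suite: NOT waitforlisten 125797 /var/tmp/spdk2.sock passes
# because, as the trace below records, the second target exits with
#   Cannot create lock on core 0, probably process 125776 has claimed it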
00:11:46.089 [2024-07-12 07:22:19.742985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125776 ] 00:11:46.090 [2024-07-12 07:22:19.899691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.348 [2024-07-12 07:22:19.977073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.925 07:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:46.925 07:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:11:46.925 07:22:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=125797 00:11:46.925 07:22:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 125797 /var/tmp/spdk2.sock 00:11:46.925 07:22:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:46.925 07:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:11:46.925 07:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 125797 /var/tmp/spdk2.sock 00:11:46.925 07:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:11:46.925 07:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.925 07:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:11:46.925 07:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.925 07:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 125797 /var/tmp/spdk2.sock 00:11:46.925 07:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 125797 ']' 00:11:46.925 07:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:46.925 07:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:46.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:46.925 07:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:46.925 07:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:46.925 07:22:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:46.925 [2024-07-12 07:22:20.714665] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:11:46.925 [2024-07-12 07:22:20.714946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125797 ] 00:11:47.184 [2024-07-12 07:22:20.865605] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 125776 has claimed it. 00:11:47.184 [2024-07-12 07:22:20.865709] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:47.751 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (125797) - No such process 00:11:47.751 ERROR: process (pid: 125797) is no longer running 00:11:47.751 07:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:47.751 07:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:11:47.751 07:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:11:47.751 07:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:47.751 07:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:47.751 07:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:47.751 07:22:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 125776 00:11:47.751 07:22:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 125776 00:11:47.751 07:22:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:47.751 07:22:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 125776 00:11:47.751 07:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 125776 ']' 00:11:47.751 07:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 125776 00:11:47.751 07:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:11:48.011 07:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:48.011 07:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 125776 00:11:48.011 07:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:48.011 07:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:48.011 killing process with pid 125776 00:11:48.011 07:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 125776' 00:11:48.011 07:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 125776 00:11:48.011 07:22:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 125776 00:11:48.579 00:11:48.579 real 0m2.684s 00:11:48.579 user 0m2.784s 00:11:48.579 sys 0m0.895s 00:11:48.579 07:22:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:48.579 ************************************ 00:11:48.579 END TEST 
locking_app_on_locked_coremask 00:11:48.579 ************************************ 00:11:48.579 07:22:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:48.579 07:22:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:48.579 07:22:22 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:48.579 07:22:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:48.579 07:22:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:48.579 ************************************ 00:11:48.579 START TEST locking_overlapped_coremask 00:11:48.579 ************************************ 00:11:48.579 07:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:11:48.579 07:22:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=125848 00:11:48.579 07:22:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:48.579 07:22:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 125848 /var/tmp/spdk.sock 00:11:48.579 07:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 125848 ']' 00:11:48.579 07:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.579 07:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:48.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.579 07:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.579 07:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:48.579 07:22:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:48.838 [2024-07-12 07:22:22.500836] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
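Editor's note: locking_overlapped_coremask starts a three-core target and then, below, a second one whose mask overlaps it on a single core. The two masks decode as follows; a quick arithmetic check of the overlap:

# 0x7  = 0b00111 -> cores 0,1,2   (first target)
# 0x1c = 0b11100 -> cores 2,3,4   (second target, refused below)
printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2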
00:11:48.838 [2024-07-12 07:22:22.501113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125848 ] 00:11:48.838 [2024-07-12 07:22:22.665990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:49.097 [2024-07-12 07:22:22.750462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.097 [2024-07-12 07:22:22.750641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.097 [2024-07-12 07:22:22.750645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.663 07:22:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:49.663 07:22:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:11:49.663 07:22:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=125871 00:11:49.663 07:22:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 125871 /var/tmp/spdk2.sock 00:11:49.663 07:22:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:11:49.663 07:22:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:49.663 07:22:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 125871 /var/tmp/spdk2.sock 00:11:49.663 07:22:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:11:49.663 07:22:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:49.663 07:22:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:11:49.663 07:22:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:49.663 07:22:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 125871 /var/tmp/spdk2.sock 00:11:49.663 07:22:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 125871 ']' 00:11:49.663 07:22:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:49.663 07:22:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:49.664 07:22:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:49.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:49.664 07:22:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:49.664 07:22:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:49.664 [2024-07-12 07:22:23.465479] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:11:49.664 [2024-07-12 07:22:23.465905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125871 ] 00:11:49.921 [2024-07-12 07:22:23.629557] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 125848 has claimed it. 00:11:49.921 [2024-07-12 07:22:23.629669] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:50.485 ERROR: process (pid: 125871) is no longer running 00:11:50.485 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (125871) - No such process 00:11:50.485 07:22:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:50.485 07:22:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:11:50.485 07:22:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:11:50.485 07:22:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:50.485 07:22:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:50.485 07:22:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:50.485 07:22:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:50.485 07:22:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:50.485 07:22:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:50.485 07:22:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:50.485 07:22:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 125848 00:11:50.485 07:22:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 125848 ']' 00:11:50.485 07:22:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 125848 00:11:50.485 07:22:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:11:50.485 07:22:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:50.485 07:22:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 125848 00:11:50.486 07:22:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:50.486 killing process with pid 125848 00:11:50.486 07:22:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:50.486 07:22:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 125848' 00:11:50.486 07:22:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 125848 00:11:50.486 07:22:24 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@970 -- # wait 125848 00:11:51.051 00:11:51.051 real 0m2.474s 00:11:51.051 user 0m6.338s 00:11:51.051 sys 0m0.739s 00:11:51.051 ************************************ 00:11:51.051 END TEST locking_overlapped_coremask 00:11:51.051 ************************************ 00:11:51.051 07:22:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:51.051 07:22:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:51.310 07:22:24 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:51.310 07:22:24 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:51.310 07:22:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:51.310 07:22:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:51.310 ************************************ 00:11:51.310 START TEST locking_overlapped_coremask_via_rpc 00:11:51.310 ************************************ 00:11:51.310 07:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:11:51.310 07:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=125923 00:11:51.310 07:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 125923 /var/tmp/spdk.sock 00:11:51.310 07:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 125923 ']' 00:11:51.310 07:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:51.310 07:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.310 07:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:51.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.310 07:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.310 07:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:51.311 07:22:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.311 [2024-07-12 07:22:25.050250] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:11:51.311 [2024-07-12 07:22:25.051255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125923 ] 00:11:51.576 [2024-07-12 07:22:25.215764] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
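Because this target was started with --disable-cpumask-locks, the usual startup-time core claiming is skipped (hence the "CPU core locks deactivated." notice); the lock files only appear once framework_enable_cpumask_locks is invoked over RPC. Launched by hand it would look roughly like:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  # at this point no /var/tmp/spdk_cpu_lock_* files exist yet;
  # they are created by the framework_enable_cpumask_locks RPC later in this test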
00:11:51.576 [2024-07-12 07:22:25.215846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:51.576 [2024-07-12 07:22:25.298503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.576 [2024-07-12 07:22:25.298649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.576 [2024-07-12 07:22:25.298649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.144 07:22:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:52.144 07:22:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:11:52.144 07:22:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=125946 00:11:52.144 07:22:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 125946 /var/tmp/spdk2.sock 00:11:52.144 07:22:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 125946 ']' 00:11:52.144 07:22:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:52.144 07:22:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:52.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:52.144 07:22:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:52.144 07:22:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:52.144 07:22:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.144 07:22:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:52.403 [2024-07-12 07:22:26.080064] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:11:52.403 [2024-07-12 07:22:26.080851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125946 ] 00:11:52.403 [2024-07-12 07:22:26.241553] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:52.403 [2024-07-12 07:22:26.241626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:52.662 [2024-07-12 07:22:26.428526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.662 [2024-07-12 07:22:26.428649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.662 [2024-07-12 07:22:26.428653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.596 [2024-07-12 07:22:27.133501] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 125923 has claimed it. 
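The sequence being exercised: the first target claims its cores over RPC, then the same RPC against the second target's socket must fail because core 2 is already locked. With scripts/rpc.py (paths assumed from this run) that is roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc framework_enable_cpumask_locks                         # first target (0x7): locks cores 0-2
  $rpc -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target (0x1c): fails on core 2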
00:11:53.596 request: 00:11:53.596 { 00:11:53.596 "method": "framework_enable_cpumask_locks", 00:11:53.596 "req_id": 1 00:11:53.596 } 00:11:53.596 Got JSON-RPC error response 00:11:53.596 response: 00:11:53.596 { 00:11:53.596 "code": -32603, 00:11:53.596 "message": "Failed to claim CPU core: 2" 00:11:53.596 } 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 125923 /var/tmp/spdk.sock 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 125923 ']' 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 125946 /var/tmp/spdk2.sock 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 125946 ']' 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:53.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
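The response above is standard JSON-RPC 2.0 framing; -32603 is the spec's reserved "Internal error" code, with the core-claim failure carried in the message field. The payload from this log parses cleanly:

  echo '{"code": -32603, "message": "Failed to claim CPU core: 2"}' | jq -r .message
  # Failed to claim CPU core: 2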
00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:53.596 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.853 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:53.853 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:11:53.853 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:53.853 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:53.853 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:53.853 ************************************ 00:11:53.853 END TEST locking_overlapped_coremask_via_rpc 00:11:53.853 ************************************ 00:11:53.853 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:53.853 00:11:53.853 real 0m2.719s 00:11:53.853 user 0m1.363s 00:11:53.853 sys 0m0.215s 00:11:53.853 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:53.853 07:22:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.853 07:22:27 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:53.853 07:22:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 125923 ]] 00:11:53.853 07:22:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 125923 00:11:53.853 07:22:27 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 125923 ']' 00:11:53.853 07:22:27 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 125923 00:11:53.853 07:22:27 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:11:53.853 07:22:27 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:53.853 07:22:27 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 125923 00:11:53.853 07:22:27 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:53.853 07:22:27 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:53.853 07:22:27 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 125923' 00:11:53.853 killing process with pid 125923 00:11:53.853 07:22:27 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 125923 00:11:53.853 07:22:27 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 125923 00:11:54.787 07:22:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 125946 ]] 00:11:54.787 07:22:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 125946 00:11:54.787 07:22:28 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 125946 ']' 00:11:54.787 07:22:28 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 125946 00:11:54.787 07:22:28 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:11:54.787 07:22:28 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
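check_remaining_locks, visible in the trace above, verifies that exactly the expected lock files survive: glob what is on disk and compare it against a brace expansion of the cores in the mask. Reduced to its essence:

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for mask 0x7
  [[ ${locks[*]} == "${locks_expected[*]}" ]] || echo "unexpected lock files: ${locks[*]}"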
00:11:54.787 07:22:28 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 125946 00:11:54.787 07:22:28 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:11:54.787 07:22:28 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:11:54.787 07:22:28 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 125946' 00:11:54.787 killing process with pid 125946 00:11:54.787 07:22:28 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 125946 00:11:54.787 07:22:28 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 125946 00:11:55.354 07:22:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:55.354 07:22:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:55.354 07:22:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 125923 ]] 00:11:55.354 07:22:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 125923 00:11:55.354 07:22:29 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 125923 ']' 00:11:55.354 07:22:29 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 125923 00:11:55.354 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (125923) - No such process 00:11:55.354 Process with pid 125923 is not found 00:11:55.354 07:22:29 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 125923 is not found' 00:11:55.354 07:22:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 125946 ]] 00:11:55.354 07:22:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 125946 00:11:55.354 07:22:29 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 125946 ']' 00:11:55.354 07:22:29 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 125946 00:11:55.354 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (125946) - No such process 00:11:55.354 Process with pid 125946 is not found 00:11:55.354 07:22:29 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 125946 is not found' 00:11:55.354 07:22:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:55.354 00:11:55.354 real 0m23.172s 00:11:55.354 user 0m38.217s 00:11:55.354 sys 0m7.551s 00:11:55.354 07:22:29 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:55.354 07:22:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:55.354 ************************************ 00:11:55.354 END TEST cpu_locks 00:11:55.354 ************************************ 00:11:55.354 00:11:55.354 real 0m52.678s 00:11:55.354 user 1m37.752s 00:11:55.354 sys 0m12.489s 00:11:55.354 07:22:29 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:55.354 ************************************ 00:11:55.354 END TEST event 00:11:55.354 ************************************ 00:11:55.354 07:22:29 event -- common/autotest_common.sh@10 -- # set +x 00:11:55.611 07:22:29 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:55.611 07:22:29 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:55.611 07:22:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:55.611 07:22:29 -- common/autotest_common.sh@10 -- # set +x 00:11:55.611 ************************************ 00:11:55.611 START TEST thread 00:11:55.611 ************************************ 00:11:55.611 07:22:29 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:55.611 * Looking for test 
storage... 00:11:55.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:11:55.611 07:22:29 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:55.611 07:22:29 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:11:55.611 07:22:29 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:55.611 07:22:29 thread -- common/autotest_common.sh@10 -- # set +x 00:11:55.611 ************************************ 00:11:55.611 START TEST thread_poller_perf 00:11:55.611 ************************************ 00:11:55.611 07:22:29 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:55.611 [2024-07-12 07:22:29.410410] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:11:55.611 [2024-07-12 07:22:29.410629] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126087 ] 00:11:55.869 [2024-07-12 07:22:29.560441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.869 [2024-07-12 07:22:29.641762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.869 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:11:57.269 ====================================== 00:11:57.269 busy:2110504572 (cyc) 00:11:57.269 total_run_count: 360000 00:11:57.269 tsc_hz: 2100000000 (cyc) 00:11:57.269 ====================================== 00:11:57.269 poller_cost: 5862 (cyc), 2791 (nsec) 00:11:57.269 00:11:57.269 real 0m1.461s 00:11:57.269 user 0m1.246s 00:11:57.269 sys 0m0.116s 00:11:57.269 07:22:30 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:57.269 07:22:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:57.269 ************************************ 00:11:57.269 END TEST thread_poller_perf 00:11:57.269 ************************************ 00:11:57.269 07:22:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:57.269 07:22:30 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:11:57.269 07:22:30 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:57.269 07:22:30 thread -- common/autotest_common.sh@10 -- # set +x 00:11:57.269 ************************************ 00:11:57.269 START TEST thread_poller_perf 00:11:57.269 ************************************ 00:11:57.269 07:22:30 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:57.269 [2024-07-12 07:22:30.940830] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
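poller_cost is simply busy cycles divided by total_run_count, converted to nanoseconds via the TSC rate: 2110504572 / 360000 ≈ 5862 cycles, and 5862 / 2.1 cycles-per-ns ≈ 2791 ns, matching the summary above. The same arithmetic in awk:

  awk 'BEGIN { busy = 2110504572; runs = 360000; tsc_hz = 2100000000
               cyc = busy / runs
               printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / tsc_hz }'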
00:11:57.269 [2024-07-12 07:22:30.941135] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126137 ] 00:11:57.269 [2024-07-12 07:22:31.099187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.527 [2024-07-12 07:22:31.174110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.527 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:11:58.901 ====================================== 00:11:58.901 busy:2103437284 (cyc) 00:11:58.901 total_run_count: 4854000 00:11:58.901 tsc_hz: 2100000000 (cyc) 00:11:58.901 ====================================== 00:11:58.901 poller_cost: 433 (cyc), 206 (nsec) 00:11:58.901 00:11:58.901 real 0m1.458s 00:11:58.901 user 0m1.243s 00:11:58.901 sys 0m0.115s 00:11:58.901 07:22:32 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:58.901 07:22:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:58.901 ************************************ 00:11:58.901 END TEST thread_poller_perf 00:11:58.901 ************************************ 00:11:58.901 07:22:32 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:11:58.901 07:22:32 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:11:58.901 07:22:32 thread -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:58.901 07:22:32 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:58.901 07:22:32 thread -- common/autotest_common.sh@10 -- # set +x 00:11:58.901 ************************************ 00:11:58.901 START TEST thread_spdk_lock 00:11:58.901 ************************************ 00:11:58.901 07:22:32 thread.thread_spdk_lock -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:11:58.901 [2024-07-12 07:22:32.464151] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
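The only difference from the previous run is -l, the poller period in microseconds: -l 1 registers timed pollers, -l 0 registers continuously polled ones, which is why the per-call cost drops from 5862 to 433 cycles (2103437284 / 4854000 ≈ 433). Running both variants back to back, with the binary path taken from this run:

  perf=/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf
  for period_us in 1 0; do
      "$perf" -b 1000 -l "$period_us" -t 1
  done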
00:11:58.901 [2024-07-12 07:22:32.464380] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126173 ] 00:11:58.901 [2024-07-12 07:22:32.610767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:58.901 [2024-07-12 07:22:32.698129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.901 [2024-07-12 07:22:32.698132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.467 [2024-07-12 07:22:33.207930] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:59.467 [2024-07-12 07:22:33.208064] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:11:59.467 [2024-07-12 07:22:33.208112] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x55f293722d80 00:11:59.467 [2024-07-12 07:22:33.209544] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:59.467 [2024-07-12 07:22:33.209647] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:59.467 [2024-07-12 07:22:33.209688] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:59.725 Starting test contend 00:11:59.725 Worker Delay Wait us Hold us Total us 00:11:59.725 0 3 141739 191331 333070 00:11:59.725 1 5 63488 291985 355473 00:11:59.725 PASS test contend 00:11:59.725 Starting test hold_by_poller 00:11:59.725 PASS test hold_by_poller 00:11:59.725 Starting test hold_by_message 00:11:59.725 PASS test hold_by_message 00:11:59.725 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:11:59.725 100014 assertions passed 00:11:59.725 0 assertions failed 00:11:59.725 00:11:59.725 real 0m0.970s 00:11:59.725 user 0m1.266s 00:11:59.725 sys 0m0.116s 00:11:59.725 07:22:33 thread.thread_spdk_lock -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:59.725 07:22:33 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:11:59.725 ************************************ 00:11:59.725 END TEST thread_spdk_lock 00:11:59.725 ************************************ 00:11:59.725 00:11:59.725 real 0m4.199s 00:11:59.725 user 0m3.871s 00:11:59.725 sys 0m0.556s 00:11:59.725 07:22:33 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:59.725 07:22:33 thread -- common/autotest_common.sh@10 -- # set +x 00:11:59.725 ************************************ 00:11:59.725 END TEST thread 00:11:59.725 ************************************ 00:11:59.725 07:22:33 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:11:59.725 07:22:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:59.725 07:22:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:59.725 07:22:33 -- common/autotest_common.sh@10 -- # set +x 00:11:59.725 
************************************ 00:11:59.725 START TEST accel 00:11:59.725 ************************************ 00:11:59.725 07:22:33 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:11:59.984 * Looking for test storage... 00:11:59.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:59.984 07:22:33 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:11:59.984 07:22:33 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:11:59.984 07:22:33 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:59.984 07:22:33 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=126253 00:11:59.984 07:22:33 accel -- accel/accel.sh@63 -- # waitforlisten 126253 00:11:59.984 07:22:33 accel -- common/autotest_common.sh@827 -- # '[' -z 126253 ']' 00:11:59.984 07:22:33 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.984 07:22:33 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:59.984 07:22:33 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.984 07:22:33 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:11:59.984 07:22:33 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:59.984 07:22:33 accel -- accel/accel.sh@61 -- # build_accel_config 00:11:59.984 07:22:33 accel -- common/autotest_common.sh@10 -- # set +x 00:11:59.984 07:22:33 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:59.984 07:22:33 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:59.984 07:22:33 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:59.984 07:22:33 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:59.984 07:22:33 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:59.984 07:22:33 accel -- accel/accel.sh@40 -- # local IFS=, 00:11:59.984 07:22:33 accel -- accel/accel.sh@41 -- # jq -r . 00:11:59.984 [2024-07-12 07:22:33.713622] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:11:59.984 [2024-07-12 07:22:33.714471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126253 ] 00:11:59.984 [2024-07-12 07:22:33.867693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.243 [2024-07-12 07:22:33.955682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.808 07:22:34 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:00.808 07:22:34 accel -- common/autotest_common.sh@860 -- # return 0 00:12:00.808 07:22:34 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:12:00.808 07:22:34 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:12:00.808 07:22:34 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:12:00.808 07:22:34 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:12:00.808 07:22:34 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:12:00.808 07:22:34 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:12:00.808 07:22:34 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.808 07:22:34 accel -- common/autotest_common.sh@10 -- # set +x 00:12:00.808 07:22:34 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:12:00.808 07:22:34 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.067 07:22:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # IFS== 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:01.067 07:22:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:01.067 07:22:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # IFS== 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:01.067 07:22:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:01.067 07:22:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # IFS== 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:01.067 07:22:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:01.067 07:22:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # IFS== 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:01.067 07:22:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:01.067 07:22:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # IFS== 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:01.067 07:22:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:01.067 07:22:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # IFS== 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:01.067 07:22:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:01.067 07:22:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # IFS== 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:01.067 07:22:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:01.067 07:22:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # IFS== 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:01.067 07:22:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:01.067 07:22:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # IFS== 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:01.067 07:22:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:01.067 07:22:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # IFS== 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:01.067 07:22:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:01.067 07:22:34 accel -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # IFS== 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:01.067 07:22:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:01.067 07:22:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # IFS== 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:01.067 07:22:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:01.067 07:22:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # IFS== 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:01.067 07:22:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:01.067 07:22:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # IFS== 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:01.067 07:22:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:01.067 07:22:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # IFS== 00:12:01.067 07:22:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:01.067 07:22:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:01.067 07:22:34 accel -- accel/accel.sh@75 -- # killprocess 126253 00:12:01.067 07:22:34 accel -- common/autotest_common.sh@946 -- # '[' -z 126253 ']' 00:12:01.067 07:22:34 accel -- common/autotest_common.sh@950 -- # kill -0 126253 00:12:01.067 07:22:34 accel -- common/autotest_common.sh@951 -- # uname 00:12:01.067 07:22:34 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:01.067 07:22:34 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 126253 00:12:01.067 07:22:34 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:01.067 07:22:34 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:01.067 07:22:34 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 126253' 00:12:01.067 killing process with pid 126253 00:12:01.067 07:22:34 accel -- common/autotest_common.sh@965 -- # kill 126253 00:12:01.067 07:22:34 accel -- common/autotest_common.sh@970 -- # wait 126253 00:12:01.633 07:22:35 accel -- accel/accel.sh@76 -- # trap - ERR 00:12:01.633 07:22:35 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:12:01.633 07:22:35 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:01.633 07:22:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:01.633 07:22:35 accel -- common/autotest_common.sh@10 -- # set +x 00:12:01.633 07:22:35 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:12:01.633 07:22:35 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:12:01.633 07:22:35 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:12:01.633 07:22:35 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:01.633 07:22:35 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:01.633 07:22:35 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:01.633 07:22:35 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:01.633 07:22:35 accel.accel_help -- accel/accel.sh@36 
-- # [[ -n '' ]] 00:12:01.633 07:22:35 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:12:01.633 07:22:35 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:12:01.891 07:22:35 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:01.891 07:22:35 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:12:01.891 07:22:35 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:12:01.891 07:22:35 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:12:01.891 07:22:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:01.891 07:22:35 accel -- common/autotest_common.sh@10 -- # set +x 00:12:01.891 ************************************ 00:12:01.891 START TEST accel_missing_filename 00:12:01.891 ************************************ 00:12:01.891 07:22:35 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:12:01.891 07:22:35 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:12:01.891 07:22:35 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:12:01.891 07:22:35 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:12:01.891 07:22:35 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:01.891 07:22:35 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:12:01.891 07:22:35 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:01.891 07:22:35 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:12:01.891 07:22:35 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:12:01.891 07:22:35 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:12:01.891 07:22:35 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:01.891 07:22:35 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:01.891 07:22:35 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:01.891 07:22:35 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:01.891 07:22:35 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:01.891 07:22:35 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:12:01.891 07:22:35 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:12:01.891 [2024-07-12 07:22:35.626605] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:01.891 [2024-07-12 07:22:35.626888] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126323 ] 00:12:02.150 [2024-07-12 07:22:35.783335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.150 [2024-07-12 07:22:35.860288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.150 [2024-07-12 07:22:35.941934] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:02.409 [2024-07-12 07:22:36.065304] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:12:02.409 A filename is required. 
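run_test ... NOT accel_perf -t 1 -w compress asserts that the command fails: compress needs an input file via -l, so "A filename is required." is the expected, passing outcome. NOT (from autotest_common.sh) inverts the exit status; a simplified sketch of that helper, without the es bookkeeping seen in the trace:

  NOT() {
      if "$@"; then
          return 1   # command unexpectedly succeeded -> test failure
      fi
      return 0       # command failed, which is what the test requires
  }
  NOT /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress && echo 'negative test passed'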
00:12:02.409 07:22:36 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:12:02.409 07:22:36 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:02.409 07:22:36 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:12:02.409 07:22:36 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:12:02.409 07:22:36 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:12:02.409 07:22:36 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:02.409 00:12:02.409 real 0m0.674s 00:12:02.409 user 0m0.405s 00:12:02.409 sys 0m0.217s 00:12:02.409 07:22:36 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:02.409 07:22:36 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:12:02.409 ************************************ 00:12:02.409 END TEST accel_missing_filename 00:12:02.409 ************************************ 00:12:02.667 07:22:36 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:02.667 07:22:36 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:12:02.667 07:22:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:02.667 07:22:36 accel -- common/autotest_common.sh@10 -- # set +x 00:12:02.667 ************************************ 00:12:02.667 START TEST accel_compress_verify 00:12:02.667 ************************************ 00:12:02.667 07:22:36 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:02.667 07:22:36 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:12:02.667 07:22:36 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:02.667 07:22:36 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:12:02.667 07:22:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.667 07:22:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:12:02.667 07:22:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.667 07:22:36 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:02.667 07:22:36 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:02.667 07:22:36 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:12:02.667 07:22:36 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:02.667 07:22:36 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:02.667 07:22:36 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:02.667 07:22:36 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:02.667 07:22:36 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:02.667 07:22:36 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:12:02.667 07:22:36 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:12:02.667 [2024-07-12 07:22:36.369564] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:02.667 [2024-07-12 07:22:36.370430] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126355 ] 00:12:02.667 [2024-07-12 07:22:36.526497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.925 [2024-07-12 07:22:36.608974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.925 [2024-07-12 07:22:36.690116] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:03.183 [2024-07-12 07:22:36.813656] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:12:03.183 00:12:03.183 Compression does not support the verify option, aborting. 00:12:03.183 07:22:37 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:12:03.183 07:22:37 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:03.183 07:22:37 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:12:03.183 07:22:37 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:12:03.183 07:22:37 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:12:03.183 07:22:37 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:03.183 00:12:03.183 real 0m0.680s 00:12:03.183 user 0m0.409s 00:12:03.183 sys 0m0.205s 00:12:03.183 07:22:37 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:03.183 07:22:37 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:12:03.183 ************************************ 00:12:03.183 END TEST accel_compress_verify 00:12:03.183 ************************************ 00:12:03.183 07:22:37 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:12:03.183 07:22:37 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:12:03.183 07:22:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:03.183 07:22:37 accel -- common/autotest_common.sh@10 -- # set +x 00:12:03.441 ************************************ 00:12:03.441 START TEST accel_wrong_workload 00:12:03.441 ************************************ 00:12:03.441 07:22:37 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:12:03.441 07:22:37 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:12:03.442 07:22:37 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:12:03.442 07:22:37 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:12:03.442 07:22:37 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.442 07:22:37 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:12:03.442 07:22:37 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.442 07:22:37 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:12:03.442 07:22:37 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:12:03.442 07:22:37 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:12:03.442 07:22:37 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:03.442 07:22:37 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:03.442 07:22:37 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:03.442 07:22:37 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:03.442 07:22:37 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:03.442 07:22:37 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:12:03.442 07:22:37 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:12:03.442 Unsupported workload type: foobar 00:12:03.442 [2024-07-12 07:22:37.101443] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:12:03.442 accel_perf options: 00:12:03.442 [-h help message] 00:12:03.442 [-q queue depth per core] 00:12:03.442 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:03.442 [-T number of threads per core 00:12:03.442 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:03.442 [-t time in seconds] 00:12:03.442 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:03.442 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:12:03.442 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:03.442 [-l for compress/decompress workloads, name of uncompressed input file 00:12:03.442 [-S for crc32c workload, use this seed value (default 0) 00:12:03.442 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:03.442 [-f for fill workload, use this BYTE value (default 255) 00:12:03.442 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:03.442 [-y verify result if this switch is on] 00:12:03.442 [-a tasks to allocate per core (default: same value as -q)] 00:12:03.442 Can be used to spread operations across a wider range of memory. 
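The help text above enumerates the accepted -w workloads, so foobar is rejected up front. A well-formed invocation built only from options in that help output would be, for instance:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -q 64 -o 4096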
00:12:03.442 07:22:37 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:12:03.442 07:22:37 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:03.442 07:22:37 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:03.442 07:22:37 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:03.442 00:12:03.442 real 0m0.066s 00:12:03.442 user 0m0.070s 00:12:03.442 sys 0m0.045s 00:12:03.442 07:22:37 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:03.442 07:22:37 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:12:03.442 ************************************ 00:12:03.442 END TEST accel_wrong_workload 00:12:03.442 ************************************ 00:12:03.442 07:22:37 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:12:03.442 07:22:37 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:12:03.442 07:22:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:03.442 07:22:37 accel -- common/autotest_common.sh@10 -- # set +x 00:12:03.442 ************************************ 00:12:03.442 START TEST accel_negative_buffers 00:12:03.442 ************************************ 00:12:03.442 07:22:37 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:12:03.442 07:22:37 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:12:03.442 07:22:37 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:12:03.442 07:22:37 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:12:03.442 07:22:37 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.442 07:22:37 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:12:03.442 07:22:37 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.442 07:22:37 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:12:03.442 07:22:37 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:12:03.442 07:22:37 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:12:03.442 07:22:37 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:03.442 07:22:37 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:03.442 07:22:37 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:03.442 07:22:37 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:03.442 07:22:37 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:03.442 07:22:37 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:12:03.442 07:22:37 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:12:03.442 -x option must be non-negative. 
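Per the same help text, -x is the number of xor source buffers with a minimum of 2, so the -1 passed here is rejected before any work is queued. The smallest valid form of this run's command line would be:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2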
00:12:03.442 [2024-07-12 07:22:37.230948] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:12:03.442 accel_perf options: 00:12:03.442 [-h help message] 00:12:03.442 [-q queue depth per core] 00:12:03.442 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:03.442 [-T number of threads per core 00:12:03.442 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:03.442 [-t time in seconds] 00:12:03.442 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:03.442 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:12:03.442 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:03.442 [-l for compress/decompress workloads, name of uncompressed input file 00:12:03.442 [-S for crc32c workload, use this seed value (default 0) 00:12:03.442 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:03.442 [-f for fill workload, use this BYTE value (default 255) 00:12:03.442 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:03.442 [-y verify result if this switch is on] 00:12:03.442 [-a tasks to allocate per core (default: same value as -q)] 00:12:03.442 Can be used to spread operations across a wider range of memory. 00:12:03.442 07:22:37 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:12:03.442 07:22:37 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:03.442 07:22:37 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:03.442 07:22:37 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:03.442 00:12:03.442 real 0m0.066s 00:12:03.442 user 0m0.075s 00:12:03.442 sys 0m0.037s 00:12:03.442 07:22:37 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:03.442 07:22:37 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:12:03.442 ************************************ 00:12:03.442 END TEST accel_negative_buffers 00:12:03.442 ************************************ 00:12:03.442 07:22:37 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:12:03.442 07:22:37 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:12:03.442 07:22:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:03.442 07:22:37 accel -- common/autotest_common.sh@10 -- # set +x 00:12:03.442 ************************************ 00:12:03.442 START TEST accel_crc32c 00:12:03.442 ************************************ 00:12:03.442 07:22:37 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:12:03.442 07:22:37 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:12:03.442 07:22:37 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:12:03.442 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.442 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.442 07:22:37 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:12:03.701 07:22:37 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:12:03.701 07:22:37 accel.accel_crc32c -- 
accel/accel.sh@12 -- # build_accel_config 00:12:03.701 07:22:37 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:03.701 07:22:37 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:03.701 07:22:37 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:03.701 07:22:37 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:03.701 07:22:37 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:03.701 07:22:37 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:12:03.701 07:22:37 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:12:03.701 [2024-07-12 07:22:37.355814] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:03.701 [2024-07-12 07:22:37.356079] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126438 ] 00:12:03.701 [2024-07-12 07:22:37.510121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.959 [2024-07-12 07:22:37.610038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.959 07:22:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:03.959 07:22:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.959 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.959 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@20 -- 
# val='4096 bytes' 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:03.960 07:22:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 
00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:05.421 07:22:39 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:05.421 00:12:05.421 real 0m1.702s 00:12:05.421 user 0m1.431s 00:12:05.421 sys 0m0.207s 00:12:05.421 07:22:39 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:05.421 ************************************ 00:12:05.421 END TEST accel_crc32c 00:12:05.421 ************************************ 00:12:05.421 07:22:39 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:12:05.421 07:22:39 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:12:05.421 07:22:39 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:12:05.421 07:22:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:05.421 07:22:39 accel -- common/autotest_common.sh@10 -- # set +x 00:12:05.421 ************************************ 00:12:05.421 START TEST accel_crc32c_C2 00:12:05.421 ************************************ 00:12:05.421 07:22:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:12:05.421 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:12:05.421 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:12:05.421 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.421 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.421 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:12:05.421 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:12:05.421 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:12:05.421 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 
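The long runs of IFS=:, read -r var val, and case "$var" traced through the crc32c test above are accel.sh consuming the test settings as colon-separated var/val pairs; that is how accel_opc=crc32c and accel_module=software get set. A loose sketch of that reader (assumed shape, not the real accel/accel.sh loop):

    # Assumed shape of the settings reader: one "var:val" per line;
    # the bare "val=" records in the trace are empty fields.
    while IFS=: read -r var val; do
      case "$var" in
        opc)    accel_opc=$val ;;     # e.g. crc32c
        module) accel_module=$val ;;  # e.g. software
        *)      : ;;                  # queue depth, size, seconds, ...
      esac
    done <<< $'opc:crc32c\nmodule:software'
    echo "accel_opc=$accel_opc accel_module=$accel_module"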
00:12:05.421 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:05.421 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:05.421 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:05.421 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:05.421 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:12:05.421 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:12:05.421 [2024-07-12 07:22:39.117607] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:05.421 [2024-07-12 07:22:39.117970] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126489 ] 00:12:05.421 [2024-07-12 07:22:39.270652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.679 [2024-07-12 07:22:39.372571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 
00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:05.679 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:05.680 07:22:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:07.056 00:12:07.056 real 0m1.710s 00:12:07.056 user 0m1.401s 00:12:07.056 sys 0m0.238s 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:07.056 ************************************ 00:12:07.056 END TEST accel_crc32c_C2 00:12:07.056 ************************************ 00:12:07.056 07:22:40 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:12:07.056 07:22:40 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:12:07.056 07:22:40 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:12:07.056 07:22:40 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:07.056 07:22:40 accel -- common/autotest_common.sh@10 -- # set +x 00:12:07.056 ************************************ 00:12:07.056 START TEST accel_copy 00:12:07.056 ************************************ 00:12:07.056 07:22:40 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:12:07.056 07:22:40 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:12:07.056 07:22:40 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:12:07.056 07:22:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.056 07:22:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.056 07:22:40 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:12:07.056 07:22:40 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:12:07.056 07:22:40 accel.accel_copy -- accel/accel.sh@12 -- # 
build_accel_config 00:12:07.056 07:22:40 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:07.056 07:22:40 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:07.056 07:22:40 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:07.056 07:22:40 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:07.056 07:22:40 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:07.056 07:22:40 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:12:07.056 07:22:40 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:12:07.056 [2024-07-12 07:22:40.889653] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:07.056 [2024-07-12 07:22:40.889937] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126530 ] 00:12:07.340 [2024-07-12 07:22:41.038692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.340 [2024-07-12 07:22:41.139993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.599 07:22:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:07.599 07:22:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:07.599 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.599 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.599 07:22:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:07.599 07:22:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:07.600 07:22:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:08.994 07:22:42 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:08.994 07:22:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:12:08.995 07:22:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:08.995 00:12:08.995 real 0m1.712s 00:12:08.995 user 0m1.431s 00:12:08.995 sys 0m0.200s 00:12:08.995 07:22:42 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:08.995 07:22:42 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:12:08.995 ************************************ 00:12:08.995 END TEST accel_copy 00:12:08.995 ************************************ 00:12:08.995 07:22:42 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:08.995 07:22:42 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:12:08.995 07:22:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:08.995 07:22:42 accel -- common/autotest_common.sh@10 -- # set +x 00:12:08.995 ************************************ 00:12:08.995 START TEST accel_fill 00:12:08.995 ************************************ 00:12:08.995 07:22:42 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:08.995 07:22:42 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:12:08.995 07:22:42 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:12:08.995 07:22:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:08.995 07:22:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:08.995 07:22:42 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:08.995 07:22:42 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:08.995 07:22:42 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:12:08.995 07:22:42 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:08.995 07:22:42 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:08.995 07:22:42 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:08.995 07:22:42 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:08.995 07:22:42 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:08.995 07:22:42 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:12:08.995 07:22:42 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
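Each test in this section is launched through run_test, which prints the START/END TEST banners and the real/user/sys timings recorded above. A hedged sketch of what that wrapper appears to do; the real helper lives in common/autotest_common.sh and also toggles xtrace:

    run_test_sketch() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"            # produces the real/user/sys lines
      echo "END TEST $name"
    }
    run_test_sketch demo sleep 1   # stand-in command; the log runs accel_test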
00:12:08.995 [2024-07-12 07:22:42.675447] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:08.995 [2024-07-12 07:22:42.676456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126581 ] 00:12:08.995 [2024-07-12 07:22:42.834332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.254 [2024-07-12 07:22:42.930875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:12:09.254 07:22:43 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:09.254 07:22:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:12:10.629 07:22:44 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:10.629 00:12:10.629 real 0m1.712s 00:12:10.629 user 0m1.431s 00:12:10.629 sys 0m0.214s 00:12:10.629 07:22:44 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:10.629 ************************************ 00:12:10.629 END TEST accel_fill 00:12:10.629 07:22:44 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:12:10.629 ************************************ 00:12:10.629 07:22:44 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:12:10.629 07:22:44 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:12:10.629 07:22:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:10.629 07:22:44 accel -- common/autotest_common.sh@10 -- # set +x 00:12:10.629 ************************************ 00:12:10.629 START TEST accel_copy_crc32c 00:12:10.629 ************************************ 00:12:10.630 07:22:44 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:12:10.630 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:12:10.630 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:12:10.630 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:10.630 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:12:10.630 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:10.630 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:12:10.630 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:12:10.630 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:10.630 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:10.630 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:10.630 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:10.630 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:10.630 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:12:10.630 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:12:10.630 [2024-07-12 07:22:44.447253] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
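Every accel_perf invocation here receives its JSON config as -c /dev/fd/62: build_accel_config collects fragments in accel_json_cfg (empty in these runs, since no module or driver flags were given) and filters them through jq -r ., with the /dev/fd path presumably supplied via process substitution. A sketch of that plumbing with a placeholder config:

    ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    gen_cfg() { echo '{}' | jq -r .; }   # placeholder JSON; real fragments come from accel_json_cfg
    # Process substitution hands accel_perf a /dev/fd/NN path to read:
    "$ACCEL_PERF" -c <(gen_cfg) -t 1 -w fill -f 128 -q 64 -a 64 -y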
00:12:10.630 [2024-07-12 07:22:44.447538] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126632 ] 00:12:10.888 [2024-07-12 07:22:44.603225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.888 [2024-07-12 07:22:44.693575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:11.146 07:22:44 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:11.146 07:22:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:12.525 00:12:12.525 real 0m1.700s 00:12:12.525 user 0m1.425s 00:12:12.525 sys 0m0.217s 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:12.525 07:22:46 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:12:12.525 ************************************ 00:12:12.525 END TEST accel_copy_crc32c 00:12:12.525 ************************************ 00:12:12.525 07:22:46 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:12:12.525 07:22:46 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:12:12.525 07:22:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:12.525 07:22:46 accel -- common/autotest_common.sh@10 -- # set +x 00:12:12.525 ************************************ 00:12:12.525 START TEST accel_copy_crc32c_C2 00:12:12.525 ************************************ 00:12:12.525 07:22:46 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:12:12.525 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:12:12.525 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:12:12.525 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:12.525 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:12.525 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:12:12.525 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:12:12.525 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:12:12.525 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:12.525 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:12.525 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:12.525 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:12.525 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:12.525 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:12:12.525 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:12:12.525 [2024-07-12 07:22:46.212859] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:12.525 [2024-07-12 07:22:46.213929] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126671 ] 00:12:12.525 [2024-07-12 07:22:46.370736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.787 [2024-07-12 07:22:46.463270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:12.787 07:22:46 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
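The -C 2 flag on the accel_perf command above appears to request the chained two-vector variant of the copy_crc32c workload; the val= dump surrounding this point is just the harness echoing its parsed settings back (4096-byte and 8192-byte buffer sizes, the software module, a 1-second run). A minimal standalone sketch of the same step, assuming the SPDK build path shown in this log and that hugepages are already configured; the flag reading is inferred from the trace, not from documentation:

    # hypothetical manual re-run of the copy_crc32c -C 2 step (path as in this log)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2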
00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:12.787 07:22:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:14.192 00:12:14.192 real 0m1.708s 00:12:14.192 user 0m1.417s 00:12:14.192 sys 0m0.215s 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:14.192 07:22:47 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:12:14.192 ************************************ 00:12:14.192 END TEST accel_copy_crc32c_C2 00:12:14.192 ************************************ 00:12:14.192 07:22:47 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:12:14.192 07:22:47 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:12:14.192 07:22:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:14.192 07:22:47 accel -- common/autotest_common.sh@10 -- # set +x 00:12:14.192 ************************************ 00:12:14.192 START TEST accel_dualcast 00:12:14.192 ************************************ 00:12:14.192 07:22:47 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:12:14.192 07:22:47 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:12:14.192 07:22:47 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:12:14.192 07:22:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:14.192 07:22:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:14.192 07:22:47 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:12:14.192 07:22:47 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:12:14.192 07:22:47 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:12:14.192 07:22:47 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:14.192 07:22:47 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:14.192 07:22:47 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:14.192 07:22:47 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:14.192 07:22:47 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:14.192 07:22:47 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:12:14.192 07:22:47 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:12:14.192 [2024-07-12 07:22:47.977778] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
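The dualcast workload just launched copies one source buffer into two destination buffers, with -y asking accel_perf to verify the result; the -c /dev/fd/62 argument only feeds in the JSON accel config assembled by build_accel_config, which the trace shows is empty here. A minimal sketch of an equivalent direct invocation, assuming the same build tree and omitting the config fd for that reason:

    # hypothetical standalone dualcast run mirroring the trace
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y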
00:12:14.192 [2024-07-12 07:22:47.977960] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126722 ] 00:12:14.450 [2024-07-12 07:22:48.121748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.450 [2024-07-12 07:22:48.211711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:14.450 07:22:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:14.451 07:22:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:15.825 
07:22:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:12:15.825 07:22:49 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:15.825 00:12:15.825 real 0m1.684s 00:12:15.825 user 0m1.386s 00:12:15.825 sys 0m0.231s 00:12:15.825 07:22:49 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:15.825 07:22:49 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:12:15.825 ************************************ 00:12:15.825 END TEST accel_dualcast 00:12:15.825 ************************************ 00:12:15.825 07:22:49 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:12:15.825 07:22:49 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:12:15.825 07:22:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:15.825 07:22:49 accel -- common/autotest_common.sh@10 -- # set +x 00:12:15.825 ************************************ 00:12:15.825 START TEST accel_compare 00:12:15.825 ************************************ 00:12:15.825 07:22:49 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:12:15.825 07:22:49 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:12:15.825 07:22:49 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:12:15.825 07:22:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:15.825 07:22:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:15.825 07:22:49 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:12:15.825 07:22:49 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:12:15.825 07:22:49 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:12:15.825 07:22:49 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:15.825 07:22:49 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:15.825 07:22:49 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:15.825 07:22:49 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:15.825 07:22:49 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:15.825 07:22:49 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:12:15.825 07:22:49 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:12:16.084 [2024-07-12 07:22:49.733782] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
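compare is the memcmp-style workload: the accel layer compares two buffers, and -y has accel_perf check the outcome in software after each operation. Under the same assumptions as the earlier sketches (SPDK built at the path in this log, hugepages set up), a minimal direct form would be:

    # hypothetical standalone compare run
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y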
00:12:16.084 [2024-07-12 07:22:49.734046] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126768 ] 00:12:16.084 [2024-07-12 07:22:49.887033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.342 [2024-07-12 07:22:49.981042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:16.342 07:22:50 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:16.342 07:22:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:17.718 07:22:51 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:12:17.718 07:22:51 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:17.718 00:12:17.718 real 0m1.701s 00:12:17.718 user 0m1.410s 00:12:17.718 sys 0m0.235s 00:12:17.718 07:22:51 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:17.718 ************************************ 00:12:17.718 END TEST accel_compare 00:12:17.718 ************************************ 00:12:17.718 07:22:51 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:12:17.718 07:22:51 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:12:17.718 07:22:51 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:12:17.718 07:22:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:17.718 07:22:51 accel -- common/autotest_common.sh@10 -- # set +x 00:12:17.718 ************************************ 00:12:17.718 START TEST accel_xor 00:12:17.718 ************************************ 00:12:17.718 07:22:51 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:12:17.718 07:22:51 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:12:17.718 07:22:51 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:12:17.718 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.718 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.718 07:22:51 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:12:17.718 07:22:51 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:12:17.718 07:22:51 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:12:17.718 07:22:51 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:17.718 07:22:51 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:17.718 07:22:51 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:17.718 07:22:51 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:17.718 07:22:51 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:17.718 07:22:51 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:12:17.718 07:22:51 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:12:17.718 [2024-07-12 07:22:51.492357] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
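This first xor pass runs with the default source count; the val=2 in the parameter dump below confirms two source buffers are XORed into the destination. A minimal sketch under the same assumptions:

    # hypothetical standalone xor run with the default two sources
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y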
00:12:17.718 [2024-07-12 07:22:51.492642] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126821 ] 00:12:17.977 [2024-07-12 07:22:51.647240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.977 [2024-07-12 07:22:51.743828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:12:17.977 07:22:51 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:17.977 07:22:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:19.351 ************************************ 00:12:19.351 END TEST accel_xor 00:12:19.351 ************************************ 00:12:19.351 00:12:19.351 real 0m1.708s 00:12:19.351 user 0m1.430s 00:12:19.351 sys 0m0.214s 00:12:19.351 07:22:53 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:19.351 07:22:53 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:12:19.351 07:22:53 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:12:19.351 07:22:53 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:12:19.351 07:22:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:19.351 07:22:53 accel -- common/autotest_common.sh@10 -- # set +x 00:12:19.351 ************************************ 00:12:19.351 START TEST accel_xor 00:12:19.351 ************************************ 00:12:19.351 07:22:53 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:12:19.351 07:22:53 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:12:19.608 [2024-07-12 07:22:53.261097] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
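The second xor pass differs only in -x 3, which raises the source-buffer count to three (val=3 in the dump below); note the harness reuses the TEST name accel_xor for both passes. A sketch mirroring that invocation, same assumptions as above:

    # hypothetical three-source xor run, mirroring the -x 3 invocation in the trace
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3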
00:12:19.608 [2024-07-12 07:22:53.261392] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126866 ] 00:12:19.608 [2024-07-12 07:22:53.412453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.865 [2024-07-12 07:22:53.503804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:12:19.865 07:22:53 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:19.865 07:22:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:21.248 07:22:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:21.248 00:12:21.248 real 0m1.700s 00:12:21.248 user 0m1.390s 00:12:21.248 sys 0m0.225s 00:12:21.248 07:22:54 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:21.248 07:22:54 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:12:21.248 ************************************ 00:12:21.248 END TEST accel_xor 00:12:21.248 ************************************ 00:12:21.248 07:22:54 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:12:21.248 07:22:54 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:12:21.248 07:22:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:21.248 07:22:54 accel -- common/autotest_common.sh@10 -- # set +x 00:12:21.248 ************************************ 00:12:21.248 START TEST accel_dif_verify 00:12:21.248 ************************************ 00:12:21.248 07:22:54 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:12:21.248 07:22:54 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:12:21.248 07:22:54 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:12:21.248 07:22:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:21.248 07:22:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:21.248 07:22:54 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:12:21.248 07:22:54 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:12:21.248 07:22:54 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:12:21.248 07:22:54 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:21.248 07:22:54 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:21.248 07:22:55 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:21.248 07:22:55 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:21.248 07:22:55 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:21.248 07:22:55 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:12:21.248 07:22:55 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:12:21.248 [2024-07-12 07:22:55.033000] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
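dif_verify switches to the DIF (Data Integrity Field) path. The 4096-byte, 512-byte, and 8-byte values in the dump below look like the buffer/block, metadata, and DIF-tag sizes, though that reading is inferred from the trace rather than documented. This test also runs without -y and with verify set to No, presumably because verification is the operation itself. A minimal sketch:

    # hypothetical standalone dif_verify run (no -y, matching the trace)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify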
00:12:21.248 [2024-07-12 07:22:55.033296] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126909 ] 00:12:21.522 [2024-07-12 07:22:55.190556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.522 [2024-07-12 07:22:55.284730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.522 07:22:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:21.523 07:22:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:12:22.898 07:22:56 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:22.898 00:12:22.898 real 0m1.710s 00:12:22.898 user 0m1.415s 00:12:22.898 sys 0m0.231s 00:12:22.898 07:22:56 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:22.898 07:22:56 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:12:22.898 ************************************ 00:12:22.898 END TEST accel_dif_verify 00:12:22.898 ************************************ 00:12:22.898 07:22:56 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:12:22.898 07:22:56 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:12:22.898 07:22:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:22.898 07:22:56 accel -- common/autotest_common.sh@10 -- # set +x 00:12:22.898 ************************************ 00:12:22.898 START TEST accel_dif_generate 00:12:22.898 ************************************ 00:12:22.898 07:22:56 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:12:22.898 07:22:56 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:12:22.898 07:22:56 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:12:22.898 07:22:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:22.898 07:22:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:22.898 07:22:56 accel.accel_dif_generate -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w dif_generate 00:12:22.898 07:22:56 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:12:22.898 07:22:56 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:12:22.898 07:22:56 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:22.898 07:22:56 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:22.898 07:22:56 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:22.898 07:22:56 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:22.898 07:22:56 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:22.898 07:22:56 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:12:22.898 07:22:56 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:12:23.156 [2024-07-12 07:22:56.813206] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:23.156 [2024-07-12 07:22:56.813711] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126960 ] 00:12:23.156 [2024-07-12 07:22:56.969943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.415 [2024-07-12 07:22:57.066530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.415 
07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.415 07:22:57 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:23.415 07:22:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:12:24.790 07:22:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:24.790 00:12:24.790 real 0m1.710s 00:12:24.790 user 0m1.418s 00:12:24.790 sys 0m0.232s 00:12:24.790 07:22:58 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:24.790 
07:22:58 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:12:24.790 ************************************ 00:12:24.790 END TEST accel_dif_generate 00:12:24.790 ************************************ 00:12:24.790 07:22:58 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:12:24.790 07:22:58 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:12:24.790 07:22:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:24.790 07:22:58 accel -- common/autotest_common.sh@10 -- # set +x 00:12:24.790 ************************************ 00:12:24.790 START TEST accel_dif_generate_copy 00:12:24.790 ************************************ 00:12:24.790 07:22:58 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:12:24.790 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:12:24.790 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:12:24.790 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:24.790 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:12:24.790 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:24.790 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:12:24.790 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:12:24.790 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:24.790 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:24.790 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:24.790 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:24.790 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:24.790 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:12:24.790 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:12:24.790 [2024-07-12 07:22:58.592184] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
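dif_generate_copy drives the same accel_perf binary as the preceding tests with only the -w workload changed, combining DIF generation with a buffer copy in a single operation. A minimal standalone reproduction, restricted to the flags echoed in this log and assuming the optional -c /dev/fd/62 JSON config wired up by build_accel_config can be dropped in favor of defaults:

  # 1-second dif_generate_copy run on the software accel module
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy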
00:12:24.790 [2024-07-12 07:22:58.592486] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126999 ] 00:12:25.049 [2024-07-12 07:22:58.746943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.049 [2024-07-12 07:22:58.845663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val= 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:25.308 07:22:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:26.685 07:23:00 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:26.685 00:12:26.685 real 0m1.714s 00:12:26.685 user 0m1.418s 00:12:26.685 sys 0m0.235s 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:26.685 ************************************ 00:12:26.685 END TEST accel_dif_generate_copy 00:12:26.685 07:23:00 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:12:26.685 ************************************ 00:12:26.685 07:23:00 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:12:26.685 07:23:00 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:26.685 07:23:00 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:12:26.685 07:23:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:26.685 07:23:00 accel -- common/autotest_common.sh@10 -- # set +x 00:12:26.685 ************************************ 00:12:26.685 START TEST accel_comp 00:12:26.685 ************************************ 00:12:26.685 07:23:00 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:26.685 07:23:00 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:12:26.685 07:23:00 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:12:26.686 07:23:00 accel.accel_comp -- 
accel/accel.sh@19 -- # IFS=: 00:12:26.686 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.686 07:23:00 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:26.686 07:23:00 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:26.686 07:23:00 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:12:26.686 07:23:00 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:26.686 07:23:00 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:26.686 07:23:00 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:26.686 07:23:00 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:26.686 07:23:00 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:26.686 07:23:00 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:12:26.686 07:23:00 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:12:26.686 [2024-07-12 07:23:00.382499] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:26.686 [2024-07-12 07:23:00.382796] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127052 ] 00:12:26.686 [2024-07-12 07:23:00.538558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.944 [2024-07-12 07:23:00.637651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.944 07:23:00 accel.accel_comp 
-- accel/accel.sh@20 -- # val=compress 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 
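Unlike the DIF workloads, compress consumes real input data, which is why the config dump above carries -l pointing at the repo's bundled bib corpus. A comparable standalone sketch, under the same assumption that the -c config can be omitted:

  # 1-second software compress run over the repo's bib test file
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib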
00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:26.944 07:23:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:12:28.370 07:23:02 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:28.370 00:12:28.370 real 0m1.727s 00:12:28.370 user 0m1.431s 00:12:28.370 sys 0m0.215s 00:12:28.370 ************************************ 00:12:28.370 END TEST accel_comp 00:12:28.370 ************************************ 00:12:28.370 07:23:02 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:28.370 07:23:02 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:12:28.370 07:23:02 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:28.370 07:23:02 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:12:28.370 07:23:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:28.370 07:23:02 accel -- common/autotest_common.sh@10 -- # set +x 00:12:28.370 ************************************ 00:12:28.370 START TEST accel_decomp 00:12:28.370 ************************************ 00:12:28.370 07:23:02 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:28.370 07:23:02 accel.accel_decomp -- accel/accel.sh@16 -- # local 
accel_opc 00:12:28.370 07:23:02 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:12:28.370 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.370 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.370 07:23:02 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:28.370 07:23:02 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:28.370 07:23:02 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:12:28.370 07:23:02 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:28.370 07:23:02 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:28.370 07:23:02 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:28.370 07:23:02 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:28.370 07:23:02 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:28.370 07:23:02 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:12:28.370 07:23:02 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:12:28.370 [2024-07-12 07:23:02.170523] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:28.370 [2024-07-12 07:23:02.170822] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127104 ] 00:12:28.629 [2024-07-12 07:23:02.326866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.629 [2024-07-12 07:23:02.417162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:28.887 07:23:02 accel.accel_decomp -- 
accel/accel.sh@21 -- # case "$var" in 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.887 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r 
var val 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:28.888 07:23:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:30.262 07:23:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:30.262 07:23:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:30.262 07:23:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:30.262 07:23:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:30.262 07:23:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:30.262 07:23:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:30.262 07:23:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:30.262 07:23:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:30.262 07:23:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:30.262 07:23:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:30.262 07:23:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:30.262 07:23:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:30.262 07:23:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:30.262 07:23:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:30.262 07:23:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:30.262 07:23:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:30.262 07:23:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:30.263 07:23:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:30.263 07:23:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:30.263 07:23:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:30.263 07:23:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:30.263 07:23:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:30.263 07:23:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:30.263 07:23:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:30.263 ************************************ 00:12:30.263 END TEST accel_decomp 00:12:30.263 ************************************ 00:12:30.263 07:23:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:30.263 07:23:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:30.263 07:23:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:30.263 00:12:30.263 real 0m1.716s 00:12:30.263 user 0m1.433s 00:12:30.263 sys 0m0.203s 00:12:30.263 07:23:03 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:30.263 07:23:03 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:12:30.263 07:23:03 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:30.263 07:23:03 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:12:30.263 07:23:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:30.263 07:23:03 accel -- 
common/autotest_common.sh@10 -- # set +x 00:12:30.263 ************************************ 00:12:30.263 START TEST accel_decmop_full 00:12:30.263 ************************************ 00:12:30.263 07:23:03 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:30.263 07:23:03 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:12:30.263 07:23:03 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:12:30.263 07:23:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.263 07:23:03 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:30.263 07:23:03 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.263 07:23:03 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:30.263 07:23:03 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:12:30.263 07:23:03 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:30.263 07:23:03 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:30.263 07:23:03 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:30.263 07:23:03 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:30.263 07:23:03 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:30.263 07:23:03 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:12:30.263 07:23:03 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:12:30.263 [2024-07-12 07:23:03.961516] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
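accel_decmop_full repeats the decompress workload with -o 0 added; judging from the '111250 bytes' value echoed below, -o 0 appears to make accel_perf operate on the full input size instead of a fixed block size, while -y keeps output verification on. A hedged standalone sketch:

  # whole-file decompress with output verification
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0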
00:12:30.263 [2024-07-12 07:23:03.962037] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127149 ] 00:12:30.263 [2024-07-12 07:23:04.117301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.522 [2024-07-12 07:23:04.217597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.522 07:23:04 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.522 07:23:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:12:30.523 07:23:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.523 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.523 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.523 07:23:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:30.523 07:23:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.523 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.523 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:30.523 07:23:04 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:30.523 07:23:04 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:30.523 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:30.523 07:23:04 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:31.902 07:23:05 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:31.902 ************************************ 00:12:31.902 END TEST accel_decmop_full 00:12:31.902 ************************************ 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:31.902 07:23:05 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:31.902 00:12:31.902 real 0m1.753s 00:12:31.902 user 0m1.429s 00:12:31.902 sys 0m0.229s 00:12:31.902 07:23:05 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:31.902 07:23:05 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:12:31.902 07:23:05 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:31.902 07:23:05 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:12:31.902 07:23:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:31.902 07:23:05 accel -- common/autotest_common.sh@10 -- # set +x 00:12:31.902 ************************************ 00:12:31.902 START TEST accel_decomp_mcore 00:12:31.902 ************************************ 00:12:31.902 07:23:05 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:31.902 07:23:05 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:12:31.902 07:23:05 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:12:31.902 07:23:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:31.902 07:23:05 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:31.902 07:23:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
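00:00:00.000 [Editor's note] The wall of case "$var" / IFS=: / read -r var val lines above is accel.sh's option parser running under xtrace, not a stall: run_test replays each accel_test flag through that loop, so every flag produces three traced lines. The test that just ended, accel_decmop_full (the transposed 'decmop' spelling is the harness's own run_test label), is the single-core decompress case with whole-file operations: the traced val='111250 bytes' is the size of the bib input submitted per operation, and the epilogue's [[ -n software ]] / [[ -n decompress ]] checks confirm the run fell through to the software accel module. A minimal sketch of rerunning it by hand, assuming the checkout at /home/vagrant/spdk_repo/spdk shown in the trace and using only flags that appear in this log:

    # Sketch: single-core, whole-file decompress run (flags as traced above;
    # -o 0 is inferred from the accel_decomp_full_* run_test lines later in this log).
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0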
00:12:31.902 07:23:05 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:31.902 07:23:05 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:12:31.902 07:23:05 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:31.902 07:23:05 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:31.902 07:23:05 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:31.902 07:23:05 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:31.902 07:23:05 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:31.902 07:23:05 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:12:31.902 07:23:05 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:12:31.902 [2024-07-12 07:23:05.781109] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:31.902 [2024-07-12 07:23:05.782415] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127193 ] 00:12:32.162 [2024-07-12 07:23:05.956573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.425 [2024-07-12 07:23:06.060992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.425 [2024-07-12 07:23:06.061177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.425 [2024-07-12 07:23:06.061370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.426 [2024-07-12 07:23:06.061436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:32.426 07:23:06 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:32.426 07:23:06 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:32.426 07:23:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:33.806 00:12:33.806 real 0m1.760s 00:12:33.806 user 0m5.177s 00:12:33.806 sys 0m0.247s 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:33.806 07:23:07 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:12:33.806 ************************************ 00:12:33.806 END TEST accel_decomp_mcore 00:12:33.806 ************************************ 00:12:33.806 07:23:07 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:33.806 07:23:07 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:12:33.806 07:23:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:33.806 07:23:07 accel -- common/autotest_common.sh@10 -- # set +x 00:12:33.806 ************************************ 00:12:33.806 START TEST accel_decomp_full_mcore 00:12:33.806 ************************************ 00:12:33.806 07:23:07 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:33.806 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:12:33.806 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:12:33.806 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:33.806 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:33.806 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:33.806 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:33.806 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:12:33.806 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:33.806 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:33.806 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:33.806 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:33.806 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:33.806 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:12:33.806 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
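00:00:00.000 [Editor's note] In the accel_decomp_mcore summary above, user CPU time (5.177s) is roughly four times wall time (1.760s), which is exactly what -m 0xf should produce: four reactors decompressing in parallel on cores 0-3 for the one-second run. The accel_decomp_full_mcore test starting here is the same invocation with -o 0 added; comparing the traced config values, the plain run works on '4096 bytes' per operation while the -o 0 variants trace '111250 bytes', i.e. one whole-file operation per submission. Side by side, with flags exactly as in the two run_test lines (accel_test is the harness helper those lines invoke; a standalone run would call build/examples/accel_perf with the same flags):

    # Same decompress workload on four cores; only the I/O size differs.
    accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf        # 4096-byte ops
    accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf   # one 111250-byte op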
00:12:33.806 [2024-07-12 07:23:07.595266] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:33.806 [2024-07-12 07:23:07.595476] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127242 ] 00:12:34.066 [2024-07-12 07:23:07.756688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:34.066 [2024-07-12 07:23:07.863372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.066 [2024-07-12 07:23:07.863547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.066 [2024-07-12 07:23:07.863763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.066 [2024-07-12 07:23:07.863768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.326 07:23:07 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.326 07:23:07 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:34.326 07:23:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:35.705 07:23:09 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:35.705 00:12:35.705 real 0m1.752s 00:12:35.705 user 0m5.187s 00:12:35.705 sys 0m0.228s 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:35.705 ************************************ 00:12:35.705 07:23:09 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:12:35.705 END TEST accel_decomp_full_mcore 00:12:35.705 ************************************ 00:12:35.705 07:23:09 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:35.705 07:23:09 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:12:35.705 07:23:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:35.705 07:23:09 accel -- common/autotest_common.sh@10 -- # set +x 00:12:35.705 ************************************ 00:12:35.705 START TEST accel_decomp_mthread 00:12:35.705 ************************************ 00:12:35.705 07:23:09 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:35.705 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:12:35.705 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:12:35.705 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.705 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.705 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:35.705 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:35.705 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:12:35.705 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:35.705 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:35.705 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:35.705 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:35.705 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:35.705 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:12:35.705 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:12:35.705 [2024-07-12 07:23:09.413858] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
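00:00:00.000 [Editor's note] accel_decomp_mthread trades cores for threads: its run_test line above passes -T 2 instead of -m 0xf, and the EAL parameters below show the app back on a single core (-c 0x1). The traced val=2 in the parser loop is that -T value, while the decomp_full_mcore summary just above (user 0m5.187s against real 0m1.752s) closes out the four-core runs. Reading -T as a worker-thread count per core is an inference from these traces, not something the log states, so the manual rerun below is sketched with that caveat:

    # Sketch: one core, two worker threads, 4 KiB decompress ops
    # (-T interpreted per the traced 'val=2'; other flags from the run_test line).
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2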
00:12:35.705 [2024-07-12 07:23:09.414963] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127296 ] 00:12:35.705 [2024-07-12 07:23:09.570774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.964 [2024-07-12 07:23:09.672426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.964 07:23:09 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:35.964 07:23:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:37.342 00:12:37.342 real 0m1.738s 00:12:37.342 user 0m1.414s 00:12:37.342 sys 0m0.257s 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:37.342 07:23:11 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:12:37.342 ************************************ 00:12:37.342 END TEST accel_decomp_mthread 00:12:37.342 ************************************ 00:12:37.342 07:23:11 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:37.342 07:23:11 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:12:37.342 07:23:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:37.342 07:23:11 accel -- common/autotest_common.sh@10 -- # set +x 00:12:37.342 ************************************ 00:12:37.342 START TEST accel_decomp_full_mthread 00:12:37.342 ************************************ 00:12:37.342 07:23:11 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:37.342 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:12:37.342 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:12:37.342 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.342 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.342 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:37.342 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:37.342 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:12:37.342 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:37.342 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:37.342 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:37.342 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:37.342 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:37.342 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:12:37.342 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:12:37.342 [2024-07-12 07:23:11.212383] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
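00:00:00.000 [Editor's note] Every accel_perf run in this log is handed -c /dev/fd/62, and each build_accel_config trace ends in jq -r . over an accel_json_cfg array that stays empty here (all the [[ 0 -gt 0 ]] and [[ -n '' ]] guards fail). In other words, the harness streams a JSON accel configuration to the app over file descriptor 62; with nothing configured, every suite uses the software module, which is what the [[ -n software ]] checks in each epilogue assert. The fd plumbing can be reproduced as below; the '{}' payload is a placeholder for illustration only, since the schema of a real, non-empty config never appears in this trace:

    # Sketch: feed a JSON config to accel_perf on fd 62, the way accel.sh does.
    # '{}' stands in for whatever build_accel_config would emit through 'jq -r .'.
    exec 62< <(printf '%s' '{}' | jq -r .)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2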
00:12:37.342 [2024-07-12 07:23:11.212692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127349 ] 00:12:37.601 [2024-07-12 07:23:11.370240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.601 [2024-07-12 07:23:11.470520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.860 07:23:11 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:37.860 07:23:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:39.238 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:39.238 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:39.238 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:39.238 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:39.238 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:39.239 00:12:39.239 real 0m1.764s 00:12:39.239 user 0m1.489s 00:12:39.239 sys 0m0.210s 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:39.239 07:23:12 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:12:39.239 ************************************ 00:12:39.239 END TEST accel_decomp_full_mthread 00:12:39.239 ************************************ 00:12:39.239 07:23:12 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:12:39.239 07:23:12 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:39.239 07:23:12 accel -- accel/accel.sh@137 -- # build_accel_config 00:12:39.239 07:23:12 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:39.239 07:23:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:39.239 07:23:12 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:39.239 07:23:12 accel -- common/autotest_common.sh@10 -- # set +x 00:12:39.239 07:23:12 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:39.239 07:23:12 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:39.239 07:23:12 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:39.239 07:23:12 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:39.239 07:23:12 accel -- accel/accel.sh@40 -- # local IFS=, 00:12:39.239 07:23:12 accel -- accel/accel.sh@41 -- # jq -r . 00:12:39.239 ************************************ 00:12:39.239 START TEST accel_dif_functional_tests 00:12:39.239 ************************************ 00:12:39.239 07:23:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:39.239 [2024-07-12 07:23:13.099189] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:39.239 [2024-07-12 07:23:13.099483] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127388 ] 00:12:39.513 [2024-07-12 07:23:13.270340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:39.513 [2024-07-12 07:23:13.363330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.513 [2024-07-12 07:23:13.363220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.513 [2024-07-12 07:23:13.363344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.789 00:12:39.789 00:12:39.789 CUnit - A unit testing framework for C - Version 2.1-3 00:12:39.789 http://cunit.sourceforge.net/ 00:12:39.789 00:12:39.789 00:12:39.789 Suite: accel_dif 00:12:39.789 Test: verify: DIF generated, GUARD check ...passed 00:12:39.789 Test: verify: DIF generated, APPTAG check ...passed 00:12:39.789 Test: verify: DIF generated, REFTAG check ...passed 00:12:39.789 Test: verify: DIF not generated, GUARD check ...[2024-07-12 07:23:13.494473] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:39.789 passed 00:12:39.789 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 07:23:13.494723] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:39.789 passed 00:12:39.789 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 07:23:13.494975] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:39.789 passed 00:12:39.789 Test: verify: APPTAG correct, APPTAG check ...passed 00:12:39.789 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 07:23:13.495624] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:12:39.789 passed 00:12:39.789 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:12:39.789 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:12:39.789 Test: verify: 
REFTAG_INIT correct, REFTAG check ...passed 00:12:39.789 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 07:23:13.496579] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:12:39.789 passed 00:12:39.789 Test: verify copy: DIF generated, GUARD check ...passed 00:12:39.789 Test: verify copy: DIF generated, APPTAG check ...passed 00:12:39.789 Test: verify copy: DIF generated, REFTAG check ...passed 00:12:39.789 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 07:23:13.497545] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:39.789 passed 00:12:39.789 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 07:23:13.497738] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:39.789 passed 00:12:39.789 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 07:23:13.497989] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:39.789 passed 00:12:39.789 Test: generate copy: DIF generated, GUARD check ...passed 00:12:39.790 Test: generate copy: DIF generated, APPTAG check ...passed 00:12:39.790 Test: generate copy: DIF generated, REFTAG check ...passed 00:12:39.790 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:12:39.790 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:12:39.790 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:12:39.790 Test: generate copy: iovecs-len validate ...[2024-07-12 07:23:13.499550] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:12:39.790 passed 00:12:39.790 Test: generate copy: buffer alignment validate ...passed 00:12:39.790 00:12:39.790 Run Summary: Type Total Ran Passed Failed Inactive 00:12:39.790 suites 1 1 n/a 0 0 00:12:39.790 tests 26 26 26 0 0 00:12:39.790 asserts 115 115 115 0 n/a 00:12:39.790 00:12:39.790 Elapsed time = 0.011 seconds 00:12:40.048 00:12:40.048 real 0m0.907s 00:12:40.048 user 0m1.200s 00:12:40.048 sys 0m0.299s 00:12:40.048 07:23:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:40.048 ************************************ 00:12:40.048 END TEST accel_dif_functional_tests 00:12:40.048 ************************************ 00:12:40.048 07:23:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:12:40.307 00:12:40.307 real 0m40.450s 00:12:40.307 user 0m40.735s 00:12:40.307 sys 0m6.921s 00:12:40.307 07:23:13 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:40.307 07:23:13 accel -- common/autotest_common.sh@10 -- # set +x 00:12:40.307 ************************************ 00:12:40.307 END TEST accel 00:12:40.307 ************************************ 00:12:40.307 07:23:14 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:40.307 07:23:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:40.307 07:23:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:40.307 07:23:14 -- common/autotest_common.sh@10 -- # set +x 00:12:40.307 ************************************ 00:12:40.307 START TEST accel_rpc 00:12:40.307 ************************************ 00:12:40.307 07:23:14 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:40.307 * Looking for test
storage... 00:12:40.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:40.307 07:23:14 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:40.307 07:23:14 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=127477 00:12:40.307 07:23:14 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 127477 00:12:40.307 07:23:14 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 127477 ']' 00:12:40.307 07:23:14 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:12:40.307 07:23:14 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.307 07:23:14 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:40.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.307 07:23:14 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.307 07:23:14 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:40.307 07:23:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.566 [2024-07-12 07:23:14.201603] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:40.567 [2024-07-12 07:23:14.201810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127477 ] 00:12:40.567 [2024-07-12 07:23:14.339440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.567 [2024-07-12 07:23:14.432686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.505 07:23:15 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:41.505 07:23:15 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:41.505 07:23:15 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:12:41.505 07:23:15 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:12:41.505 07:23:15 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:12:41.505 07:23:15 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:12:41.505 07:23:15 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:12:41.505 07:23:15 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:41.505 07:23:15 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:41.505 07:23:15 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.505 ************************************ 00:12:41.505 START TEST accel_assign_opcode 00:12:41.505 ************************************ 00:12:41.505 07:23:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:12:41.505 07:23:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:12:41.505 07:23:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.505 07:23:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:41.505 [2024-07-12 07:23:15.129951] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:12:41.505 07:23:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.505 07:23:15 accel_rpc.accel_assign_opcode -- 
accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:12:41.505 07:23:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.505 07:23:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:41.505 [2024-07-12 07:23:15.141885] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:12:41.505 07:23:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.505 07:23:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:12:41.505 07:23:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.505 07:23:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:41.763 07:23:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.763 07:23:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:12:41.763 07:23:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.763 07:23:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:12:41.764 07:23:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:41.764 07:23:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:12:41.764 07:23:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.764 software 00:12:41.764 00:12:41.764 real 0m0.414s 00:12:41.764 user 0m0.068s 00:12:41.764 sys 0m0.016s 00:12:41.764 07:23:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:41.764 07:23:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:41.764 ************************************ 00:12:41.764 END TEST accel_assign_opcode 00:12:41.764 ************************************ 00:12:41.764 07:23:15 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 127477 00:12:41.764 07:23:15 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 127477 ']' 00:12:41.764 07:23:15 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 127477 00:12:41.764 07:23:15 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:12:41.764 07:23:15 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:41.764 07:23:15 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 127477 00:12:41.764 07:23:15 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:41.764 07:23:15 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:41.764 07:23:15 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 127477' 00:12:41.764 killing process with pid 127477 00:12:41.764 07:23:15 accel_rpc -- common/autotest_common.sh@965 -- # kill 127477 00:12:41.764 07:23:15 accel_rpc -- common/autotest_common.sh@970 -- # wait 127477 00:12:42.701 00:12:42.701 real 0m2.288s 00:12:42.701 user 0m2.212s 00:12:42.701 sys 0m0.620s 00:12:42.701 07:23:16 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:42.701 07:23:16 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.701 ************************************ 00:12:42.701 END TEST accel_rpc 00:12:42.701 ************************************ 00:12:42.701 07:23:16 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:42.701 
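[Editor's note] The accel_assign_opcode trace above boils down to four RPCs issued before and after subsystem init. A minimal sketch of the same flow, assuming a spdk_tgt already running with --wait-for-rpc on the default socket as in this run (every call below appears verbatim in the trace via rpc_cmd):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC accel_assign_opc -o copy -m incorrect   # pre-init, accepted even though no such module exists (see the NOTICE above)
    $RPC accel_assign_opc -o copy -m software    # a later assignment overrides the earlier one
    $RPC framework_start_init                    # opcode-to-module resolution happens during init
    $RPC accel_get_opc_assignments | jq -r .copy # the suite greps this output for "software"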
07:23:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:42.701 07:23:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:42.701 07:23:16 -- common/autotest_common.sh@10 -- # set +x 00:12:42.701 ************************************ 00:12:42.701 START TEST app_cmdline 00:12:42.701 ************************************ 00:12:42.701 07:23:16 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:42.701 * Looking for test storage... 00:12:42.701 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:42.701 07:23:16 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:42.701 07:23:16 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=127585 00:12:42.701 07:23:16 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:42.701 07:23:16 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 127585 00:12:42.701 07:23:16 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 127585 ']' 00:12:42.701 07:23:16 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.701 07:23:16 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:42.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.701 07:23:16 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.701 07:23:16 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:42.701 07:23:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:42.960 [2024-07-12 07:23:16.599011] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
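[Editor's note] cmdline.sh boots this spdk_tgt with an RPC allow-list (--rpcs-allowed spdk_get_version,rpc_get_methods), and everything the suite checks below follows from that flag. A sketch of the behavior against the target just launched, using only calls that appear later in the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC spdk_get_version          # allowed: returns the version object shown below
    $RPC rpc_get_methods           # allowed: lists exactly the two whitelisted methods
    $RPC env_dpdk_get_mem_stats    # any other method is rejected with JSON-RPC error -32601 "Method not found"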
00:12:42.960 [2024-07-12 07:23:16.599382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127585 ] 00:12:42.960 [2024-07-12 07:23:16.762943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.219 [2024-07-12 07:23:16.852772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.787 07:23:17 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:43.787 07:23:17 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:12:43.787 07:23:17 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:44.045 { 00:12:44.045 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:12:44.045 "fields": { 00:12:44.045 "major": 24, 00:12:44.045 "minor": 5, 00:12:44.045 "patch": 1, 00:12:44.045 "suffix": "-pre", 00:12:44.045 "commit": "5fa2f5086" 00:12:44.045 } 00:12:44.045 } 00:12:44.045 07:23:17 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:12:44.045 07:23:17 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:44.046 07:23:17 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:44.046 07:23:17 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:44.046 07:23:17 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:44.046 07:23:17 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.046 07:23:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:44.046 07:23:17 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:44.046 07:23:17 app_cmdline -- app/cmdline.sh@26 -- # sort 00:12:44.046 07:23:17 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.304 07:23:17 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:44.304 07:23:17 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:44.304 07:23:17 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:44.304 07:23:17 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:12:44.304 07:23:17 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:44.304 07:23:17 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:44.304 07:23:17 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:44.304 07:23:17 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:44.304 07:23:17 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:44.304 07:23:17 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:44.304 07:23:17 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:44.304 07:23:17 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:44.304 07:23:17 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:44.304 07:23:17 app_cmdline -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:44.563 request: 00:12:44.563 { 00:12:44.563 "method": "env_dpdk_get_mem_stats", 00:12:44.563 "req_id": 1 00:12:44.563 } 00:12:44.563 Got JSON-RPC error response 00:12:44.563 response: 00:12:44.563 { 00:12:44.563 "code": -32601, 00:12:44.563 "message": "Method not found" 00:12:44.563 } 00:12:44.563 07:23:18 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:12:44.563 07:23:18 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:44.563 07:23:18 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:44.563 07:23:18 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:44.563 07:23:18 app_cmdline -- app/cmdline.sh@1 -- # killprocess 127585 00:12:44.563 07:23:18 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 127585 ']' 00:12:44.563 07:23:18 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 127585 00:12:44.563 07:23:18 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:12:44.563 07:23:18 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:44.563 07:23:18 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 127585 00:12:44.563 07:23:18 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:44.563 07:23:18 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:44.563 07:23:18 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 127585' 00:12:44.564 killing process with pid 127585 00:12:44.564 07:23:18 app_cmdline -- common/autotest_common.sh@965 -- # kill 127585 00:12:44.564 07:23:18 app_cmdline -- common/autotest_common.sh@970 -- # wait 127585 00:12:45.132 00:12:45.132 real 0m2.557s 00:12:45.132 user 0m2.967s 00:12:45.132 sys 0m0.724s 00:12:45.132 07:23:18 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:45.132 07:23:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:45.132 ************************************ 00:12:45.132 END TEST app_cmdline 00:12:45.132 ************************************ 00:12:45.132 07:23:18 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:45.132 07:23:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:45.132 07:23:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:45.132 07:23:18 -- common/autotest_common.sh@10 -- # set +x 00:12:45.132 ************************************ 00:12:45.132 START TEST version 00:12:45.132 ************************************ 00:12:45.132 07:23:19 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:45.391 * Looking for test storage... 
00:12:45.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:45.391 07:23:19 version -- app/version.sh@17 -- # get_header_version major 00:12:45.391 07:23:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:45.391 07:23:19 version -- app/version.sh@14 -- # cut -f2 00:12:45.391 07:23:19 version -- app/version.sh@14 -- # tr -d '"' 00:12:45.391 07:23:19 version -- app/version.sh@17 -- # major=24 00:12:45.391 07:23:19 version -- app/version.sh@18 -- # get_header_version minor 00:12:45.391 07:23:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:45.391 07:23:19 version -- app/version.sh@14 -- # tr -d '"' 00:12:45.391 07:23:19 version -- app/version.sh@14 -- # cut -f2 00:12:45.391 07:23:19 version -- app/version.sh@18 -- # minor=5 00:12:45.391 07:23:19 version -- app/version.sh@19 -- # get_header_version patch 00:12:45.391 07:23:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:45.391 07:23:19 version -- app/version.sh@14 -- # tr -d '"' 00:12:45.391 07:23:19 version -- app/version.sh@14 -- # cut -f2 00:12:45.391 07:23:19 version -- app/version.sh@19 -- # patch=1 00:12:45.391 07:23:19 version -- app/version.sh@20 -- # get_header_version suffix 00:12:45.391 07:23:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:45.391 07:23:19 version -- app/version.sh@14 -- # cut -f2 00:12:45.391 07:23:19 version -- app/version.sh@14 -- # tr -d '"' 00:12:45.391 07:23:19 version -- app/version.sh@20 -- # suffix=-pre 00:12:45.391 07:23:19 version -- app/version.sh@22 -- # version=24.5 00:12:45.391 07:23:19 version -- app/version.sh@25 -- # (( patch != 0 )) 00:12:45.391 07:23:19 version -- app/version.sh@25 -- # version=24.5.1 00:12:45.391 07:23:19 version -- app/version.sh@28 -- # version=24.5.1rc0 00:12:45.391 07:23:19 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:45.391 07:23:19 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:45.391 07:23:19 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:12:45.391 07:23:19 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:12:45.391 00:12:45.391 real 0m0.176s 00:12:45.391 user 0m0.117s 00:12:45.391 sys 0m0.108s 00:12:45.391 07:23:19 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:45.391 07:23:19 version -- common/autotest_common.sh@10 -- # set +x 00:12:45.391 ************************************ 00:12:45.391 END TEST version 00:12:45.391 ************************************ 00:12:45.391 07:23:19 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:12:45.391 07:23:19 -- spdk/autotest.sh@189 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:45.391 07:23:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:45.391 07:23:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:45.391 07:23:19 -- common/autotest_common.sh@10 -- # set +x 00:12:45.391 ************************************ 00:12:45.391 START TEST blockdev_general 00:12:45.391 
************************************ 00:12:45.391 07:23:19 blockdev_general -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:45.650 * Looking for test storage... 00:12:45.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:45.650 07:23:19 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=127756 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 127756 00:12:45.650 07:23:19 blockdev_general -- common/autotest_common.sh@827 -- # '[' -z 127756 ']' 00:12:45.650 07:23:19 blockdev_general -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.650 07:23:19 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:12:45.650 07:23:19 blockdev_general -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:45.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.650 07:23:19 blockdev_general -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
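[Editor's note] Before the blockdev run gets going, a note on the version.sh pipeline that just finished above: each component is scraped out of include/spdk/version.h with the grep/cut/tr chain shown in the trace. A rough standalone equivalent, assuming the stock tab-separated #define layout of that header:

    get_header_version() {  # e.g. MAJOR -> 24, SUFFIX -> -pre
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
            /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
    }
    version="$(get_header_version MAJOR).$(get_header_version MINOR)"  # 24.5
    patch="$(get_header_version PATCH)"
    (( patch != 0 )) && version="$version.$patch"                      # 24.5.1
    # with the rc0 suffix substituted for "-pre" this becomes 24.5.1rc0, which the
    # test compares against: python3 -c 'import spdk; print(spdk.__version__)'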
00:12:45.650 07:23:19 blockdev_general -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:45.650 07:23:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:45.650 [2024-07-12 07:23:19.443844] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:45.650 [2024-07-12 07:23:19.444123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127756 ] 00:12:45.908 [2024-07-12 07:23:19.599297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.908 [2024-07-12 07:23:19.679114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.474 07:23:20 blockdev_general -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:46.474 07:23:20 blockdev_general -- common/autotest_common.sh@860 -- # return 0 00:12:46.474 07:23:20 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:12:46.474 07:23:20 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:12:46.474 07:23:20 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:12:46.474 07:23:20 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.475 07:23:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:47.041 [2024-07-12 07:23:20.673284] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:47.041 [2024-07-12 07:23:20.673628] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:47.041 00:12:47.041 [2024-07-12 07:23:20.681200] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:47.041 [2024-07-12 07:23:20.681390] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:47.041 00:12:47.041 Malloc0 00:12:47.041 Malloc1 00:12:47.041 Malloc2 00:12:47.041 Malloc3 00:12:47.041 Malloc4 00:12:47.041 Malloc5 00:12:47.041 Malloc6 00:12:47.041 Malloc7 00:12:47.041 Malloc8 00:12:47.041 Malloc9 00:12:47.041 [2024-07-12 07:23:20.910475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:47.042 [2024-07-12 07:23:20.910706] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.042 [2024-07-12 07:23:20.910800] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:47.042 [2024-07-12 07:23:20.911075] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.042 [2024-07-12 07:23:20.914035] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.042 [2024-07-12 07:23:20.914223] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:47.042 TestPT 00:12:47.360 07:23:20 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.360 07:23:20 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:12:47.360 5000+0 records in 00:12:47.360 5000+0 records out 00:12:47.360 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0284361 s, 360 MB/s 00:12:47.360 07:23:20 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:12:47.360 07:23:20 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.360 07:23:20 blockdev_general -- 
common/autotest_common.sh@10 -- # set +x 00:12:47.360 AIO0 00:12:47.360 07:23:21 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.360 07:23:21 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:12:47.360 07:23:21 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.360 07:23:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:47.360 07:23:21 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.360 07:23:21 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:12:47.360 07:23:21 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:12:47.360 07:23:21 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.360 07:23:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:47.360 07:23:21 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.360 07:23:21 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:12:47.360 07:23:21 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.360 07:23:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:47.360 07:23:21 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.360 07:23:21 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:47.360 07:23:21 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.360 07:23:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:47.360 07:23:21 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.360 07:23:21 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:12:47.360 07:23:21 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:12:47.360 07:23:21 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:12:47.360 07:23:21 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.360 07:23:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:47.360 07:23:21 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.360 07:23:21 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:12:47.360 07:23:21 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:12:47.361 07:23:21 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "ae26b3ac-3126-4c58-9364-da0f99b507e7"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ae26b3ac-3126-4c58-9364-da0f99b507e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "fff08bb1-d87e-5755-bdf9-8fe63a4cb90b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": 
"fff08bb1-d87e-5755-bdf9-8fe63a4cb90b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "c372025b-7de5-5218-9dac-f7a6e7e3a780"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "c372025b-7de5-5218-9dac-f7a6e7e3a780",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b266a00a-063f-5e9d-b29c-6eb5053a54ae"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b266a00a-063f-5e9d-b29c-6eb5053a54ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "7fac3260-8cb4-5086-8b4f-d6a5a8a04ef5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7fac3260-8cb4-5086-8b4f-d6a5a8a04ef5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "ab234728-bfdb-56a2-b345-9367b92cf9e1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ab234728-bfdb-56a2-b345-9367b92cf9e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' 
"split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "842463dd-c2f0-5fbe-b25f-a7383fccc066"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "842463dd-c2f0-5fbe-b25f-a7383fccc066",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "cbfd6457-5475-5cb4-80e1-6f283dca5497"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cbfd6457-5475-5cb4-80e1-6f283dca5497",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "3889612e-6409-59cf-805b-53f06a410392"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3889612e-6409-59cf-805b-53f06a410392",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "75e3a63a-f4e1-500e-9c46-00bfaae01314"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "75e3a63a-f4e1-500e-9c46-00bfaae01314",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "a3ac040b-4963-5e80-b6ee-65d2d7d77fef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a3ac040b-4963-5e80-b6ee-65d2d7d77fef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "d4369962-7c86-5b17-9f6d-b22e4295f69a"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d4369962-7c86-5b17-9f6d-b22e4295f69a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "1a8fd762-17bf-4486-8d9f-7526056536cb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "1a8fd762-17bf-4486-8d9f-7526056536cb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "1a8fd762-17bf-4486-8d9f-7526056536cb",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "a35e9153-d79d-40da-ba47-096d2e214dda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "74a8028e-5cb7-4d39-884c-abfbd781427f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "921d64ca-4dbe-4217-b026-7160169585fe"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "921d64ca-4dbe-4217-b026-7160169585fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": 
false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "921d64ca-4dbe-4217-b026-7160169585fe",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "9b9fbf8c-8dce-4975-92eb-99acf8c908b3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "405b9d40-a1c9-42c2-bf47-d58fe381d6e0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "fcc40bac-6136-4d89-bc4b-86f4d61b9d1d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fcc40bac-6136-4d89-bc4b-86f4d61b9d1d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "fcc40bac-6136-4d89-bc4b-86f4d61b9d1d",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "9143da10-89de-4bdb-aa40-f1c56c868972",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "a44e6051-4b7a-4477-80cd-87ab057dffeb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "a5a00b8f-9a6b-41aa-898d-9ce526be531c"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "a5a00b8f-9a6b-41aa-898d-9ce526be531c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:12:47.619 07:23:21 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:12:47.619 07:23:21 blockdev_general -- 
bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:12:47.619 07:23:21 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:12:47.620 07:23:21 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 127756 00:12:47.620 07:23:21 blockdev_general -- common/autotest_common.sh@946 -- # '[' -z 127756 ']' 00:12:47.620 07:23:21 blockdev_general -- common/autotest_common.sh@950 -- # kill -0 127756 00:12:47.620 07:23:21 blockdev_general -- common/autotest_common.sh@951 -- # uname 00:12:47.620 07:23:21 blockdev_general -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:47.620 07:23:21 blockdev_general -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 127756 00:12:47.620 07:23:21 blockdev_general -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:47.620 07:23:21 blockdev_general -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:47.620 07:23:21 blockdev_general -- common/autotest_common.sh@964 -- # echo 'killing process with pid 127756' 00:12:47.620 killing process with pid 127756 00:12:47.620 07:23:21 blockdev_general -- common/autotest_common.sh@965 -- # kill 127756 00:12:47.620 07:23:21 blockdev_general -- common/autotest_common.sh@970 -- # wait 127756 00:12:48.556 07:23:22 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:48.556 07:23:22 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:48.556 07:23:22 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:12:48.556 07:23:22 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:48.556 07:23:22 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:48.556 ************************************ 00:12:48.556 START TEST bdev_hello_world 00:12:48.556 ************************************ 00:12:48.556 07:23:22 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:48.556 [2024-07-12 07:23:22.261349] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
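[Editor's note] The wall of JSON above is the raw bdev_get_bdevs dump that blockdev.sh narrows down to its test targets; the bdev names and the claimed-filter below are taken directly from the trace, condensed into one pipeline:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # every bdev with "claimed": false is a candidate target; Malloc0 heads the
    # list and is picked as hello_world_bdev for the hello_bdev run that follows
    $RPC bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'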
00:12:48.556 [2024-07-12 07:23:22.261652] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127819 ] 00:12:48.556 [2024-07-12 07:23:22.414652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.814 [2024-07-12 07:23:22.497260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.814 [2024-07-12 07:23:22.678643] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:48.814 [2024-07-12 07:23:22.679055] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:48.814 [2024-07-12 07:23:22.686534] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:48.814 [2024-07-12 07:23:22.686715] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:48.814 [2024-07-12 07:23:22.694578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:48.814 [2024-07-12 07:23:22.694734] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:48.814 [2024-07-12 07:23:22.694892] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:49.073 [2024-07-12 07:23:22.808234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:49.073 [2024-07-12 07:23:22.808618] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.073 [2024-07-12 07:23:22.808710] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:49.073 [2024-07-12 07:23:22.808952] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.073 [2024-07-12 07:23:22.811943] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.073 [2024-07-12 07:23:22.812123] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:49.334 [2024-07-12 07:23:23.006370] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:49.334 [2024-07-12 07:23:23.006805] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:12:49.334 [2024-07-12 07:23:23.007187] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:49.334 [2024-07-12 07:23:23.007378] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:49.334 [2024-07-12 07:23:23.007552] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:49.334 [2024-07-12 07:23:23.007666] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:49.334 [2024-07-12 07:23:23.007780] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
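[Editor's note] That Hello World round trip can be reproduced by hand with the same arguments the harness used (command line copied from the run_test invocation above):

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0
    # expected NOTICE sequence: start app -> open Malloc0 -> open io channel ->
    # write "Hello World!" -> read it back -> stop, exactly as logged around this point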
00:12:49.334 00:12:49.334 [2024-07-12 07:23:23.007978] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:49.900 00:12:49.900 real 0m1.447s 00:12:49.900 user 0m0.851s 00:12:49.901 sys 0m0.447s 00:12:49.901 07:23:23 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:49.901 07:23:23 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:49.901 ************************************ 00:12:49.901 END TEST bdev_hello_world 00:12:49.901 ************************************ 00:12:49.901 07:23:23 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:12:49.901 07:23:23 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:49.901 07:23:23 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:49.901 07:23:23 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:49.901 ************************************ 00:12:49.901 START TEST bdev_bounds 00:12:49.901 ************************************ 00:12:49.901 07:23:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:12:49.901 07:23:23 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=127863 00:12:49.901 07:23:23 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:49.901 Process bdevio pid: 127863 00:12:49.901 07:23:23 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 127863' 00:12:49.901 07:23:23 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 127863 00:12:49.901 07:23:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 127863 ']' 00:12:49.901 07:23:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.901 07:23:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:49.901 07:23:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.901 07:23:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:49.901 07:23:23 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:49.901 07:23:23 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:49.901 [2024-07-12 07:23:23.778552] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
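[Editor's note] bdev_bounds splits the work between a long-running bdevio server and a one-shot RPC trigger. A sketch of that pairing, with flags mirrored from the trace (-w holds bdevio until it is driven over RPC; -s 0 is the PRE_RESERVED_MEM value set in blockdev.sh earlier):

    BDEVIO=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
    $BDEVIO -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!   # 127863 in this run
    # once /var/tmp/spdk.sock is listening, kick off the CUnit suites shown below
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests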
00:12:49.901 [2024-07-12 07:23:23.778827] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127863 ] 00:12:50.158 [2024-07-12 07:23:23.943567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:50.158 [2024-07-12 07:23:24.024272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.158 [2024-07-12 07:23:24.024176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.158 [2024-07-12 07:23:24.024276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.417 [2024-07-12 07:23:24.207675] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:50.417 [2024-07-12 07:23:24.208057] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:50.417 [2024-07-12 07:23:24.215551] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:50.417 [2024-07-12 07:23:24.215717] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:50.417 [2024-07-12 07:23:24.223645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:50.417 [2024-07-12 07:23:24.223816] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:50.417 [2024-07-12 07:23:24.223962] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:50.674 [2024-07-12 07:23:24.338698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:50.674 [2024-07-12 07:23:24.339080] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.674 [2024-07-12 07:23:24.339187] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:50.674 [2024-07-12 07:23:24.339534] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.674 [2024-07-12 07:23:24.342766] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.674 [2024-07-12 07:23:24.342942] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:50.933 07:23:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:50.933 07:23:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:12:50.933 07:23:24 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:50.933 I/O targets: 00:12:50.933 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:12:50.933 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:12:50.933 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:12:50.933 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:12:50.933 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:12:50.933 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:12:50.933 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:12:50.933 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:12:50.933 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:12:50.933 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:12:50.933 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:12:50.933 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:12:50.933 raid0: 131072 blocks of 512 bytes (64 MiB) 00:12:50.933 concat0: 131072 blocks of 512 bytes (64 MiB) 00:12:50.933 raid1: 65536 
blocks of 512 bytes (32 MiB) 00:12:50.933 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:12:50.933 00:12:50.933 00:12:50.933 CUnit - A unit testing framework for C - Version 2.1-3 00:12:50.933 http://cunit.sourceforge.net/ 00:12:50.933 00:12:50.933 00:12:50.933 Suite: bdevio tests on: AIO0 00:12:50.933 Test: blockdev write read block ...passed 00:12:50.933 Test: blockdev write zeroes read block ...passed 00:12:50.933 Test: blockdev write zeroes read no split ...passed 00:12:51.193 Test: blockdev write zeroes read split ...passed 00:12:51.193 Test: blockdev write zeroes read split partial ...passed 00:12:51.193 Test: blockdev reset ...passed 00:12:51.193 Test: blockdev write read 8 blocks ...passed 00:12:51.193 Test: blockdev write read size > 128k ...passed 00:12:51.193 Test: blockdev write read invalid size ...passed 00:12:51.193 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:51.193 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:51.193 Test: blockdev write read max offset ...passed 00:12:51.193 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:51.193 Test: blockdev writev readv 8 blocks ...passed 00:12:51.193 Test: blockdev writev readv 30 x 1block ...passed 00:12:51.193 Test: blockdev writev readv block ...passed 00:12:51.193 Test: blockdev writev readv size > 128k ...passed 00:12:51.193 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:51.193 Test: blockdev comparev and writev ...passed 00:12:51.193 Test: blockdev nvme passthru rw ...passed 00:12:51.193 Test: blockdev nvme passthru vendor specific ...passed 00:12:51.193 Test: blockdev nvme admin passthru ...passed 00:12:51.193 Test: blockdev copy ...passed 00:12:51.193 Suite: bdevio tests on: raid1 00:12:51.193 Test: blockdev write read block ...passed 00:12:51.193 Test: blockdev write zeroes read block ...passed 00:12:51.193 Test: blockdev write zeroes read no split ...passed 00:12:51.193 Test: blockdev write zeroes read split ...passed 00:12:51.193 Test: blockdev write zeroes read split partial ...passed 00:12:51.193 Test: blockdev reset ...passed 00:12:51.193 Test: blockdev write read 8 blocks ...passed 00:12:51.193 Test: blockdev write read size > 128k ...passed 00:12:51.193 Test: blockdev write read invalid size ...passed 00:12:51.193 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:51.193 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:51.193 Test: blockdev write read max offset ...passed 00:12:51.193 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:51.193 Test: blockdev writev readv 8 blocks ...passed 00:12:51.193 Test: blockdev writev readv 30 x 1block ...passed 00:12:51.193 Test: blockdev writev readv block ...passed 00:12:51.193 Test: blockdev writev readv size > 128k ...passed 00:12:51.193 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:51.193 Test: blockdev comparev and writev ...passed 00:12:51.193 Test: blockdev nvme passthru rw ...passed 00:12:51.193 Test: blockdev nvme passthru vendor specific ...passed 00:12:51.193 Test: blockdev nvme admin passthru ...passed 00:12:51.193 Test: blockdev copy ...passed 00:12:51.193 Suite: bdevio tests on: concat0 00:12:51.193 Test: blockdev write read block ...passed 00:12:51.193 Test: blockdev write zeroes read block ...passed 00:12:51.193 Test: blockdev write zeroes read no split ...passed 00:12:51.193 Test: blockdev write zeroes read split ...passed 00:12:51.193 Test: 
blockdev write zeroes read split partial ...passed 00:12:51.193 Test: blockdev reset ...passed 00:12:51.193 Test: blockdev write read 8 blocks ...passed 00:12:51.193 Test: blockdev write read size > 128k ...passed 00:12:51.193 Test: blockdev write read invalid size ...passed 00:12:51.193 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:51.193 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:51.193 Test: blockdev write read max offset ...passed 00:12:51.193 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:51.193 Test: blockdev writev readv 8 blocks ...passed 00:12:51.193 Test: blockdev writev readv 30 x 1block ...passed 00:12:51.193 Test: blockdev writev readv block ...passed 00:12:51.193 Test: blockdev writev readv size > 128k ...passed 00:12:51.193 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:51.193 Test: blockdev comparev and writev ...passed 00:12:51.193 Test: blockdev nvme passthru rw ...passed 00:12:51.193 Test: blockdev nvme passthru vendor specific ...passed 00:12:51.193 Test: blockdev nvme admin passthru ...passed 00:12:51.193 Test: blockdev copy ...passed 00:12:51.193 Suite: bdevio tests on: raid0 00:12:51.193 Test: blockdev write read block ...passed 00:12:51.193 Test: blockdev write zeroes read block ...passed 00:12:51.193 Test: blockdev write zeroes read no split ...passed 00:12:51.193 Test: blockdev write zeroes read split ...passed 00:12:51.193 Test: blockdev write zeroes read split partial ...passed 00:12:51.193 Test: blockdev reset ...passed 00:12:51.193 Test: blockdev write read 8 blocks ...passed 00:12:51.193 Test: blockdev write read size > 128k ...passed 00:12:51.193 Test: blockdev write read invalid size ...passed 00:12:51.193 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:51.193 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:51.193 Test: blockdev write read max offset ...passed 00:12:51.193 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:51.193 Test: blockdev writev readv 8 blocks ...passed 00:12:51.193 Test: blockdev writev readv 30 x 1block ...passed 00:12:51.193 Test: blockdev writev readv block ...passed 00:12:51.193 Test: blockdev writev readv size > 128k ...passed 00:12:51.193 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:51.193 Test: blockdev comparev and writev ...passed 00:12:51.193 Test: blockdev nvme passthru rw ...passed 00:12:51.193 Test: blockdev nvme passthru vendor specific ...passed 00:12:51.193 Test: blockdev nvme admin passthru ...passed 00:12:51.193 Test: blockdev copy ...passed 00:12:51.193 Suite: bdevio tests on: TestPT 00:12:51.193 Test: blockdev write read block ...passed 00:12:51.193 Test: blockdev write zeroes read block ...passed 00:12:51.193 Test: blockdev write zeroes read no split ...passed 00:12:51.193 Test: blockdev write zeroes read split ...passed 00:12:51.193 Test: blockdev write zeroes read split partial ...passed 00:12:51.193 Test: blockdev reset ...passed 00:12:51.193 Test: blockdev write read 8 blocks ...passed 00:12:51.193 Test: blockdev write read size > 128k ...passed 00:12:51.193 Test: blockdev write read invalid size ...passed 00:12:51.193 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:51.193 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:51.193 Test: blockdev write read max offset ...passed 00:12:51.193 Test: blockdev write read 2 blocks on 
overlapped address offset ...passed 00:12:51.193 Test: blockdev writev readv 8 blocks ...passed 00:12:51.194 Test: blockdev writev readv 30 x 1block ...passed 00:12:51.194 Test: blockdev writev readv block ...passed 00:12:51.194 Test: blockdev writev readv size > 128k ...passed 00:12:51.194 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:51.194 Test: blockdev comparev and writev ...passed 00:12:51.194 Test: blockdev nvme passthru rw ...passed 00:12:51.194 Test: blockdev nvme passthru vendor specific ...passed 00:12:51.194 Test: blockdev nvme admin passthru ...passed 00:12:51.194 Test: blockdev copy ...passed 00:12:51.194 Suite: bdevio tests on: Malloc2p7 00:12:51.194 Test: blockdev write read block ...passed 00:12:51.194 Test: blockdev write zeroes read block ...passed 00:12:51.194 Test: blockdev write zeroes read no split ...passed 00:12:51.194 Test: blockdev write zeroes read split ...passed 00:12:51.194 Test: blockdev write zeroes read split partial ...passed 00:12:51.194 Test: blockdev reset ...passed 00:12:51.194 Test: blockdev write read 8 blocks ...passed 00:12:51.194 Test: blockdev write read size > 128k ...passed 00:12:51.194 Test: blockdev write read invalid size ...passed 00:12:51.194 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:51.194 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:51.194 Test: blockdev write read max offset ...passed 00:12:51.194 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:51.194 Test: blockdev writev readv 8 blocks ...passed 00:12:51.194 Test: blockdev writev readv 30 x 1block ...passed 00:12:51.194 Test: blockdev writev readv block ...passed 00:12:51.194 Test: blockdev writev readv size > 128k ...passed 00:12:51.194 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:51.194 Test: blockdev comparev and writev ...passed 00:12:51.194 Test: blockdev nvme passthru rw ...passed 00:12:51.194 Test: blockdev nvme passthru vendor specific ...passed 00:12:51.194 Test: blockdev nvme admin passthru ...passed 00:12:51.194 Test: blockdev copy ...passed 00:12:51.194 Suite: bdevio tests on: Malloc2p6 00:12:51.194 Test: blockdev write read block ...passed 00:12:51.194 Test: blockdev write zeroes read block ...passed 00:12:51.194 Test: blockdev write zeroes read no split ...passed 00:12:51.194 Test: blockdev write zeroes read split ...passed 00:12:51.194 Test: blockdev write zeroes read split partial ...passed 00:12:51.194 Test: blockdev reset ...passed 00:12:51.194 Test: blockdev write read 8 blocks ...passed 00:12:51.194 Test: blockdev write read size > 128k ...passed 00:12:51.194 Test: blockdev write read invalid size ...passed 00:12:51.194 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:51.194 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:51.194 Test: blockdev write read max offset ...passed 00:12:51.194 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:51.194 Test: blockdev writev readv 8 blocks ...passed 00:12:51.194 Test: blockdev writev readv 30 x 1block ...passed 00:12:51.194 Test: blockdev writev readv block ...passed 00:12:51.194 Test: blockdev writev readv size > 128k ...passed 00:12:51.194 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:51.194 Test: blockdev comparev and writev ...passed 00:12:51.194 Test: blockdev nvme passthru rw ...passed 00:12:51.194 Test: blockdev nvme passthru vendor specific ...passed 00:12:51.194 
Test: blockdev nvme admin passthru ...passed 00:12:51.194 Test: blockdev copy ...passed 00:12:51.194 Suite: bdevio tests on: Malloc2p5 00:12:51.194 Test: blockdev write read block ...passed 00:12:51.194 Test: blockdev write zeroes read block ...passed 00:12:51.194 Test: blockdev write zeroes read no split ...passed 00:12:51.194 Test: blockdev write zeroes read split ...passed 00:12:51.194 Test: blockdev write zeroes read split partial ...passed 00:12:51.194 Test: blockdev reset ...passed 00:12:51.194 Test: blockdev write read 8 blocks ...passed 00:12:51.194 Test: blockdev write read size > 128k ...passed 00:12:51.194 Test: blockdev write read invalid size ...passed 00:12:51.194 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:51.194 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:51.194 Test: blockdev write read max offset ...passed 00:12:51.194 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:51.194 Test: blockdev writev readv 8 blocks ...passed 00:12:51.194 Test: blockdev writev readv 30 x 1block ...passed 00:12:51.194 Test: blockdev writev readv block ...passed 00:12:51.194 Test: blockdev writev readv size > 128k ...passed 00:12:51.194 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:51.194 Test: blockdev comparev and writev ...passed 00:12:51.194 Test: blockdev nvme passthru rw ...passed 00:12:51.194 Test: blockdev nvme passthru vendor specific ...passed 00:12:51.194 Test: blockdev nvme admin passthru ...passed 00:12:51.194 Test: blockdev copy ...passed 00:12:51.194 Suite: bdevio tests on: Malloc2p4 00:12:51.194 Test: blockdev write read block ...passed 00:12:51.194 Test: blockdev write zeroes read block ...passed 00:12:51.194 Test: blockdev write zeroes read no split ...passed 00:12:51.194 Test: blockdev write zeroes read split ...passed 00:12:51.194 Test: blockdev write zeroes read split partial ...passed 00:12:51.194 Test: blockdev reset ...passed 00:12:51.194 Test: blockdev write read 8 blocks ...passed 00:12:51.194 Test: blockdev write read size > 128k ...passed 00:12:51.194 Test: blockdev write read invalid size ...passed 00:12:51.194 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:51.194 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:51.194 Test: blockdev write read max offset ...passed 00:12:51.194 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:51.194 Test: blockdev writev readv 8 blocks ...passed 00:12:51.194 Test: blockdev writev readv 30 x 1block ...passed 00:12:51.194 Test: blockdev writev readv block ...passed 00:12:51.194 Test: blockdev writev readv size > 128k ...passed 00:12:51.194 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:51.194 Test: blockdev comparev and writev ...passed 00:12:51.194 Test: blockdev nvme passthru rw ...passed 00:12:51.194 Test: blockdev nvme passthru vendor specific ...passed 00:12:51.194 Test: blockdev nvme admin passthru ...passed 00:12:51.194 Test: blockdev copy ...passed 00:12:51.194 Suite: bdevio tests on: Malloc2p3 00:12:51.194 Test: blockdev write read block ...passed 00:12:51.194 Test: blockdev write zeroes read block ...passed 00:12:51.194 Test: blockdev write zeroes read no split ...passed 00:12:51.194 Test: blockdev write zeroes read split ...passed 00:12:51.194 Test: blockdev write zeroes read split partial ...passed 00:12:51.194 Test: blockdev reset ...passed 00:12:51.194 Test: blockdev write read 8 blocks ...passed 
00:12:51.194 Test: blockdev write read size > 128k ...passed 00:12:51.194 Test: blockdev write read invalid size ...passed 00:12:51.194 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:51.194 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:51.194 Test: blockdev write read max offset ...passed 00:12:51.194 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:51.194 Test: blockdev writev readv 8 blocks ...passed 00:12:51.194 Test: blockdev writev readv 30 x 1block ...passed 00:12:51.194 Test: blockdev writev readv block ...passed 00:12:51.194 Test: blockdev writev readv size > 128k ...passed 00:12:51.194 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:51.194 Test: blockdev comparev and writev ...passed 00:12:51.194 Test: blockdev nvme passthru rw ...passed 00:12:51.194 Test: blockdev nvme passthru vendor specific ...passed 00:12:51.194 Test: blockdev nvme admin passthru ...passed 00:12:51.194 Test: blockdev copy ...passed 00:12:51.194 Suite: bdevio tests on: Malloc2p2 00:12:51.194 Test: blockdev write read block ...passed 00:12:51.194 Test: blockdev write zeroes read block ...passed 00:12:51.194 Test: blockdev write zeroes read no split ...passed 00:12:51.194 Test: blockdev write zeroes read split ...passed 00:12:51.194 Test: blockdev write zeroes read split partial ...passed 00:12:51.194 Test: blockdev reset ...passed 00:12:51.194 Test: blockdev write read 8 blocks ...passed 00:12:51.194 Test: blockdev write read size > 128k ...passed 00:12:51.194 Test: blockdev write read invalid size ...passed 00:12:51.194 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:51.194 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:51.194 Test: blockdev write read max offset ...passed 00:12:51.194 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:51.194 Test: blockdev writev readv 8 blocks ...passed 00:12:51.194 Test: blockdev writev readv 30 x 1block ...passed 00:12:51.194 Test: blockdev writev readv block ...passed 00:12:51.194 Test: blockdev writev readv size > 128k ...passed 00:12:51.194 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:51.194 Test: blockdev comparev and writev ...passed 00:12:51.194 Test: blockdev nvme passthru rw ...passed 00:12:51.194 Test: blockdev nvme passthru vendor specific ...passed 00:12:51.194 Test: blockdev nvme admin passthru ...passed 00:12:51.194 Test: blockdev copy ...passed 00:12:51.194 Suite: bdevio tests on: Malloc2p1 00:12:51.194 Test: blockdev write read block ...passed 00:12:51.194 Test: blockdev write zeroes read block ...passed 00:12:51.194 Test: blockdev write zeroes read no split ...passed 00:12:51.194 Test: blockdev write zeroes read split ...passed 00:12:51.194 Test: blockdev write zeroes read split partial ...passed 00:12:51.194 Test: blockdev reset ...passed 00:12:51.194 Test: blockdev write read 8 blocks ...passed 00:12:51.194 Test: blockdev write read size > 128k ...passed 00:12:51.194 Test: blockdev write read invalid size ...passed 00:12:51.194 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:51.194 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:51.194 Test: blockdev write read max offset ...passed 00:12:51.194 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:51.194 Test: blockdev writev readv 8 blocks ...passed 00:12:51.194 Test: blockdev writev readv 30 x 
1block ...passed 00:12:51.194 Test: blockdev writev readv block ...passed 00:12:51.194 Test: blockdev writev readv size > 128k ...passed 00:12:51.195 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:51.195 Test: blockdev comparev and writev ...passed 00:12:51.195 Test: blockdev nvme passthru rw ...passed 00:12:51.195 Test: blockdev nvme passthru vendor specific ...passed 00:12:51.195 Test: blockdev nvme admin passthru ...passed 00:12:51.195 Test: blockdev copy ...passed 00:12:51.195 Suite: bdevio tests on: Malloc2p0 00:12:51.195 Test: blockdev write read block ...passed 00:12:51.195 Test: blockdev write zeroes read block ...passed 00:12:51.195 Test: blockdev write zeroes read no split ...passed 00:12:51.195 Test: blockdev write zeroes read split ...passed 00:12:51.195 Test: blockdev write zeroes read split partial ...passed 00:12:51.195 Test: blockdev reset ...passed 00:12:51.195 Test: blockdev write read 8 blocks ...passed 00:12:51.195 Test: blockdev write read size > 128k ...passed 00:12:51.195 Test: blockdev write read invalid size ...passed 00:12:51.195 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:51.195 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:51.195 Test: blockdev write read max offset ...passed 00:12:51.195 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:51.195 Test: blockdev writev readv 8 blocks ...passed 00:12:51.195 Test: blockdev writev readv 30 x 1block ...passed 00:12:51.195 Test: blockdev writev readv block ...passed 00:12:51.195 Test: blockdev writev readv size > 128k ...passed 00:12:51.195 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:51.195 Test: blockdev comparev and writev ...passed 00:12:51.195 Test: blockdev nvme passthru rw ...passed 00:12:51.195 Test: blockdev nvme passthru vendor specific ...passed 00:12:51.195 Test: blockdev nvme admin passthru ...passed 00:12:51.195 Test: blockdev copy ...passed 00:12:51.195 Suite: bdevio tests on: Malloc1p1 00:12:51.195 Test: blockdev write read block ...passed 00:12:51.195 Test: blockdev write zeroes read block ...passed 00:12:51.195 Test: blockdev write zeroes read no split ...passed 00:12:51.195 Test: blockdev write zeroes read split ...passed 00:12:51.195 Test: blockdev write zeroes read split partial ...passed 00:12:51.195 Test: blockdev reset ...passed 00:12:51.195 Test: blockdev write read 8 blocks ...passed 00:12:51.195 Test: blockdev write read size > 128k ...passed 00:12:51.195 Test: blockdev write read invalid size ...passed 00:12:51.195 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:51.195 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:51.195 Test: blockdev write read max offset ...passed 00:12:51.195 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:51.195 Test: blockdev writev readv 8 blocks ...passed 00:12:51.195 Test: blockdev writev readv 30 x 1block ...passed 00:12:51.195 Test: blockdev writev readv block ...passed 00:12:51.195 Test: blockdev writev readv size > 128k ...passed 00:12:51.195 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:51.195 Test: blockdev comparev and writev ...passed 00:12:51.195 Test: blockdev nvme passthru rw ...passed 00:12:51.195 Test: blockdev nvme passthru vendor specific ...passed 00:12:51.195 Test: blockdev nvme admin passthru ...passed 00:12:51.195 Test: blockdev copy ...passed 00:12:51.195 Suite: bdevio tests on: Malloc1p0 
00:12:51.195 Test: blockdev write read block ...passed 00:12:51.195 Test: blockdev write zeroes read block ...passed 00:12:51.195 Test: blockdev write zeroes read no split ...passed 00:12:51.195 Test: blockdev write zeroes read split ...passed 00:12:51.195 Test: blockdev write zeroes read split partial ...passed 00:12:51.195 Test: blockdev reset ...passed 00:12:51.195 Test: blockdev write read 8 blocks ...passed 00:12:51.195 Test: blockdev write read size > 128k ...passed 00:12:51.195 Test: blockdev write read invalid size ...passed 00:12:51.195 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:51.195 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:51.195 Test: blockdev write read max offset ...passed 00:12:51.195 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:51.195 Test: blockdev writev readv 8 blocks ...passed 00:12:51.195 Test: blockdev writev readv 30 x 1block ...passed 00:12:51.195 Test: blockdev writev readv block ...passed 00:12:51.195 Test: blockdev writev readv size > 128k ...passed 00:12:51.195 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:51.195 Test: blockdev comparev and writev ...passed 00:12:51.195 Test: blockdev nvme passthru rw ...passed 00:12:51.195 Test: blockdev nvme passthru vendor specific ...passed 00:12:51.195 Test: blockdev nvme admin passthru ...passed 00:12:51.195 Test: blockdev copy ...passed 00:12:51.195 Suite: bdevio tests on: Malloc0 00:12:51.195 Test: blockdev write read block ...passed 00:12:51.195 Test: blockdev write zeroes read block ...passed 00:12:51.195 Test: blockdev write zeroes read no split ...passed 00:12:51.195 Test: blockdev write zeroes read split ...passed 00:12:51.454 Test: blockdev write zeroes read split partial ...passed 00:12:51.454 Test: blockdev reset ...passed 00:12:51.454 Test: blockdev write read 8 blocks ...passed 00:12:51.454 Test: blockdev write read size > 128k ...passed 00:12:51.454 Test: blockdev write read invalid size ...passed 00:12:51.454 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:51.454 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:51.454 Test: blockdev write read max offset ...passed 00:12:51.454 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:51.454 Test: blockdev writev readv 8 blocks ...passed 00:12:51.454 Test: blockdev writev readv 30 x 1block ...passed 00:12:51.454 Test: blockdev writev readv block ...passed 00:12:51.454 Test: blockdev writev readv size > 128k ...passed 00:12:51.454 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:51.454 Test: blockdev comparev and writev ...passed 00:12:51.454 Test: blockdev nvme passthru rw ...passed 00:12:51.454 Test: blockdev nvme passthru vendor specific ...passed 00:12:51.454 Test: blockdev nvme admin passthru ...passed 00:12:51.454 Test: blockdev copy ...passed 00:12:51.454 00:12:51.454 Run Summary: Type Total Ran Passed Failed Inactive 00:12:51.454 suites 16 16 n/a 0 0 00:12:51.454 tests 368 368 368 0 0 00:12:51.454 asserts 2224 2224 2224 0 n/a 00:12:51.454 00:12:51.454 Elapsed time = 0.621 seconds 00:12:51.454 0 00:12:51.454 07:23:25 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 127863 00:12:51.454 07:23:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 127863 ']' 00:12:51.454 07:23:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 127863 00:12:51.454 07:23:25 
blockdev_general.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:12:51.454 07:23:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:51.454 07:23:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 127863 00:12:51.454 07:23:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:51.454 07:23:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:51.454 07:23:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 127863' 00:12:51.454 killing process with pid 127863 00:12:51.454 07:23:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@965 -- # kill 127863 00:12:51.454 07:23:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@970 -- # wait 127863 00:12:52.020 07:23:25 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:12:52.020 00:12:52.020 real 0m2.035s 00:12:52.020 user 0m4.545s 00:12:52.020 sys 0m0.629s 00:12:52.020 07:23:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:52.020 ************************************ 00:12:52.020 END TEST bdev_bounds 00:12:52.020 ************************************ 00:12:52.020 07:23:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:52.020 07:23:25 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:52.020 07:23:25 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:12:52.020 07:23:25 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:52.020 07:23:25 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:52.020 ************************************ 00:12:52.020 START TEST bdev_nbd 00:12:52.020 ************************************ 00:12:52.020 07:23:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:52.020 07:23:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:12:52.020 07:23:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:12:52.020 07:23:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:52.020 07:23:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:52.020 07:23:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:52.020 07:23:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:12:52.020 07:23:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:12:52.020 07:23:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:12:52.020 07:23:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:52.020 07:23:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:12:52.020 07:23:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=16 00:12:52.020 07:23:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:52.020 07:23:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:12:52.020 07:23:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:52.020 07:23:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:12:52.021 07:23:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=127926 00:12:52.021 07:23:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:52.021 07:23:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:52.021 07:23:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 127926 /var/tmp/spdk-nbd.sock 00:12:52.021 07:23:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 127926 ']' 00:12:52.021 07:23:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:52.021 07:23:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:52.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:52.021 07:23:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:52.021 07:23:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:52.021 07:23:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:52.021 [2024-07-12 07:23:25.892232] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:12:52.021 [2024-07-12 07:23:25.892503] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.279 [2024-07-12 07:23:26.050262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.279 [2024-07-12 07:23:26.136694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.538 [2024-07-12 07:23:26.318293] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:52.538 [2024-07-12 07:23:26.318669] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:52.538 [2024-07-12 07:23:26.326199] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:52.538 [2024-07-12 07:23:26.326379] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:52.538 [2024-07-12 07:23:26.334247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:52.538 [2024-07-12 07:23:26.334444] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:52.538 [2024-07-12 07:23:26.334615] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:52.796 [2024-07-12 07:23:26.447763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:52.796 [2024-07-12 07:23:26.448065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.796 [2024-07-12 07:23:26.448153] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:52.796 [2024-07-12 07:23:26.448273] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.796 [2024-07-12 07:23:26.451222] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.796 [2024-07-12 07:23:26.451408] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:53.054 07:23:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:53.054 07:23:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:12:53.054 07:23:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:53.054 07:23:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:53.054 07:23:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:53.054 07:23:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:53.055 07:23:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:53.055 07:23:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:53.055 07:23:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 
'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:53.055 07:23:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:53.055 07:23:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:53.055 07:23:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:53.055 07:23:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:53.055 07:23:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:53.055 07:23:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:12:53.313 07:23:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:53.313 07:23:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:53.313 07:23:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:53.313 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:12:53.313 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:53.313 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:53.313 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:53.313 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:12:53.313 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:53.313 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:53.313 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:53.313 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.313 1+0 records in 00:12:53.313 1+0 records out 00:12:53.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371635 s, 11.0 MB/s 00:12:53.313 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.313 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:53.313 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.313 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:53.313 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:53.313 07:23:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:53.313 07:23:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:53.313 07:23:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:12:53.572 07:23:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:53.572 07:23:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:53.572 07:23:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:53.572 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:12:53.572 07:23:27 
blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:53.572 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:53.572 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:53.572 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:12:53.572 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:53.572 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:53.572 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:53.572 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.572 1+0 records in 00:12:53.572 1+0 records out 00:12:53.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325613 s, 12.6 MB/s 00:12:53.572 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.572 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:53.572 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.572 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:53.572 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:53.572 07:23:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:53.572 07:23:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:53.572 07:23:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:12:54.140 07:23:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:54.140 07:23:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:54.140 07:23:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:54.140 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:12:54.140 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:54.140 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:54.140 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:54.140 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:12:54.140 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:54.140 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:54.140 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:54.140 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.140 1+0 records in 00:12:54.140 1+0 records out 00:12:54.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473852 s, 8.6 MB/s 00:12:54.140 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.140 07:23:27 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@882 -- # size=4096 00:12:54.140 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.140 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:54.140 07:23:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:54.140 07:23:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:54.140 07:23:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:54.140 07:23:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:12:54.398 07:23:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:54.398 07:23:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:54.398 07:23:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:54.398 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:12:54.398 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:54.398 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:54.398 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:54.398 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:12:54.398 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:54.398 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:54.398 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:54.398 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.398 1+0 records in 00:12:54.398 1+0 records out 00:12:54.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526064 s, 7.8 MB/s 00:12:54.398 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.398 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:54.398 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.398 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:54.398 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:54.398 07:23:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:54.398 07:23:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:54.398 07:23:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:12:54.655 07:23:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:54.655 07:23:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:54.655 07:23:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:54.655 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:12:54.655 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 
-- # local i 00:12:54.655 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:54.655 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:54.655 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:12:54.655 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:54.655 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:54.655 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:54.655 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.655 1+0 records in 00:12:54.655 1+0 records out 00:12:54.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427776 s, 9.6 MB/s 00:12:54.655 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.655 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:54.655 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.655 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:54.655 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:54.655 07:23:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:54.655 07:23:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:54.655 07:23:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:12:54.914 07:23:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:54.914 07:23:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:54.914 07:23:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:54.914 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:12:54.914 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:54.914 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:54.914 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:54.914 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:12:54.914 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:54.914 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:54.914 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:54.914 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.914 1+0 records in 00:12:54.914 1+0 records out 00:12:54.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524743 s, 7.8 MB/s 00:12:54.914 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.914 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:54.914 07:23:28 
blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.914 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:54.914 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:54.914 07:23:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:54.914 07:23:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:54.914 07:23:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:12:55.173 07:23:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:12:55.173 07:23:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:12:55.173 07:23:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:12:55.173 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd6 00:12:55.173 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:55.173 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:55.173 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:55.173 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd6 /proc/partitions 00:12:55.173 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:55.173 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:55.173 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:55.173 07:23:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.173 1+0 records in 00:12:55.173 1+0 records out 00:12:55.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517813 s, 7.9 MB/s 00:12:55.173 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.173 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:55.173 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.173 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:55.173 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:55.173 07:23:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:55.173 07:23:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:55.173 07:23:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:12:55.740 07:23:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:12:55.740 07:23:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:12:55.740 07:23:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:12:55.740 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd7 00:12:55.740 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:55.740 07:23:29 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:55.740 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:55.740 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd7 /proc/partitions 00:12:55.740 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:55.740 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:55.740 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:55.740 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.740 1+0 records in 00:12:55.740 1+0 records out 00:12:55.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627168 s, 6.5 MB/s 00:12:55.740 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.740 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:55.740 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.740 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:55.740 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:55.740 07:23:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:55.740 07:23:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:55.740 07:23:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:12:55.998 07:23:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:12:55.998 07:23:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:12:55.998 07:23:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:12:55.998 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd8 00:12:55.998 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:55.998 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:55.998 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:55.998 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd8 /proc/partitions 00:12:55.998 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:55.998 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:55.998 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:55.998 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.998 1+0 records in 00:12:55.998 1+0 records out 00:12:55.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046466 s, 8.8 MB/s 00:12:55.998 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.998 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:55.998 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.998 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:55.998 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:55.998 07:23:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:55.998 07:23:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:55.998 07:23:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:12:56.259 07:23:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:12:56.259 07:23:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:12:56.259 07:23:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:12:56.259 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd9 00:12:56.259 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:56.259 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:56.259 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:56.259 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd9 /proc/partitions 00:12:56.259 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:56.259 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:56.259 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:56.259 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.259 1+0 records in 00:12:56.259 1+0 records out 00:12:56.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000706797 s, 5.8 MB/s 00:12:56.259 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.259 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:56.259 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.259 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:56.259 07:23:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:56.259 07:23:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:56.259 07:23:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:56.259 07:23:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:12:56.525 07:23:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:12:56.525 07:23:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:12:56.525 07:23:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:12:56.525 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:12:56.525 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:56.525 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:56.525 07:23:30 
blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:56.525 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:12:56.525 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:56.525 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:56.525 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:56.525 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.525 1+0 records in 00:12:56.525 1+0 records out 00:12:56.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688975 s, 5.9 MB/s 00:12:56.525 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.525 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:56.525 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.525 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:56.525 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:56.525 07:23:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:56.525 07:23:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:56.525 07:23:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:12:56.784 07:23:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:12:56.784 07:23:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:12:56.784 07:23:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:12:56.784 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:12:56.784 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:56.784 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:56.784 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:56.784 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:12:56.784 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:56.784 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:56.784 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:56.784 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:56.784 1+0 records in 00:12:56.784 1+0 records out 00:12:56.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679574 s, 6.0 MB/s 00:12:56.784 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.784 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:56.784 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:56.784 07:23:30 
blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:56.784 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:56.784 07:23:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:56.784 07:23:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:56.784 07:23:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:12:57.042 07:23:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:12:57.042 07:23:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:12:57.042 07:23:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:12:57.042 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:12:57.042 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:57.042 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:57.042 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:57.042 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:12:57.042 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:57.042 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:57.042 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:57.042 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.042 1+0 records in 00:12:57.042 1+0 records out 00:12:57.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00070653 s, 5.8 MB/s 00:12:57.042 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.300 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:57.301 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.301 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:57.301 07:23:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:57.301 07:23:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:57.301 07:23:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:57.301 07:23:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:12:57.559 07:23:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:12:57.559 07:23:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:12:57.559 07:23:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:12:57.559 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:12:57.559 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:57.559 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:57.559 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i 
<= 20 )) 00:12:57.559 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:12:57.559 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:57.559 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:57.559 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:57.559 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.559 1+0 records in 00:12:57.559 1+0 records out 00:12:57.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000770568 s, 5.3 MB/s 00:12:57.559 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.559 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:57.559 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.559 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:57.559 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:57.559 07:23:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:57.559 07:23:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:57.559 07:23:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:12:57.817 07:23:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:12:57.817 07:23:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:12:57.817 07:23:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:12:57.817 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd14 00:12:57.817 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:57.817 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:57.817 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:57.817 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd14 /proc/partitions 00:12:57.817 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:57.817 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:57.818 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:57.818 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.818 1+0 records in 00:12:57.818 1+0 records out 00:12:57.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00075554 s, 5.4 MB/s 00:12:57.818 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.818 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:57.818 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.818 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 
']' 00:12:57.818 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:57.818 07:23:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:57.818 07:23:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:57.818 07:23:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:12:58.076 07:23:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:12:58.076 07:23:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:12:58.076 07:23:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:12:58.076 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd15 00:12:58.076 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:58.076 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:58.076 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:58.076 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd15 /proc/partitions 00:12:58.076 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:58.076 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:58.076 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:58.076 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.076 1+0 records in 00:12:58.076 1+0 records out 00:12:58.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111305 s, 3.7 MB/s 00:12:58.076 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.076 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:58.076 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.076 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:58.076 07:23:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:58.076 07:23:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:58.076 07:23:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:58.076 07:23:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:58.335 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd0", 00:12:58.335 "bdev_name": "Malloc0" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd1", 00:12:58.335 "bdev_name": "Malloc1p0" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd2", 00:12:58.335 "bdev_name": "Malloc1p1" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd3", 00:12:58.335 "bdev_name": "Malloc2p0" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd4", 00:12:58.335 "bdev_name": "Malloc2p1" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd5", 00:12:58.335 "bdev_name": "Malloc2p2" 00:12:58.335 }, 00:12:58.335 { 
00:12:58.335 "nbd_device": "/dev/nbd6", 00:12:58.335 "bdev_name": "Malloc2p3" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd7", 00:12:58.335 "bdev_name": "Malloc2p4" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd8", 00:12:58.335 "bdev_name": "Malloc2p5" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd9", 00:12:58.335 "bdev_name": "Malloc2p6" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd10", 00:12:58.335 "bdev_name": "Malloc2p7" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd11", 00:12:58.335 "bdev_name": "TestPT" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd12", 00:12:58.335 "bdev_name": "raid0" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd13", 00:12:58.335 "bdev_name": "concat0" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd14", 00:12:58.335 "bdev_name": "raid1" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd15", 00:12:58.335 "bdev_name": "AIO0" 00:12:58.335 } 00:12:58.335 ]' 00:12:58.335 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:58.335 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd0", 00:12:58.335 "bdev_name": "Malloc0" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd1", 00:12:58.335 "bdev_name": "Malloc1p0" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd2", 00:12:58.335 "bdev_name": "Malloc1p1" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd3", 00:12:58.335 "bdev_name": "Malloc2p0" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd4", 00:12:58.335 "bdev_name": "Malloc2p1" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd5", 00:12:58.335 "bdev_name": "Malloc2p2" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd6", 00:12:58.335 "bdev_name": "Malloc2p3" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd7", 00:12:58.335 "bdev_name": "Malloc2p4" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd8", 00:12:58.335 "bdev_name": "Malloc2p5" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd9", 00:12:58.335 "bdev_name": "Malloc2p6" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd10", 00:12:58.335 "bdev_name": "Malloc2p7" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd11", 00:12:58.335 "bdev_name": "TestPT" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd12", 00:12:58.335 "bdev_name": "raid0" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd13", 00:12:58.335 "bdev_name": "concat0" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd14", 00:12:58.335 "bdev_name": "raid1" 00:12:58.335 }, 00:12:58.335 { 00:12:58.335 "nbd_device": "/dev/nbd15", 00:12:58.335 "bdev_name": "AIO0" 00:12:58.335 } 00:12:58.335 ]' 00:12:58.335 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:58.335 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:12:58.335 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.335 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:12:58.335 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:58.335 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:58.335 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.335 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:58.595 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:58.595 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:58.595 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:58.595 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.595 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.595 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:58.595 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.595 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.595 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.595 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:58.595 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:58.595 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:58.595 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:58.595 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.595 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.595 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:58.595 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.595 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.595 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.595 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:58.854 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:59.114 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:59.114 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:59.114 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.114 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.114 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:59.114 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:59.114 07:23:32 
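From nbd_common.sh@49-55 the script begins tearing the same sixteen devices down: each /dev/nbdN is stopped over the RPC socket, then waitfornbd_exit (nbd_common.sh@35-45) polls /proc/partitions until the name disappears. A sketch of that stop-and-wait pair using the same socket and script path as the trace; the combined helper name and the sleep interval are illustrative, not from the source:

```bash
# Sketch of the teardown pair traced here: nbd_stop_disk via RPC, then wait
# for the kernel to drop the device (nbd_common.sh@49-55, @35-45).
nbd_stop_and_wait() {
    local dev=$1 name i
    name=$(basename "$dev")
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_stop_disk "$dev"
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions || break   # gone, teardown done
        sleep 0.1                                      # assumed pacing
    done
}
```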
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.114 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.114 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:59.114 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:59.114 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:59.114 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:59.114 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.114 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.114 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:59.114 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:59.114 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.114 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.114 07:23:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:59.373 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:59.373 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:59.373 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:59.373 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.373 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.373 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:59.373 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:59.373 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.373 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.373 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:59.631 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:59.631 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:59.631 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:59.631 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.631 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.631 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:59.631 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:59.631 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.631 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.631 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:59.890 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:59.890 
07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:59.890 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:59.890 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.890 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.890 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:59.890 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:59.890 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.890 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.890 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:13:00.149 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:13:00.149 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:13:00.149 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:13:00.149 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.149 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.149 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:13:00.149 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:00.149 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.149 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.149 07:23:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:13:00.407 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:13:00.407 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:13:00.407 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:13:00.407 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.407 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.407 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:13:00.407 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:00.407 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.407 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.407 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:13:00.407 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:13:00.407 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:13:00.407 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:13:00.407 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.407 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.407 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q 
-w nbd9 /proc/partitions 00:13:00.407 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:00.407 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.407 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.407 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:00.665 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:00.924 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:00.924 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:00.924 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.924 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.924 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:00.924 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:00.924 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.924 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.924 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:00.924 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:00.924 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:00.924 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:00.924 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.924 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.924 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:00.924 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:00.924 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.924 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.924 07:23:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:01.183 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:01.183 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:01.183 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:01.183 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.183 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.183 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:01.183 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:01.183 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.183 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.183 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:01.442 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:01.442 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:01.442 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:01.442 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.442 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.442 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:01.442 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:01.442 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.442 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.442 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:13:01.701 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:13:01.701 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:13:01.701 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:13:01.701 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.701 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.701 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:01.701 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:01.701 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.701 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.701 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:13:01.960 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:13:01.960 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:13:01.960 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:13:01.960 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.960 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.960 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:13:01.960 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:01.960 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.960 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:01.960 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:01.960 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:02.219 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:02.219 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:02.219 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r 
'.[] | .nbd_device' 00:13:02.219 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:02.219 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:02.219 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:02.219 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:02.219 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:02.220 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:02.220 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:02.220 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:02.220 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:02.220 07:23:35 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:02.220 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.220 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:02.220 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:02.220 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:02.220 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:02.220 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:02.220 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.220 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:02.220 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:02.220 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:02.220 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:02.220 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:02.220 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 
0 )) 00:13:02.220 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:02.220 07:23:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:13:02.479 /dev/nbd0 00:13:02.479 07:23:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:02.479 07:23:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:02.479 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:13:02.479 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:02.479 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:02.479 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:02.479 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:13:02.479 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:02.479 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:02.479 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:02.479 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.479 1+0 records in 00:13:02.479 1+0 records out 00:13:02.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298005 s, 13.7 MB/s 00:13:02.479 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.479 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:02.479 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.479 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:02.479 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:02.479 07:23:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:02.479 07:23:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:02.479 07:23:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:13:02.739 /dev/nbd1 00:13:02.739 07:23:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:02.739 07:23:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:02.739 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:13:02.739 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:02.739 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:02.739 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:02.739 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:13:02.739 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:02.739 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:02.739 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 
-- # (( i <= 20 )) 00:13:02.739 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.739 1+0 records in 00:13:02.739 1+0 records out 00:13:02.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308623 s, 13.3 MB/s 00:13:02.739 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.739 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:02.739 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.739 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:02.739 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:02.739 07:23:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:02.739 07:23:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:02.739 07:23:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:13:02.998 /dev/nbd10 00:13:02.998 07:23:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:02.998 07:23:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:02.998 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:13:02.998 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:02.998 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:02.998 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:02.998 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:13:02.998 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:02.998 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:02.998 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:02.998 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.998 1+0 records in 00:13:02.998 1+0 records out 00:13:02.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554831 s, 7.4 MB/s 00:13:02.999 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.999 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:02.999 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.999 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:02.999 07:23:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:02.999 07:23:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:02.999 07:23:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:02.999 07:23:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 
/dev/nbd11 00:13:03.258 /dev/nbd11 00:13:03.258 07:23:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:03.258 07:23:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:03.258 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:13:03.258 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:03.258 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:03.258 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:03.258 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:13:03.258 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:03.258 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:03.258 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:03.258 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.258 1+0 records in 00:13:03.258 1+0 records out 00:13:03.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631422 s, 6.5 MB/s 00:13:03.258 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.258 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:03.258 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.258 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:03.258 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:03.258 07:23:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:03.258 07:23:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:03.258 07:23:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:13:03.516 /dev/nbd12 00:13:03.516 07:23:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:03.516 07:23:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:03.516 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:13:03.516 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:03.516 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:03.516 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:03.516 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:13:03.516 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:03.516 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:03.516 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:03.516 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.516 1+0 records in 00:13:03.516 1+0 records 
out 00:13:03.516 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520893 s, 7.9 MB/s 00:13:03.516 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.516 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:03.516 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.516 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:03.516 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:03.516 07:23:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:03.516 07:23:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:03.516 07:23:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:13:03.774 /dev/nbd13 00:13:03.774 07:23:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:04.032 07:23:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:04.032 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:13:04.032 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:04.032 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:04.032 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:04.032 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:13:04.032 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:04.032 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:04.032 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:04.032 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.032 1+0 records in 00:13:04.032 1+0 records out 00:13:04.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449051 s, 9.1 MB/s 00:13:04.032 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.032 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:04.032 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.032 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:04.032 07:23:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:04.032 07:23:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.032 07:23:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:04.033 07:23:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:13:04.291 /dev/nbd14 00:13:04.291 07:23:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:13:04.291 07:23:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:13:04.291 07:23:38 
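Just before this second start round began, the script verified the teardown was complete: nbd_get_disks returned [] and the pipeline at nbd_common.sh@61-66 counted zero /dev/nbd entries. A sketch of that count helper, keeping the function name from the trace; the trailing || true is inferred from the bare true at @65 (grep -c exits non-zero when nothing matches, even though it still prints 0):

```bash
# Sketch of nbd_get_count (nbd_common.sh@61-66): ask the daemon for its
# current exports and count how many /dev/nbd device names come back.
nbd_get_count() {
    local rpc_server=$1 json
    json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
}

count=$(nbd_get_count /var/tmp/spdk-nbd.sock)   # trace: count=0 after teardown
```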
blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd14 00:13:04.291 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:04.291 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:04.291 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:04.291 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd14 /proc/partitions 00:13:04.291 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:04.291 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:04.291 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:04.291 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.291 1+0 records in 00:13:04.291 1+0 records out 00:13:04.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547098 s, 7.5 MB/s 00:13:04.291 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.291 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:04.291 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.291 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:04.291 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:04.291 07:23:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.291 07:23:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:04.291 07:23:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:13:04.550 /dev/nbd15 00:13:04.550 07:23:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:13:04.550 07:23:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:13:04.550 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd15 00:13:04.550 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:04.550 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:04.550 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:04.550 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd15 /proc/partitions 00:13:04.550 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:04.550 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:04.550 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:04.550 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.550 1+0 records in 00:13:04.550 1+0 records out 00:13:04.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053441 s, 7.7 MB/s 00:13:04.550 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.550 07:23:38 
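This pass differs from the first: blockdev.sh@323 re-runs the start loop through nbd_start_disks (nbd_common.sh@9-17), and nbd_start_disk is now invoked with an explicit device argument, pinning each bdev to a fixed /dev/nbdN instead of letting the daemon pick the next free node. A sketch of that loop with the exact pairings from the trace; waitfornbd is the readiness helper sketched earlier:

```bash
# Sketch of the pinned start loop (nbd_common.sh@14-17); both lists copied
# from the bdev_list/nbd_list arrays printed in the trace above.
bdev_list=(Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3
           Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0)
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13
          /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5
          /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9)
for ((i = 0; i < 16; i++)); do
    # Explicit device argument: the daemon must use this node, not choose one.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    waitfornbd "$(basename "${nbd_list[i]}")"
done
```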
blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:04.550 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.550 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:04.550 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:04.550 07:23:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.550 07:23:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:04.550 07:23:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:13:04.809 /dev/nbd2 00:13:04.809 07:23:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:13:04.809 07:23:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:13:04.809 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:13:04.809 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:04.809 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:04.809 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:04.809 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:13:04.809 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:04.809 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:04.809 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:04.809 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.809 1+0 records in 00:13:04.809 1+0 records out 00:13:04.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498298 s, 8.2 MB/s 00:13:04.809 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.809 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:04.809 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.809 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:04.809 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:04.809 07:23:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.809 07:23:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:04.809 07:23:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:13:05.069 /dev/nbd3 00:13:05.069 07:23:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:13:05.069 07:23:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:13:05.069 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:13:05.069 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:05.069 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( 
i = 1 )) 00:13:05.069 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:05.069 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:13:05.069 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:05.069 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:05.069 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:05.069 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.069 1+0 records in 00:13:05.069 1+0 records out 00:13:05.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631455 s, 6.5 MB/s 00:13:05.069 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.069 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:05.069 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.069 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:05.069 07:23:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:05.069 07:23:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.069 07:23:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:05.069 07:23:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:13:05.328 /dev/nbd4 00:13:05.328 07:23:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:13:05.328 07:23:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:13:05.328 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:13:05.328 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:05.328 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:05.328 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:05.328 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:13:05.328 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:05.328 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:05.328 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:05.328 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.587 1+0 records in 00:13:05.587 1+0 records out 00:13:05.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565231 s, 7.2 MB/s 00:13:05.587 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.587 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:05.587 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.587 07:23:39 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:05.587 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:05.587 07:23:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.587 07:23:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:05.587 07:23:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:13:05.587 /dev/nbd5 00:13:05.587 07:23:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:13:05.587 07:23:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:13:05.587 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:13:05.587 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:05.587 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:05.587 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:05.587 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:13:05.587 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:05.587 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:05.587 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:05.587 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.587 1+0 records in 00:13:05.587 1+0 records out 00:13:05.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000758434 s, 5.4 MB/s 00:13:05.588 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.588 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:05.588 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.588 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:05.588 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:05.588 07:23:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.588 07:23:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:05.588 07:23:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:13:05.856 /dev/nbd6 00:13:05.856 07:23:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:13:05.856 07:23:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:13:05.856 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd6 00:13:05.856 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:05.856 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:05.856 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:05.856 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd6 /proc/partitions 00:13:06.115 07:23:39 
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:06.115 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:06.115 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:06.115 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.115 1+0 records in 00:13:06.115 1+0 records out 00:13:06.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691583 s, 5.9 MB/s 00:13:06.115 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.115 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:06.115 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.115 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:06.115 07:23:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:06.115 07:23:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.115 07:23:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:06.115 07:23:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:13:06.374 /dev/nbd7 00:13:06.374 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:13:06.374 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:13:06.374 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd7 00:13:06.374 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:06.374 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:06.374 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:06.374 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd7 /proc/partitions 00:13:06.374 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:06.374 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:06.374 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:06.374 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.374 1+0 records in 00:13:06.374 1+0 records out 00:13:06.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595982 s, 6.9 MB/s 00:13:06.374 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.374 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:06.374 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.374 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:06.374 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:06.374 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.374 07:23:40 
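
Each nbd_start_disk above is followed by the same readiness cycle: poll /proc/partitions until the kernel lists the new device, then prove it is actually readable with a single 4 KiB O_DIRECT read. A minimal sketch of that waitfornbd helper, reconstructed from the trace; the retry interval and error handling are assumptions, only the grep/dd/stat sequence is taken from the log:

waitfornbd() {
    local nbd_name=$1 tmp i size
    tmp=$(mktemp)

    # Phase 1: wait (up to 20 tries) for the device to appear in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # interval assumed; the trace only shows the loop bounds
    done

    # Phase 2: one 4 KiB O_DIRECT read, mirroring the dd/stat/rm sequence above.
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2> /dev/null; then
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            [ "$size" != 0 ] && return 0
        fi
        sleep 0.1
    done
    rm -f "$tmp"
    return 1
}

waitfornbd nbd2   # e.g. right after nbd_start_disk Malloc2p5 /dev/nbd2
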
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:06.374 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:13:06.632 /dev/nbd8 00:13:06.632 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:13:06.632 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:13:06.632 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd8 00:13:06.632 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:06.632 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:06.632 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:06.632 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd8 /proc/partitions 00:13:06.632 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:06.632 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:06.632 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:06.632 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.632 1+0 records in 00:13:06.632 1+0 records out 00:13:06.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516891 s, 7.9 MB/s 00:13:06.632 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.632 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:06.632 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.632 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:06.632 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:06.632 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.632 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:06.632 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:13:06.632 /dev/nbd9 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd9 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd9 /proc/partitions 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:06.890 
07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.890 1+0 records in 00:13:06.890 1+0 records out 00:13:06.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119456 s, 3.4 MB/s 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd0", 00:13:06.890 "bdev_name": "Malloc0" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd1", 00:13:06.890 "bdev_name": "Malloc1p0" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd10", 00:13:06.890 "bdev_name": "Malloc1p1" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd11", 00:13:06.890 "bdev_name": "Malloc2p0" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd12", 00:13:06.890 "bdev_name": "Malloc2p1" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd13", 00:13:06.890 "bdev_name": "Malloc2p2" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd14", 00:13:06.890 "bdev_name": "Malloc2p3" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd15", 00:13:06.890 "bdev_name": "Malloc2p4" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd2", 00:13:06.890 "bdev_name": "Malloc2p5" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd3", 00:13:06.890 "bdev_name": "Malloc2p6" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd4", 00:13:06.890 "bdev_name": "Malloc2p7" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd5", 00:13:06.890 "bdev_name": "TestPT" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd6", 00:13:06.890 "bdev_name": "raid0" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd7", 00:13:06.890 "bdev_name": "concat0" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd8", 00:13:06.890 "bdev_name": "raid1" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd9", 00:13:06.890 "bdev_name": "AIO0" 00:13:06.890 } 00:13:06.890 ]' 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd0", 00:13:06.890 "bdev_name": "Malloc0" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd1", 00:13:06.890 
"bdev_name": "Malloc1p0" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd10", 00:13:06.890 "bdev_name": "Malloc1p1" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd11", 00:13:06.890 "bdev_name": "Malloc2p0" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd12", 00:13:06.890 "bdev_name": "Malloc2p1" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd13", 00:13:06.890 "bdev_name": "Malloc2p2" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd14", 00:13:06.890 "bdev_name": "Malloc2p3" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd15", 00:13:06.890 "bdev_name": "Malloc2p4" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd2", 00:13:06.890 "bdev_name": "Malloc2p5" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd3", 00:13:06.890 "bdev_name": "Malloc2p6" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd4", 00:13:06.890 "bdev_name": "Malloc2p7" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd5", 00:13:06.890 "bdev_name": "TestPT" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd6", 00:13:06.890 "bdev_name": "raid0" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd7", 00:13:06.890 "bdev_name": "concat0" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd8", 00:13:06.890 "bdev_name": "raid1" 00:13:06.890 }, 00:13:06.890 { 00:13:06.890 "nbd_device": "/dev/nbd9", 00:13:06.890 "bdev_name": "AIO0" 00:13:06.890 } 00:13:06.890 ]' 00:13:06.890 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:07.148 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:07.148 /dev/nbd1 00:13:07.148 /dev/nbd10 00:13:07.148 /dev/nbd11 00:13:07.148 /dev/nbd12 00:13:07.148 /dev/nbd13 00:13:07.148 /dev/nbd14 00:13:07.148 /dev/nbd15 00:13:07.148 /dev/nbd2 00:13:07.148 /dev/nbd3 00:13:07.148 /dev/nbd4 00:13:07.148 /dev/nbd5 00:13:07.148 /dev/nbd6 00:13:07.148 /dev/nbd7 00:13:07.148 /dev/nbd8 00:13:07.148 /dev/nbd9' 00:13:07.148 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:07.148 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:07.148 /dev/nbd1 00:13:07.148 /dev/nbd10 00:13:07.148 /dev/nbd11 00:13:07.148 /dev/nbd12 00:13:07.148 /dev/nbd13 00:13:07.148 /dev/nbd14 00:13:07.148 /dev/nbd15 00:13:07.148 /dev/nbd2 00:13:07.148 /dev/nbd3 00:13:07.148 /dev/nbd4 00:13:07.148 /dev/nbd5 00:13:07.148 /dev/nbd6 00:13:07.148 /dev/nbd7 00:13:07.148 /dev/nbd8 00:13:07.148 /dev/nbd9' 00:13:07.148 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:13:07.148 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:13:07.148 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:13:07.148 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:13:07.148 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:13:07.148 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' 
'/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:07.148 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:07.148 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:07.148 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:07.148 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:07.148 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:07.148 256+0 records in 00:13:07.148 256+0 records out 00:13:07.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00815739 s, 129 MB/s 00:13:07.148 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.148 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:07.148 256+0 records in 00:13:07.148 256+0 records out 00:13:07.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151453 s, 6.9 MB/s 00:13:07.148 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.148 07:23:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:07.406 256+0 records in 00:13:07.406 256+0 records out 00:13:07.406 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156972 s, 6.7 MB/s 00:13:07.406 07:23:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.406 07:23:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:07.406 256+0 records in 00:13:07.406 256+0 records out 00:13:07.406 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154866 s, 6.8 MB/s 00:13:07.406 07:23:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.406 07:23:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:07.665 256+0 records in 00:13:07.665 256+0 records out 00:13:07.665 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154382 s, 6.8 MB/s 00:13:07.665 07:23:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.665 07:23:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:07.923 256+0 records in 00:13:07.923 256+0 records out 00:13:07.923 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153913 s, 6.8 MB/s 00:13:07.923 07:23:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.923 07:23:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:07.923 256+0 records in 00:13:07.923 256+0 records out 00:13:07.923 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153794 s, 6.8 MB/s 00:13:07.923 07:23:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.923 07:23:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 
count=256 oflag=direct 00:13:08.181 256+0 records in 00:13:08.181 256+0 records out 00:13:08.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15765 s, 6.7 MB/s 00:13:08.181 07:23:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:08.181 07:23:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:13:08.439 256+0 records in 00:13:08.439 256+0 records out 00:13:08.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156453 s, 6.7 MB/s 00:13:08.439 07:23:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:08.439 07:23:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:13:08.439 256+0 records in 00:13:08.439 256+0 records out 00:13:08.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156742 s, 6.7 MB/s 00:13:08.439 07:23:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:08.439 07:23:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:13:08.698 256+0 records in 00:13:08.698 256+0 records out 00:13:08.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15381 s, 6.8 MB/s 00:13:08.698 07:23:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:08.698 07:23:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:13:08.698 256+0 records in 00:13:08.698 256+0 records out 00:13:08.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15418 s, 6.8 MB/s 00:13:08.698 07:23:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:08.698 07:23:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:13:08.957 256+0 records in 00:13:08.957 256+0 records out 00:13:08.957 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155942 s, 6.7 MB/s 00:13:08.957 07:23:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:08.957 07:23:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:13:09.226 256+0 records in 00:13:09.226 256+0 records out 00:13:09.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157762 s, 6.6 MB/s 00:13:09.226 07:23:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:09.226 07:23:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:13:09.226 256+0 records in 00:13:09.226 256+0 records out 00:13:09.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157506 s, 6.7 MB/s 00:13:09.226 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:09.226 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:13:09.484 256+0 records in 00:13:09.484 256+0 records out 00:13:09.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158401 s, 6.6 MB/s 00:13:09.484 07:23:43 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:09.484 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:13:09.742 256+0 records in 00:13:09.742 256+0 records out 00:13:09.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.196545 s, 5.3 MB/s 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:13:09.743 07:23:43 
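
The dd runs above implement the write half of nbd_dd_data_verify, and the cmp loop that follows is the read-back half: 1 MiB of random data is pushed to every exported device with O_DIRECT, then byte-compared against the source file. A condensed sketch of that flow, with the paths and device list taken from the trace and the error handling assumed:

tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13
          /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5
          /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9)

# 256 x 4 KiB = 1 MiB of random payload.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

# Write pass: push the payload to each device, bypassing the page cache.
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# Verify pass: byte-compare the first 1 MiB of each device against the file.
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev" || echo "data mismatch on $dev" >&2
done
rm "$tmp_file"
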
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.743 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:10.001 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:10.001 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:10.001 07:23:43 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:10.001 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.001 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.001 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:10.001 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.001 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.001 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.001 07:23:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:10.567 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:10.567 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:10.567 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:10.567 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.567 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.567 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:10.567 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.567 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.567 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.567 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:10.567 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:10.567 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:10.567 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:10.567 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.567 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.567 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:10.567 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.567 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.567 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.568 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:10.827 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:10.827 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:10.827 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:10.827 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.827 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.827 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:10.827 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.827 
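
Teardown mirrors setup: each nbd_stop_disk RPC above is followed by a waitfornbd_exit poll that waits for the kernel device to drop out of /proc/partitions. A sketch of that pattern, assuming the same polling interval as the readiness check; the 20-try bound, the RPC invocation, and the grep are taken from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || break   # device gone: done
        sleep 0.1   # interval assumed
    done
    return 0
}

for dev in /dev/nbd0 /dev/nbd1; do   # the trace walks all 16 devices
    "$rpc" -s "$sock" nbd_stop_disk "$dev"
    waitfornbd_exit "$(basename "$dev")"
done
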
07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.827 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.827 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:11.395 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:11.395 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:11.395 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:11.395 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.395 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.395 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:11.395 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:11.395 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.395 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.395 07:23:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:11.395 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:11.654 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:11.654 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:11.654 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.654 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.654 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:11.654 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:11.654 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.654 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.654 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:13:11.654 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:13:11.654 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:13:11.654 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:13:11.654 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.654 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.654 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:11.654 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:11.654 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.654 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.654 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:13:11.913 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd15 00:13:11.913 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:13:11.913 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:13:11.913 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.913 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.913 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:13:11.913 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:11.913 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.913 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.913 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:12.172 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:12.172 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:12.172 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:12.172 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.172 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.172 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:12.172 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:12.172 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.172 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.172 07:23:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:12.430 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:12.430 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:12.430 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:12.430 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.430 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.430 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:12.430 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:12.430 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.430 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.430 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:12.689 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:12.689 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:12.689 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:12.689 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.689 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.689 07:23:46 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:12.689 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:12.689 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.689 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.689 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:12.689 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:12.689 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:12.689 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:12.689 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.689 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.689 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:12.689 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:12.689 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.689 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.689 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:13.277 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:13.277 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:13.277 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:13:13.277 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.277 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.277 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:13.277 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.277 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.277 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.277 07:23:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:13:13.277 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:13:13.277 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:13:13.277 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:13:13.277 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.277 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.277 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:13:13.278 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.278 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.278 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.278 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:13:13.537 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:13:13.537 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:13:13.537 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:13:13.537 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.537 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.537 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:13:13.537 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.537 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.537 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:13.537 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:13:13.795 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:13:13.795 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:13:13.795 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:13:13.795 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:13.795 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:13.795 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:13:13.795 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:13.795 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:13.795 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:13.795 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:13.795 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:14.053 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:14.053 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:14.053 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:14.053 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:14.053 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:14.053 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:14.053 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:14.053 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:14.053 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:14.053 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:14.053 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:14.053 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:14.053 07:23:47 blockdev_general.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 
/dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:14.053 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:14.053 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:14.053 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:13:14.053 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:13:14.053 07:23:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:14.312 malloc_lvol_verify 00:13:14.312 07:23:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:14.570 cfa7c870-ca34-459b-960e-221c67bd2b96 00:13:14.570 07:23:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:14.829 dfb3c6bc-b4d9-4bcb-be57-c57dbf09a317 00:13:14.829 07:23:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:15.087 /dev/nbd0 00:13:15.087 07:23:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:13:15.087 mke2fs 1.46.5 (30-Dec-2021) 00:13:15.087 00:13:15.087 Filesystem too small for a journal 00:13:15.087 Discarding device blocks: 0/1024 done 00:13:15.087 Creating filesystem with 1024 4k blocks and 1024 inodes 00:13:15.087 00:13:15.087 Allocating group tables: 0/1 done 00:13:15.087 Writing inode tables: 0/1 done 00:13:15.087 Writing superblocks and filesystem accounting information: 0/1 done 00:13:15.087 00:13:15.087 07:23:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:13:15.087 07:23:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:15.087 07:23:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:15.087 07:23:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:15.087 07:23:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:15.087 07:23:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:15.087 07:23:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.087 07:23:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:15.346 07:23:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:15.346 07:23:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:15.346 07:23:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:15.346 07:23:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.346 07:23:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.346 07:23:49 blockdev_general.bdev_nbd 
-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:15.346 07:23:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:15.346 07:23:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.346 07:23:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:13:15.346 07:23:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:13:15.346 07:23:49 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 127926 00:13:15.346 07:23:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 127926 ']' 00:13:15.346 07:23:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 127926 00:13:15.346 07:23:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:13:15.346 07:23:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:15.346 07:23:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 127926 00:13:15.346 killing process with pid 127926 00:13:15.346 07:23:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:15.346 07:23:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:15.346 07:23:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 127926' 00:13:15.346 07:23:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@965 -- # kill 127926 00:13:15.346 07:23:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@970 -- # wait 127926 00:13:15.914 ************************************ 00:13:15.914 END TEST bdev_nbd 00:13:15.914 ************************************ 00:13:15.914 07:23:49 blockdev_general.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:13:15.914 00:13:15.914 real 0m23.889s 00:13:15.914 user 0m30.911s 00:13:15.914 sys 0m11.241s 00:13:15.914 07:23:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:15.914 07:23:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:15.914 07:23:49 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:13:15.914 07:23:49 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:13:15.914 07:23:49 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:13:15.914 07:23:49 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:13:15.914 07:23:49 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:15.914 07:23:49 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:15.914 07:23:49 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:15.914 ************************************ 00:13:15.914 START TEST bdev_fio 00:13:15.914 ************************************ 00:13:15.914 07:23:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:13:15.914 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:13:15.914 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:15.914 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:15.914 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:15.914 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:13:15.914 07:23:49 
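
Before the suite exits it runs nbd_with_lvol_verify, the sequence traced just above: stack a malloc bdev, an lvstore, and a small lvol over RPC, export the lvol as /dev/nbd0, and prove the whole stack is writable by putting ext4 on it (hence the "Filesystem too small for a journal" notice from mke2fs). A sketch of that flow; the RPC names, arguments, and device path are from the trace, the glue around them is paraphrased:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # backing bdev; 16 and 512 as given in the trace
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # lvol named "lvol", size 4, in lvstore "lvs"
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0

mkfs.ext4 /dev/nbd0
mkfs_ret=$?

"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
[ "$mkfs_ret" -ne 0 ] && echo "lvol verify failed" >&2
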
blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:13:15.914 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:13:15.914 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:15.914 07:23:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:15.914 07:23:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:13:15.914 07:23:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:13:15.914 07:23:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:13:15.914 07:23:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:13:15.914 07:23:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:15.914 07:23:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:13:15.914 07:23:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:13:15.914 07:23:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:15.914 07:23:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:13:15.914 07:23:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:13:15.914 07:23:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:13:15.914 07:23:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:13:15.914 07:23:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo 
'[job_Malloc2p1]' 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 
'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:16.173 07:23:49 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:16.173 ************************************ 00:13:16.173 START TEST bdev_fio_rw_verify 00:13:16.173 ************************************ 00:13:16.173 07:23:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.173 07:23:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.173 07:23:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:13:16.173 07:23:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:16.173 07:23:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:13:16.173 07:23:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:16.173 07:23:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:13:16.173 07:23:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:13:16.173 07:23:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:13:16.173 07:23:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:16.173 07:23:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:13:16.173 07:23:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:13:16.173 07:23:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:13:16.173 07:23:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:13:16.173 07:23:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # 
break 00:13:16.173 07:23:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:16.173 07:23:49 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.432 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.432 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.432 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.432 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.432 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.432 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.432 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.432 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.432 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.432 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.432 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.432 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.432 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.432 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.432 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.432 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.432 fio-3.35 00:13:16.432 Starting 16 threads 00:13:28.676 00:13:28.676 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=129068: Fri Jul 12 07:24:01 2024 00:13:28.676 read: IOPS=68.0k, BW=266MiB/s (279MB/s)(2657MiB/10004msec) 00:13:28.676 slat (nsec): min=1885, max=36419k, avg=45230.76, stdev=507466.47 00:13:28.676 clat (usec): min=8, max=44226, avg=358.09, stdev=1430.89 00:13:28.676 lat (usec): min=25, max=44251, avg=403.32, stdev=1517.17 00:13:28.676 clat percentiles (usec): 00:13:28.676 | 50.000th=[ 212], 99.000th=[ 1549], 99.900th=[16450], 99.990th=[28181], 00:13:28.676 | 99.999th=[44303] 00:13:28.676 write: IOPS=108k, BW=422MiB/s (443MB/s)(4169MiB/9876msec); 0 zone resets 00:13:28.676 slat (usec): min=7, max=60611, avg=72.20, stdev=711.28 00:13:28.676 clat (usec): min=9, max=68259, avg=436.60, stdev=1673.90 00:13:28.676 lat (usec): 
min=34, max=68315, avg=508.81, stdev=1818.84 00:13:28.676 clat percentiles (usec): 00:13:28.676 | 50.000th=[ 253], 99.000th=[ 8848], 99.900th=[20579], 99.990th=[38536], 00:13:28.676 | 99.999th=[52691] 00:13:28.676 bw ( KiB/s): min=267024, max=675080, per=99.41%, avg=429715.16, stdev=7178.68, samples=304 00:13:28.676 iops : min=66756, max=168770, avg=107428.58, stdev=1794.65, samples=304 00:13:28.676 lat (usec) : 10=0.01%, 20=0.01%, 50=0.48%, 100=7.35%, 250=47.08% 00:13:28.676 lat (usec) : 500=41.03%, 750=2.65%, 1000=0.13% 00:13:28.676 lat (msec) : 2=0.08%, 4=0.08%, 10=0.20%, 20=0.82%, 50=0.09% 00:13:28.676 lat (msec) : 100=0.01% 00:13:28.676 cpu : usr=56.34%, sys=2.26%, ctx=260005, majf=2, minf=84326 00:13:28.676 IO depths : 1=11.0%, 2=23.4%, 4=52.4%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:28.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.676 complete : 0=0.0%, 4=88.8%, 8=11.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.676 issued rwts: total=680299,1067235,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.676 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:28.676 00:13:28.676 Run status group 0 (all jobs): 00:13:28.676 READ: bw=266MiB/s (279MB/s), 266MiB/s-266MiB/s (279MB/s-279MB/s), io=2657MiB (2787MB), run=10004-10004msec 00:13:28.676 WRITE: bw=422MiB/s (443MB/s), 422MiB/s-422MiB/s (443MB/s-443MB/s), io=4169MiB (4371MB), run=9876-9876msec 00:13:28.676 ----------------------------------------------------- 00:13:28.676 Suppressions used: 00:13:28.676 count bytes template 00:13:28.676 16 140 /usr/src/fio/parse.c 00:13:28.676 8684 833664 /usr/src/fio/iolog.c 00:13:28.676 1 904 libcrypto.so 00:13:28.676 ----------------------------------------------------- 00:13:28.676 00:13:28.676 ************************************ 00:13:28.676 END TEST bdev_fio_rw_verify 00:13:28.676 ************************************ 00:13:28.676 00:13:28.676 real 0m12.278s 00:13:28.676 user 1m33.336s 00:13:28.676 sys 0m4.608s 00:13:28.676 07:24:02 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:28.676 07:24:02 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:28.676 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:13:28.676 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:28.676 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:28.676 07:24:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:28.676 07:24:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:13:28.676 07:24:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type= 00:13:28.676 07:24:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:13:28.676 07:24:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:13:28.676 07:24:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:28.676 07:24:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:13:28.676 07:24:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:13:28.676 07:24:02 
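Here fio_config_gen runs a second time with workload=trim and no bdev_type; as the trace just below shows, the only workload-specific option it has to emit is rw=trimwrite. A sketch of that branch, using the workload variable name from the xtrace (the surrounding structure of the helper is assumed):

    # common/autotest_common.sh, fio_config_gen workload branch (sketch)
    if [[ "$workload" == verify ]]; then
        echo "rw=randwrite"      # plus verify options, as in the first pass
    elif [[ "$workload" == trim ]]; then
        echo "rw=trimwrite"      # fio trims each block, then writes it back
    fi >> "$config_file"

With rw=trimwrite every job issues paired trim+write sequences over the same blocks, which is why the issued rwts line at the end of this pass shows matching write and trim counts (1303638 vs 1303639).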
blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:28.676 07:24:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:13:28.676 07:24:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:13:28.676 07:24:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:13:28.676 07:24:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:13:28.676 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:28.677 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "ae26b3ac-3126-4c58-9364-da0f99b507e7"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ae26b3ac-3126-4c58-9364-da0f99b507e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "fff08bb1-d87e-5755-bdf9-8fe63a4cb90b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "fff08bb1-d87e-5755-bdf9-8fe63a4cb90b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "c372025b-7de5-5218-9dac-f7a6e7e3a780"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "c372025b-7de5-5218-9dac-f7a6e7e3a780",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b266a00a-063f-5e9d-b29c-6eb5053a54ae"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b266a00a-063f-5e9d-b29c-6eb5053a54ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "7fac3260-8cb4-5086-8b4f-d6a5a8a04ef5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7fac3260-8cb4-5086-8b4f-d6a5a8a04ef5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "ab234728-bfdb-56a2-b345-9367b92cf9e1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ab234728-bfdb-56a2-b345-9367b92cf9e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "842463dd-c2f0-5fbe-b25f-a7383fccc066"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "842463dd-c2f0-5fbe-b25f-a7383fccc066",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "cbfd6457-5475-5cb4-80e1-6f283dca5497"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cbfd6457-5475-5cb4-80e1-6f283dca5497",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "3889612e-6409-59cf-805b-53f06a410392"' ' ],' ' "product_name": "Split Disk",' ' 
"block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3889612e-6409-59cf-805b-53f06a410392",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "75e3a63a-f4e1-500e-9c46-00bfaae01314"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "75e3a63a-f4e1-500e-9c46-00bfaae01314",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "a3ac040b-4963-5e80-b6ee-65d2d7d77fef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a3ac040b-4963-5e80-b6ee-65d2d7d77fef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "d4369962-7c86-5b17-9f6d-b22e4295f69a"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d4369962-7c86-5b17-9f6d-b22e4295f69a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "1a8fd762-17bf-4486-8d9f-7526056536cb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "1a8fd762-17bf-4486-8d9f-7526056536cb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "1a8fd762-17bf-4486-8d9f-7526056536cb",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "a35e9153-d79d-40da-ba47-096d2e214dda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "74a8028e-5cb7-4d39-884c-abfbd781427f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "921d64ca-4dbe-4217-b026-7160169585fe"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "921d64ca-4dbe-4217-b026-7160169585fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "921d64ca-4dbe-4217-b026-7160169585fe",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "9b9fbf8c-8dce-4975-92eb-99acf8c908b3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "405b9d40-a1c9-42c2-bf47-d58fe381d6e0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "fcc40bac-6136-4d89-bc4b-86f4d61b9d1d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fcc40bac-6136-4d89-bc4b-86f4d61b9d1d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "fcc40bac-6136-4d89-bc4b-86f4d61b9d1d",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "9143da10-89de-4bdb-aa40-f1c56c868972",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "a44e6051-4b7a-4477-80cd-87ab057dffeb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "a5a00b8f-9a6b-41aa-898d-9ce526be531c"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "a5a00b8f-9a6b-41aa-898d-9ce526be531c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:13:28.677 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:13:28.677 Malloc1p0 00:13:28.677 Malloc1p1 00:13:28.677 Malloc2p0 00:13:28.677 Malloc2p1 00:13:28.677 Malloc2p2 00:13:28.677 Malloc2p3 00:13:28.677 Malloc2p4 00:13:28.677 Malloc2p5 00:13:28.677 Malloc2p6 00:13:28.677 Malloc2p7 00:13:28.677 TestPT 00:13:28.677 raid0 00:13:28.677 concat0 ]] 00:13:28.677 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:28.678 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "ae26b3ac-3126-4c58-9364-da0f99b507e7"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ae26b3ac-3126-4c58-9364-da0f99b507e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "fff08bb1-d87e-5755-bdf9-8fe63a4cb90b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "fff08bb1-d87e-5755-bdf9-8fe63a4cb90b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "c372025b-7de5-5218-9dac-f7a6e7e3a780"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "c372025b-7de5-5218-9dac-f7a6e7e3a780",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b266a00a-063f-5e9d-b29c-6eb5053a54ae"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b266a00a-063f-5e9d-b29c-6eb5053a54ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "7fac3260-8cb4-5086-8b4f-d6a5a8a04ef5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7fac3260-8cb4-5086-8b4f-d6a5a8a04ef5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "ab234728-bfdb-56a2-b345-9367b92cf9e1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ab234728-bfdb-56a2-b345-9367b92cf9e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "842463dd-c2f0-5fbe-b25f-a7383fccc066"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 
512,' ' "num_blocks": 8192,' ' "uuid": "842463dd-c2f0-5fbe-b25f-a7383fccc066",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "cbfd6457-5475-5cb4-80e1-6f283dca5497"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cbfd6457-5475-5cb4-80e1-6f283dca5497",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "3889612e-6409-59cf-805b-53f06a410392"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3889612e-6409-59cf-805b-53f06a410392",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "75e3a63a-f4e1-500e-9c46-00bfaae01314"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "75e3a63a-f4e1-500e-9c46-00bfaae01314",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "a3ac040b-4963-5e80-b6ee-65d2d7d77fef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a3ac040b-4963-5e80-b6ee-65d2d7d77fef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' 
"nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "d4369962-7c86-5b17-9f6d-b22e4295f69a"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d4369962-7c86-5b17-9f6d-b22e4295f69a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "1a8fd762-17bf-4486-8d9f-7526056536cb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "1a8fd762-17bf-4486-8d9f-7526056536cb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "1a8fd762-17bf-4486-8d9f-7526056536cb",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "a35e9153-d79d-40da-ba47-096d2e214dda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "74a8028e-5cb7-4d39-884c-abfbd781427f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "921d64ca-4dbe-4217-b026-7160169585fe"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "921d64ca-4dbe-4217-b026-7160169585fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 
1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "921d64ca-4dbe-4217-b026-7160169585fe",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "9b9fbf8c-8dce-4975-92eb-99acf8c908b3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "405b9d40-a1c9-42c2-bf47-d58fe381d6e0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "fcc40bac-6136-4d89-bc4b-86f4d61b9d1d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fcc40bac-6136-4d89-bc4b-86f4d61b9d1d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "fcc40bac-6136-4d89-bc4b-86f4d61b9d1d",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "9143da10-89de-4bdb-aa40-f1c56c868972",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "a44e6051-4b7a-4477-80cd-87ab057dffeb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "a5a00b8f-9a6b-41aa-898d-9ce526be531c"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "a5a00b8f-9a6b-41aa-898d-9ce526be531c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:13:28.678 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:28.678 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:13:28.678 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 
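The jq filter threaded through this loop is what shrinks the trim run from 16 bdevs to 14: in the dump above, raid1 and AIO0 both report "unmap": false, so no job stanza is generated for them. The same selection can be reproduced against a live target — bdev_get_bdevs is the stock SPDK RPC; the socket path shown is the default and an assumption here:

    # list trim-capable bdevs on a running SPDK app (sketch)
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs \
        | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'

bdev_get_bdevs returns a JSON array, hence the extra .[] compared with the filter in the trace, which consumes a bare stream of objects from printf.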
00:13:28.678 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:28.678 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 
'select(.supported_io_types.unmap == true) | .name') 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:28.679 07:24:02 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:28.679 ************************************ 00:13:28.679 START TEST bdev_fio_trim 00:13:28.679 ************************************ 00:13:28.679 07:24:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:28.679 07:24:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:28.679 07:24:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:13:28.679 07:24:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:28.679 07:24:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # local sanitizers 00:13:28.679 07:24:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:28.679 07:24:02 
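Before launching fio, fio_plugin walks its sanitizer list and resolves the ASan runtime linked into the spdk_bdev plugin, so the runtime can be preloaded ahead of the plugin itself. Condensed from the xtrace around this point:

    # condensed from common/autotest_common.sh fio_plugin (per the xtrace)
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    for sanitizer in libasan libclang_rt.asan; do
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && break    # here: /lib/x86_64-linux-gnu/libasan.so.6
    done
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"

The sanitizer runtime comes first in LD_PRELOAD so its interceptors are bound before the plugin's own symbols load.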
blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # shift 00:13:28.679 07:24:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local asan_lib= 00:13:28.679 07:24:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:13:28.679 07:24:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # grep libasan 00:13:28.679 07:24:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:28.679 07:24:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:13:28.679 07:24:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:13:28.679 07:24:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:13:28.679 07:24:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # break 00:13:28.679 07:24:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:28.679 07:24:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:28.939 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:28.939 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:28.939 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:28.939 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:28.939 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:28.939 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:28.939 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:28.939 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:28.939 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:28.939 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:28.939 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:28.939 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:28.939 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:28.939 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 
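Each surviving bdev gets one trimwrite job at iodepth=8; counting the dump with the very filter used above predicts the thread total fio prints next:

    # 16 bdevs dumped, minus raid1 and AIO0 (no unmap support)
    printf '%s\n' "${bdevs[@]}" \
        | jq -r 'select(.supported_io_types.unmap == true) | .name' | wc -l   # -> 14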
00:13:28.939 fio-3.35 00:13:28.939 Starting 14 threads 00:13:41.142 00:13:41.142 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=129278: Fri Jul 12 07:24:13 2024 00:13:41.142 write: IOPS=130k, BW=509MiB/s (533MB/s)(5092MiB/10010msec); 0 zone resets 00:13:41.142 slat (nsec): min=1994, max=38048k, avg=37983.89, stdev=403478.46 00:13:41.142 clat (usec): min=17, max=38277, avg=265.94, stdev=1091.18 00:13:41.142 lat (usec): min=33, max=38307, avg=303.92, stdev=1163.19 00:13:41.142 clat percentiles (usec): 00:13:41.142 | 50.000th=[ 182], 99.000th=[ 453], 99.900th=[16319], 99.990th=[22414], 00:13:41.142 | 99.999th=[28443] 00:13:41.142 bw ( KiB/s): min=334545, max=741576, per=100.00%, avg=522640.67, stdev=9463.51, samples=267 00:13:41.142 iops : min=83636, max=185394, avg=130660.22, stdev=2365.87, samples=267 00:13:41.142 trim: IOPS=130k, BW=509MiB/s (533MB/s)(5092MiB/10010msec); 0 zone resets 00:13:41.142 slat (usec): min=4, max=32027, avg=28.09, stdev=360.08 00:13:41.143 clat (usec): min=3, max=38307, avg=292.37, stdev=1123.37 00:13:41.143 lat (usec): min=12, max=38328, avg=320.46, stdev=1179.70 00:13:41.143 clat percentiles (usec): 00:13:41.143 | 50.000th=[ 206], 99.000th=[ 437], 99.900th=[16319], 99.990th=[24249], 00:13:41.143 | 99.999th=[28443] 00:13:41.143 bw ( KiB/s): min=334545, max=741576, per=100.00%, avg=522640.67, stdev=9463.54, samples=267 00:13:41.143 iops : min=83636, max=185394, avg=130660.22, stdev=2365.87, samples=267 00:13:41.143 lat (usec) : 4=0.01%, 10=0.04%, 20=0.13%, 50=0.83%, 100=6.40% 00:13:41.143 lat (usec) : 250=67.64%, 500=24.11%, 750=0.21%, 1000=0.05% 00:13:41.143 lat (msec) : 2=0.03%, 4=0.01%, 10=0.04%, 20=0.47%, 50=0.02% 00:13:41.143 cpu : usr=69.18%, sys=0.43%, ctx=172553, majf=0, minf=8902 00:13:41.143 IO depths : 1=12.4%, 2=24.8%, 4=50.1%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:41.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.143 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.143 issued rwts: total=0,1303638,1303639,0 short=0,0,0,0 dropped=0,0,0,0 00:13:41.143 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:41.143 00:13:41.143 Run status group 0 (all jobs): 00:13:41.143 WRITE: bw=509MiB/s (533MB/s), 509MiB/s-509MiB/s (533MB/s-533MB/s), io=5092MiB (5340MB), run=10010-10010msec 00:13:41.143 TRIM: bw=509MiB/s (533MB/s), 509MiB/s-509MiB/s (533MB/s-533MB/s), io=5092MiB (5340MB), run=10010-10010msec 00:13:41.143 ----------------------------------------------------- 00:13:41.143 Suppressions used: 00:13:41.143 count bytes template 00:13:41.143 14 129 /usr/src/fio/parse.c 00:13:41.143 1 904 libcrypto.so 00:13:41.143 ----------------------------------------------------- 00:13:41.143 00:13:41.143 ************************************ 00:13:41.143 END TEST bdev_fio_trim 00:13:41.143 ************************************ 00:13:41.143 00:13:41.143 real 0m11.909s 00:13:41.143 user 1m39.477s 00:13:41.143 sys 0m1.582s 00:13:41.143 07:24:14 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:41.143 07:24:14 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:13:41.143 07:24:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:13:41.143 07:24:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:41.143 07:24:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:13:41.143 /home/vagrant/spdk_repo/spdk 00:13:41.143 07:24:14 
blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:13:41.143 00:13:41.143 real 0m24.587s 00:13:41.143 user 3m13.011s 00:13:41.143 sys 0m6.355s 00:13:41.143 07:24:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:41.143 07:24:14 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:41.143 ************************************ 00:13:41.143 END TEST bdev_fio 00:13:41.143 ************************************ 00:13:41.143 07:24:14 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:41.143 07:24:14 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:41.143 07:24:14 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:13:41.143 07:24:14 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:41.143 07:24:14 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:41.143 ************************************ 00:13:41.143 START TEST bdev_verify 00:13:41.143 ************************************ 00:13:41.143 07:24:14 blockdev_general.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:41.143 [2024-07-12 07:24:14.475161] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:41.143 [2024-07-12 07:24:14.475736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129446 ] 00:13:41.143 [2024-07-12 07:24:14.625479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:41.143 [2024-07-12 07:24:14.703747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.143 [2024-07-12 07:24:14.703757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.143 [2024-07-12 07:24:14.886419] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:41.143 [2024-07-12 07:24:14.886685] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:41.143 [2024-07-12 07:24:14.894344] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:41.143 [2024-07-12 07:24:14.894490] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:41.143 [2024-07-12 07:24:14.902429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:41.143 [2024-07-12 07:24:14.902570] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:41.143 [2024-07-12 07:24:14.902715] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:41.143 [2024-07-12 07:24:15.019194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:41.143 [2024-07-12 07:24:15.019546] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.143 [2024-07-12 07:24:15.019665] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:41.143 [2024-07-12 07:24:15.019867] 
vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.143 [2024-07-12 07:24:15.023177] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.143 [2024-07-12 07:24:15.023375] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:41.711 Running I/O for 5 seconds... 00:13:46.978 00:13:46.978 Latency(us) 00:13:46.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.978 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x0 length 0x1000 00:13:46.978 Malloc0 : 5.13 1423.32 5.56 0.00 0.00 89786.12 553.94 209715.20 00:13:46.978 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x1000 length 0x1000 00:13:46.978 Malloc0 : 5.12 1400.77 5.47 0.00 0.00 91238.61 628.05 331549.74 00:13:46.978 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x0 length 0x800 00:13:46.978 Malloc1p0 : 5.13 723.82 2.83 0.00 0.00 176077.88 2980.33 199728.76 00:13:46.978 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x800 length 0x800 00:13:46.978 Malloc1p0 : 5.12 725.09 2.83 0.00 0.00 175779.43 2933.52 189742.32 00:13:46.978 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x0 length 0x800 00:13:46.978 Malloc1p1 : 5.13 723.53 2.83 0.00 0.00 175748.51 2855.50 195734.19 00:13:46.978 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x800 length 0x800 00:13:46.978 Malloc1p1 : 5.12 724.82 2.83 0.00 0.00 175444.03 2871.10 186746.39 00:13:46.978 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x0 length 0x200 00:13:46.978 Malloc2p0 : 5.13 723.26 2.83 0.00 0.00 175422.59 2902.31 191739.61 00:13:46.978 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x200 length 0x200 00:13:46.978 Malloc2p0 : 5.12 724.54 2.83 0.00 0.00 175104.24 2886.70 182751.82 00:13:46.978 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x0 length 0x200 00:13:46.978 Malloc2p1 : 5.13 723.00 2.82 0.00 0.00 175052.03 2902.31 186746.39 00:13:46.978 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x200 length 0x200 00:13:46.978 Malloc2p1 : 5.13 724.23 2.83 0.00 0.00 174761.11 2886.70 178757.24 00:13:46.978 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x0 length 0x200 00:13:46.978 Malloc2p2 : 5.14 722.74 2.82 0.00 0.00 174724.15 2855.50 182751.82 00:13:46.978 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x200 length 0x200 00:13:46.978 Malloc2p2 : 5.13 723.91 2.83 0.00 0.00 174432.08 2855.50 173764.02 00:13:46.978 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x0 length 0x200 00:13:46.978 Malloc2p3 : 5.14 722.47 2.82 0.00 0.00 174371.52 2980.33 179755.89 00:13:46.978 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 
00:13:46.978 Verification LBA range: start 0x200 length 0x200 00:13:46.978 Malloc2p3 : 5.13 723.63 2.83 0.00 0.00 174084.04 2964.72 169769.45 00:13:46.978 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x0 length 0x200 00:13:46.978 Malloc2p4 : 5.14 722.21 2.82 0.00 0.00 174036.65 2933.52 175761.31 00:13:46.978 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x200 length 0x200 00:13:46.978 Malloc2p4 : 5.13 723.35 2.83 0.00 0.00 173754.30 2917.91 165774.87 00:13:46.978 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x0 length 0x200 00:13:46.978 Malloc2p5 : 5.14 721.95 2.82 0.00 0.00 173683.05 2949.12 172765.38 00:13:46.978 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x200 length 0x200 00:13:46.978 Malloc2p5 : 5.23 734.40 2.87 0.00 0.00 170803.36 2949.12 162778.94 00:13:46.978 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x0 length 0x200 00:13:46.978 Malloc2p6 : 5.23 734.31 2.87 0.00 0.00 170423.55 2933.52 167772.16 00:13:46.978 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x200 length 0x200 00:13:46.978 Malloc2p6 : 5.23 734.11 2.87 0.00 0.00 170469.19 2902.31 158784.37 00:13:46.978 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x0 length 0x200 00:13:46.978 Malloc2p7 : 5.23 734.02 2.87 0.00 0.00 170087.90 2933.52 163777.58 00:13:46.978 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x200 length 0x200 00:13:46.978 Malloc2p7 : 5.23 733.84 2.87 0.00 0.00 170135.58 2933.52 153791.15 00:13:46.978 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x0 length 0x1000 00:13:46.978 TestPT : 5.23 714.18 2.79 0.00 0.00 173921.56 10673.01 164776.23 00:13:46.978 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x1000 length 0x1000 00:13:46.978 TestPT : 5.24 708.95 2.77 0.00 0.00 175529.75 10298.51 230686.72 00:13:46.978 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x0 length 0x2000 00:13:46.978 raid0 : 5.24 733.39 2.86 0.00 0.00 169363.54 2652.65 149796.57 00:13:46.978 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x2000 length 0x2000 00:13:46.978 raid0 : 5.24 732.88 2.86 0.00 0.00 169523.86 2683.86 137812.85 00:13:46.978 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x0 length 0x2000 00:13:46.978 concat0 : 5.24 732.87 2.86 0.00 0.00 169135.30 2824.29 144803.35 00:13:46.978 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x2000 length 0x2000 00:13:46.978 concat0 : 5.24 732.37 2.86 0.00 0.00 169288.57 2793.08 133818.27 00:13:46.978 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x0 length 0x1000 00:13:46.978 raid1 : 5.24 732.36 2.86 0.00 0.00 168880.87 3276.80 
139810.13 00:13:46.978 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x1000 length 0x1000 00:13:46.978 raid1 : 5.25 731.92 2.86 0.00 0.00 169023.67 3401.63 130822.34 00:13:46.978 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x0 length 0x4e2 00:13:46.978 AIO0 : 5.25 731.78 2.86 0.00 0.00 168324.33 2356.18 139810.13 00:13:46.978 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:46.978 Verification LBA range: start 0x4e2 length 0x4e2 00:13:46.978 AIO0 : 5.25 731.45 2.86 0.00 0.00 168419.51 2761.87 134816.91 00:13:46.978 =================================================================================================================== 00:13:46.978 Total : 24629.46 96.21 0.00 0.00 163184.18 553.94 331549.74 00:13:47.579 00:13:47.579 real 0m6.873s 00:13:47.579 user 0m11.641s 00:13:47.579 sys 0m0.623s 00:13:47.579 07:24:21 blockdev_general.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:47.579 07:24:21 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:47.579 ************************************ 00:13:47.579 END TEST bdev_verify 00:13:47.579 ************************************ 00:13:47.579 07:24:21 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:47.579 07:24:21 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:13:47.579 07:24:21 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:47.579 07:24:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:47.579 ************************************ 00:13:47.579 START TEST bdev_verify_big_io 00:13:47.580 ************************************ 00:13:47.580 07:24:21 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:47.580 [2024-07-12 07:24:21.407023] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
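That completes bdev_verify: bdevperf replays a verify workload against every bdev in the stack, and the table above reports one job per bdev per core. The exact invocation appears in the trace; spelled out on its own, with flag meanings per bdevperf's help text:

  # Verify workload: queue depth 128 (-q), 4 KiB I/Os (-o), 5 s (-t),
  # cores 0-1 (-m 0x3); -C lets every core drive every bdev, which is
  # why each bdev shows both a Core Mask 0x1 and a Core Mask 0x2 job.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3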
00:13:47.580 [2024-07-12 07:24:21.407468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129552 ] 00:13:47.838 [2024-07-12 07:24:21.557501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:47.838 [2024-07-12 07:24:21.640014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.838 [2024-07-12 07:24:21.640014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.097 [2024-07-12 07:24:21.822267] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:48.097 [2024-07-12 07:24:21.822635] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:48.097 [2024-07-12 07:24:21.830155] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:48.097 [2024-07-12 07:24:21.830301] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:48.097 [2024-07-12 07:24:21.838247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:48.097 [2024-07-12 07:24:21.838411] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:48.097 [2024-07-12 07:24:21.838557] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:48.097 [2024-07-12 07:24:21.949844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:48.097 [2024-07-12 07:24:21.950225] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.097 [2024-07-12 07:24:21.950315] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:48.097 [2024-07-12 07:24:21.950442] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.097 [2024-07-12 07:24:21.953609] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.097 [2024-07-12 07:24:21.953775] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:48.356 [2024-07-12 07:24:22.173192] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:48.356 [2024-07-12 07:24:22.174766] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:48.356 [2024-07-12 07:24:22.176886] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:13:48.356 [2024-07-12 07:24:22.179035] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:13:48.356 [2024-07-12 07:24:22.180469] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:48.356 [2024-07-12 07:24:22.182632] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:48.356 [2024-07-12 07:24:22.184260] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:48.356 [2024-07-12 07:24:22.186555] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:48.356 [2024-07-12 07:24:22.188074] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:48.356 [2024-07-12 07:24:22.190391] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:48.356 [2024-07-12 07:24:22.191881] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:48.356 [2024-07-12 07:24:22.194143] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:48.356 [2024-07-12 07:24:22.195683] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:48.356 [2024-07-12 07:24:22.197999] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:48.356 [2024-07-12 07:24:22.200247] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:13:48.356 [2024-07-12 07:24:22.201759] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:13:48.356 [2024-07-12 07:24:22.238717] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:48.615 [2024-07-12 07:24:22.242108] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:48.615 Running I/O for 5 seconds... 00:13:55.179 00:13:55.179 Latency(us) 00:13:55.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.179 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:55.179 Verification LBA range: start 0x0 length 0x100 00:13:55.179 Malloc0 : 5.70 269.27 16.83 0.00 0.00 468340.26 717.78 1326198.98 00:13:55.179 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:55.179 Verification LBA range: start 0x100 length 0x100 00:13:55.179 Malloc0 : 5.46 257.85 16.12 0.00 0.00 488797.76 663.16 1557884.34 00:13:55.179 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:55.179 Verification LBA range: start 0x0 length 0x80 00:13:55.179 Malloc1p0 : 5.89 103.82 6.49 0.00 0.00 1169372.19 3027.14 2029244.22 00:13:55.179 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:55.179 Verification LBA range: start 0x80 length 0x80 00:13:55.179 Malloc1p0 : 5.88 81.63 5.10 0.00 0.00 1465344.05 2683.86 2236962.13 00:13:55.179 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:55.179 Verification LBA range: start 0x0 length 0x80 00:13:55.179 Malloc1p1 : 6.25 48.65 3.04 0.00 0.00 2377027.93 2153.33 3898705.43 00:13:55.179 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:55.179 Verification LBA range: start 0x80 length 0x80 00:13:55.179 Malloc1p1 : 6.20 51.62 3.23 0.00 0.00 2242633.75 2090.91 3659030.92 00:13:55.179 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:55.179 Verification LBA range: start 0x0 length 0x20 00:13:55.179 Malloc2p0 : 5.84 35.63 2.23 0.00 0.00 818536.30 873.81 1485981.99 00:13:55.179 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:55.179 Verification LBA range: start 0x20 length 0x20 00:13:55.179 Malloc2p0 : 5.83 38.44 2.40 0.00 0.00 757819.60 885.52 1310220.68 00:13:55.179 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:55.179 Verification LBA range: start 0x0 length 0x20 00:13:55.179 Malloc2p1 : 5.84 35.62 2.23 0.00 0.00 813474.76 920.62 1462014.54 00:13:55.179 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:55.179 Verification LBA range: start 0x20 length 0x20 00:13:55.179 Malloc2p1 : 5.83 38.44 2.40 0.00 0.00 753143.42 947.93 1286253.23 00:13:55.179 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:55.179 Verification LBA range: start 0x0 length 0x20 00:13:55.179 Malloc2p2 : 5.84 35.62 2.23 0.00 0.00 807713.84 897.22 1438047.09 00:13:55.179 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:55.179 Verification LBA range: start 0x20 length 0x20 00:13:55.179 Malloc2p2 : 5.83 38.43 2.40 0.00 0.00 747966.62 924.53 1270274.93 00:13:55.179 Job: Malloc2p3 (Core Mask 0x1, workload: verify, 
depth: 32, IO size: 65536) 00:13:55.179 Verification LBA range: start 0x0 length 0x20 00:13:55.179 Malloc2p3 : 5.84 35.61 2.23 0.00 0.00 802130.96 940.13 1422068.78 00:13:55.180 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:55.180 Verification LBA range: start 0x20 length 0x20 00:13:55.180 Malloc2p3 : 5.88 40.80 2.55 0.00 0.00 705729.97 940.13 1246307.47 00:13:55.180 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:55.180 Verification LBA range: start 0x0 length 0x20 00:13:55.180 Malloc2p4 : 5.90 37.98 2.37 0.00 0.00 752610.39 889.42 1398101.33 00:13:55.180 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:55.180 Verification LBA range: start 0x20 length 0x20 00:13:55.180 Malloc2p4 : 5.88 40.79 2.55 0.00 0.00 701062.38 866.01 1230329.17 00:13:55.180 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:55.180 Verification LBA range: start 0x0 length 0x20 00:13:55.180 Malloc2p5 : 5.90 37.98 2.37 0.00 0.00 747688.53 756.78 1382123.03 00:13:55.180 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:55.180 Verification LBA range: start 0x20 length 0x20 00:13:55.180 Malloc2p5 : 5.88 40.79 2.55 0.00 0.00 696471.20 784.09 1214350.87 00:13:55.180 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:55.180 Verification LBA range: start 0x0 length 0x20 00:13:55.180 Malloc2p6 : 5.90 37.97 2.37 0.00 0.00 742896.53 752.88 1366144.73 00:13:55.180 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:55.180 Verification LBA range: start 0x20 length 0x20 00:13:55.180 Malloc2p6 : 5.89 40.78 2.55 0.00 0.00 691694.08 772.39 1198372.57 00:13:55.180 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:55.180 Verification LBA range: start 0x0 length 0x20 00:13:55.180 Malloc2p7 : 5.90 37.96 2.37 0.00 0.00 738297.46 760.69 1350166.43 00:13:55.180 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:55.180 Verification LBA range: start 0x20 length 0x20 00:13:55.180 Malloc2p7 : 5.89 40.77 2.55 0.00 0.00 687023.45 776.29 1182394.27 00:13:55.180 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:55.180 Verification LBA range: start 0x0 length 0x100 00:13:55.180 TestPT : 6.31 50.72 3.17 0.00 0.00 2117508.84 1302.92 3627074.32 00:13:55.180 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:55.180 Verification LBA range: start 0x100 length 0x100 00:13:55.180 TestPT : 6.24 48.86 3.05 0.00 0.00 2205067.74 77394.90 3115768.69 00:13:55.180 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:55.180 Verification LBA range: start 0x0 length 0x200 00:13:55.180 raid0 : 6.33 55.60 3.47 0.00 0.00 1914064.58 1388.74 3499247.91 00:13:55.180 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:55.180 Verification LBA range: start 0x200 length 0x200 00:13:55.180 raid0 : 6.20 59.33 3.71 0.00 0.00 1785946.96 1419.95 3259573.39 00:13:55.180 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:55.180 Verification LBA range: start 0x0 length 0x200 00:13:55.180 concat0 : 6.33 60.68 3.79 0.00 0.00 1725692.27 1373.14 3371421.50 00:13:55.180 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:55.180 Verification LBA range: start 0x200 length 0x200 00:13:55.180 concat0 : 6.20 69.63 4.35 0.00 0.00 1506225.16 
1373.14 3147725.29 00:13:55.180 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:55.180 Verification LBA range: start 0x0 length 0x100 00:13:55.180 raid1 : 6.26 85.04 5.32 0.00 0.00 1208677.22 1763.23 3259573.39 00:13:55.180 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:55.180 Verification LBA range: start 0x100 length 0x100 00:13:55.180 raid1 : 6.25 76.86 4.80 0.00 0.00 1340946.99 1778.83 3019898.88 00:13:55.180 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:13:55.180 Verification LBA range: start 0x0 length 0x4e 00:13:55.180 AIO0 : 6.33 72.64 4.54 0.00 0.00 851159.00 651.46 2005276.77 00:13:55.180 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:13:55.180 Verification LBA range: start 0x4e length 0x4e 00:13:55.180 AIO0 : 6.32 78.98 4.94 0.00 0.00 783572.86 1341.93 1797558.86 00:13:55.180 =================================================================================================================== 00:13:55.180 Total : 2084.78 130.30 0.00 0.00 1050226.26 651.46 3898705.43 00:13:55.746 00:13:55.746 real 0m8.027s 00:13:55.746 user 0m14.585s 00:13:55.746 sys 0m0.591s 00:13:55.746 ************************************ 00:13:55.746 END TEST bdev_verify_big_io 00:13:55.746 ************************************ 00:13:55.746 07:24:29 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:55.746 07:24:29 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.746 07:24:29 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:55.746 07:24:29 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:55.746 07:24:29 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:55.746 07:24:29 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:55.746 ************************************ 00:13:55.746 START TEST bdev_write_zeroes 00:13:55.746 ************************************ 00:13:55.746 07:24:29 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:55.746 [2024-07-12 07:24:29.507730] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
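bdev_verify_big_io was the same verify pass at 64 KiB blocks; the WARNING flood during its setup showed bdevperf clamping the requested queue depth per bdev, since a verify job cannot keep more I/Os in flight than the bdev can accept (32 for the Malloc2p* partitions, 78 for AIO0). A schematic of that clamp in shell arithmetic, illustrating the rule stated in the warning text rather than SPDK's actual C code:

  # Verify-mode queue-depth clamp (illustration only, not SPDK source).
  requested_qd=128
  bdev_max_outstanding=32   # Malloc2p* above; AIO0 reports 78
  qd=$(( requested_qd > bdev_max_outstanding ? bdev_max_outstanding : requested_qd ))
  echo "effective queue depth: $qd"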
00:13:55.746 [2024-07-12 07:24:29.508401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129676 ] 00:13:56.004 [2024-07-12 07:24:29.665360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.004 [2024-07-12 07:24:29.739821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.263 [2024-07-12 07:24:29.920910] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:56.263 [2024-07-12 07:24:29.921296] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:56.263 [2024-07-12 07:24:29.928823] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:56.263 [2024-07-12 07:24:29.928972] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:56.263 [2024-07-12 07:24:29.936886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:56.263 [2024-07-12 07:24:29.937057] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:56.263 [2024-07-12 07:24:29.937225] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:56.263 [2024-07-12 07:24:30.049946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:56.263 [2024-07-12 07:24:30.050314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.263 [2024-07-12 07:24:30.050419] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:56.263 [2024-07-12 07:24:30.050521] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.263 [2024-07-12 07:24:30.053563] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.263 [2024-07-12 07:24:30.053717] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:56.522 Running I/O for 1 seconds... 
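Each bdevperf run here rebuilds the same stack from bdev.json, and the NOTICE lines just above show the passthru vbdev matching on Malloc3, deferring creation until the base bdev arrives, then registering as TestPT. A hedged sketch of building that chain by hand over RPC (rpc.py flag names as commonly documented, worth verifying against your SPDK tree; the 128 MiB / 512-byte sizing mirrors the Malloc_0 create call in the QoS test below):

  # Recreate the Malloc3 -> TestPT passthru chain manually.
  scripts/rpc.py bdev_malloc_create -b Malloc3 128 512
  scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT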
00:13:57.898 00:13:57.898 Latency(us) 00:13:57.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.898 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.898 Malloc0 : 1.03 5946.86 23.23 0.00 0.00 21506.98 670.96 37698.80 00:13:57.898 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.898 Malloc1p0 : 1.03 5940.19 23.20 0.00 0.00 21492.83 877.71 36949.82 00:13:57.898 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.898 Malloc1p1 : 1.04 5934.23 23.18 0.00 0.00 21472.55 885.52 36200.84 00:13:57.898 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.898 Malloc2p0 : 1.04 5928.01 23.16 0.00 0.00 21457.85 893.32 35451.86 00:13:57.898 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.898 Malloc2p1 : 1.04 5921.92 23.13 0.00 0.00 21429.27 893.32 34702.87 00:13:57.898 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.898 Malloc2p2 : 1.04 5915.92 23.11 0.00 0.00 21413.60 869.91 33704.23 00:13:57.898 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.898 Malloc2p3 : 1.04 5910.06 23.09 0.00 0.00 21396.08 873.81 32955.25 00:13:57.898 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.898 Malloc2p4 : 1.04 5903.86 23.06 0.00 0.00 21378.54 889.42 32206.26 00:13:57.898 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.898 Malloc2p5 : 1.04 5897.73 23.04 0.00 0.00 21353.05 881.62 31332.45 00:13:57.898 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.898 Malloc2p6 : 1.04 5891.57 23.01 0.00 0.00 21342.24 877.71 30583.47 00:13:57.898 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.898 Malloc2p7 : 1.04 5885.46 22.99 0.00 0.00 21318.33 869.91 29709.65 00:13:57.898 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.898 TestPT : 1.04 5879.45 22.97 0.00 0.00 21297.03 889.42 28960.67 00:13:57.898 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.899 raid0 : 1.05 5872.52 22.94 0.00 0.00 21276.84 1334.13 27587.54 00:13:57.899 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.899 concat0 : 1.05 5951.75 23.25 0.00 0.00 20929.68 1349.73 26089.57 00:13:57.899 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.899 raid1 : 1.06 5943.36 23.22 0.00 0.00 20886.76 2122.12 23967.45 00:13:57.899 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:57.899 AIO0 : 1.06 5932.31 23.17 0.00 0.00 20833.08 1404.34 23343.30 00:13:57.899 =================================================================================================================== 00:13:57.899 Total : 94655.20 369.75 0.00 0.00 21297.46 670.96 37698.80 00:13:58.467 ************************************ 00:13:58.467 END TEST bdev_write_zeroes 00:13:58.467 ************************************ 00:13:58.467 00:13:58.467 real 0m2.599s 00:13:58.467 user 0m1.922s 00:13:58.467 sys 0m0.492s 00:13:58.467 07:24:32 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:58.467 07:24:32 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:58.467 07:24:32 blockdev_general 
-- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:58.467 07:24:32 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:58.467 07:24:32 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:58.467 07:24:32 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:58.467 ************************************ 00:13:58.467 START TEST bdev_json_nonenclosed 00:13:58.467 ************************************ 00:13:58.467 07:24:32 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:58.467 [2024-07-12 07:24:32.174169] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:58.467 [2024-07-12 07:24:32.174568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129728 ] 00:13:58.467 [2024-07-12 07:24:32.319455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.727 [2024-07-12 07:24:32.399521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.727 [2024-07-12 07:24:32.399921] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:58.727 [2024-07-12 07:24:32.400050] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:58.727 [2024-07-12 07:24:32.400101] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:58.727 ************************************ 00:13:58.727 END TEST bdev_json_nonenclosed 00:13:58.727 ************************************ 00:13:58.727 00:13:58.727 real 0m0.477s 00:13:58.727 user 0m0.257s 00:13:58.727 sys 0m0.120s 00:13:58.727 07:24:32 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:58.727 07:24:32 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:58.996 07:24:32 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:58.996 07:24:32 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:58.996 07:24:32 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:58.996 07:24:32 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:58.996 ************************************ 00:13:58.996 START TEST bdev_json_nonarray 00:13:58.996 ************************************ 00:13:58.996 07:24:32 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:58.996 [2024-07-12 07:24:32.726277] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
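bdev_json_nonenclosed is a negative test: bdevperf is handed a config whose top level is not a JSON object, json_config_prepare_ctx rejects it with the "not enclosed in {}" error shown, and the test passes because the app stops with a non-zero status. A rough reproduction; the real fixture is test/bdev/nonenclosed.json, whose contents are not shown in this log, so the stand-in below is only a guess at a minimal trigger (valid JSON, but not an object):

  # Expect a clean failure on a non-object top-level config.
  echo '[]' > /tmp/nonenclosed.json
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' \
      && echo "unexpected success" || echo "failed as expected"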
00:13:58.996 [2024-07-12 07:24:32.726655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129761 ] 00:13:59.294 [2024-07-12 07:24:32.871911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.294 [2024-07-12 07:24:32.955039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.294 [2024-07-12 07:24:32.955387] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:59.294 [2024-07-12 07:24:32.955514] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:59.294 [2024-07-12 07:24:32.955647] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:59.294 00:13:59.294 real 0m0.489s 00:13:59.294 user 0m0.260s 00:13:59.294 sys 0m0.129s 00:13:59.294 07:24:33 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:59.294 07:24:33 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:59.294 ************************************ 00:13:59.294 END TEST bdev_json_nonarray 00:13:59.294 ************************************ 00:13:59.553 07:24:33 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:13:59.553 07:24:33 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:13:59.553 07:24:33 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:59.553 07:24:33 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:59.553 07:24:33 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:59.553 ************************************ 00:13:59.553 START TEST bdev_qos 00:13:59.553 ************************************ 00:13:59.553 07:24:33 blockdev_general.bdev_qos -- common/autotest_common.sh@1121 -- # qos_test_suite '' 00:13:59.553 07:24:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=129790 00:13:59.553 07:24:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 129790' 00:13:59.553 Process qos testing pid: 129790 00:13:59.553 07:24:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:13:59.553 07:24:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:13:59.553 07:24:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 129790 00:13:59.553 07:24:33 blockdev_general.bdev_qos -- common/autotest_common.sh@827 -- # '[' -z 129790 ']' 00:13:59.553 07:24:33 blockdev_general.bdev_qos -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.553 07:24:33 blockdev_general.bdev_qos -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:59.553 07:24:33 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
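The nonarray variant plays the same game with a config whose "subsystems" key is not an array. Both negative tests, like every step in this log, go through the run_test wrapper that prints the starred START/END banners and the real/user/sys timings. A simplified sketch of that pattern; the real helper lives in common/autotest_common.sh and additionally handles xtrace and exit-status bookkeeping:

  # Simplified run_test: banner, timed body, banner.
  run_test() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"
      local rc=$?
      echo "END TEST $name"
      return $rc
  }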
00:13:59.553 07:24:33 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:59.553 07:24:33 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:59.553 [2024-07-12 07:24:33.286277] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:59.553 [2024-07-12 07:24:33.286662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129790 ] 00:13:59.553 [2024-07-12 07:24:33.435689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.812 [2024-07-12 07:24:33.520688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.749 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:00.749 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@860 -- # return 0 00:14:00.749 07:24:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:14:00.749 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.749 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:00.749 Malloc_0 00:14:00.749 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.749 07:24:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:14:00.749 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_0 00:14:00.749 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:00.749 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local i 00:14:00.749 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:00.749 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:00.749 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:14:00.749 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.749 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:00.749 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.749 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:14:00.749 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:00.750 [ 00:14:00.750 { 00:14:00.750 "name": "Malloc_0", 00:14:00.750 "aliases": [ 00:14:00.750 "3db08213-01d2-4c1b-a6d6-e60117e4050c" 00:14:00.750 ], 00:14:00.750 "product_name": "Malloc disk", 00:14:00.750 "block_size": 512, 00:14:00.750 "num_blocks": 262144, 00:14:00.750 "uuid": "3db08213-01d2-4c1b-a6d6-e60117e4050c", 00:14:00.750 "assigned_rate_limits": { 00:14:00.750 "rw_ios_per_sec": 0, 00:14:00.750 "rw_mbytes_per_sec": 0, 00:14:00.750 "r_mbytes_per_sec": 0, 00:14:00.750 "w_mbytes_per_sec": 0 00:14:00.750 }, 00:14:00.750 "claimed": false, 00:14:00.750 "zoned": false, 00:14:00.750 "supported_io_types": { 00:14:00.750 "read": true, 00:14:00.750 "write": true, 00:14:00.750 "unmap": true, 00:14:00.750 "write_zeroes": true, 00:14:00.750 "flush": true, 
00:14:00.750 "reset": true, 00:14:00.750 "compare": false, 00:14:00.750 "compare_and_write": false, 00:14:00.750 "abort": true, 00:14:00.750 "nvme_admin": false, 00:14:00.750 "nvme_io": false 00:14:00.750 }, 00:14:00.750 "memory_domains": [ 00:14:00.750 { 00:14:00.750 "dma_device_id": "system", 00:14:00.750 "dma_device_type": 1 00:14:00.750 }, 00:14:00.750 { 00:14:00.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.750 "dma_device_type": 2 00:14:00.750 } 00:14:00.750 ], 00:14:00.750 "driver_specific": {} 00:14:00.750 } 00:14:00.750 ] 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # return 0 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:00.750 Null_1 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@895 -- # local bdev_name=Null_1 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local i 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:00.750 [ 00:14:00.750 { 00:14:00.750 "name": "Null_1", 00:14:00.750 "aliases": [ 00:14:00.750 "a8a2a435-e451-4d68-9af7-e96045fa5c47" 00:14:00.750 ], 00:14:00.750 "product_name": "Null disk", 00:14:00.750 "block_size": 512, 00:14:00.750 "num_blocks": 262144, 00:14:00.750 "uuid": "a8a2a435-e451-4d68-9af7-e96045fa5c47", 00:14:00.750 "assigned_rate_limits": { 00:14:00.750 "rw_ios_per_sec": 0, 00:14:00.750 "rw_mbytes_per_sec": 0, 00:14:00.750 "r_mbytes_per_sec": 0, 00:14:00.750 "w_mbytes_per_sec": 0 00:14:00.750 }, 00:14:00.750 "claimed": false, 00:14:00.750 "zoned": false, 00:14:00.750 "supported_io_types": { 00:14:00.750 "read": true, 00:14:00.750 "write": true, 00:14:00.750 "unmap": false, 00:14:00.750 "write_zeroes": true, 00:14:00.750 "flush": false, 00:14:00.750 "reset": true, 00:14:00.750 "compare": false, 00:14:00.750 "compare_and_write": false, 00:14:00.750 "abort": true, 00:14:00.750 "nvme_admin": false, 00:14:00.750 "nvme_io": false 00:14:00.750 }, 00:14:00.750 "driver_specific": {} 00:14:00.750 } 00:14:00.750 ] 
00:14:00.750 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # return 0 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:00.750 07:24:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:14:00.750 Running I/O for 60 seconds... 00:14:06.019 07:24:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 83116.12 332464.48 0.00 0.00 335872.00 0.00 0.00 ' 00:14:06.019 07:24:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:14:06.019 07:24:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:14:06.019 07:24:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=83116.12 00:14:06.019 07:24:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 83116 00:14:06.019 07:24:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=83116 00:14:06.019 07:24:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=20000 00:14:06.019 07:24:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 20000 -gt 1000 ']' 00:14:06.019 07:24:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 20000 Malloc_0 00:14:06.019 07:24:39 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.019 07:24:39 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:06.019 07:24:39 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.019 07:24:39 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 20000 IOPS Malloc_0 00:14:06.019 07:24:39 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:06.019 07:24:39 blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:06.019 07:24:39 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:06.019 ************************************ 00:14:06.019 START TEST bdev_qos_iops 00:14:06.019 ************************************ 00:14:06.019 07:24:39 
blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1121 -- # run_qos_test 20000 IOPS Malloc_0 00:14:06.019 07:24:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=20000 00:14:06.019 07:24:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:14:06.019 07:24:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:14:06.019 07:24:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:14:06.019 07:24:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:14:06.019 07:24:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:14:06.019 07:24:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:14:06.019 07:24:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:06.019 07:24:39 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:14:11.297 07:24:44 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 20002.90 80011.58 0.00 0.00 81280.00 0.00 0.00 ' 00:14:11.297 07:24:44 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:14:11.297 07:24:44 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:14:11.297 07:24:44 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=20002.90 00:14:11.297 07:24:44 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 20002 00:14:11.297 ************************************ 00:14:11.297 END TEST bdev_qos_iops 00:14:11.297 ************************************ 00:14:11.297 07:24:44 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=20002 00:14:11.297 07:24:44 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:14:11.297 07:24:44 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=18000 00:14:11.297 07:24:44 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=22000 00:14:11.297 07:24:44 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 20002 -lt 18000 ']' 00:14:11.297 07:24:44 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 20002 -gt 22000 ']' 00:14:11.297 00:14:11.297 real 0m5.199s 00:14:11.297 user 0m0.117s 00:14:11.297 sys 0m0.031s 00:14:11.297 07:24:44 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:11.297 07:24:44 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:14:11.297 07:24:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:14:11.297 07:24:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:14:11.297 07:24:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:14:11.297 07:24:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:14:11.297 07:24:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:11.297 07:24:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:14:11.297 07:24:44 blockdev_general.bdev_qos -- 
bdev/blockdev.sh@378 -- # tail -1 00:14:16.563 07:24:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 31298.11 125192.44 0.00 0.00 126976.00 0.00 0.00 ' 00:14:16.563 07:24:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:14:16.563 07:24:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:16.563 07:24:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:14:16.563 07:24:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=126976.00 00:14:16.563 07:24:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 126976 00:14:16.563 07:24:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=126976 00:14:16.563 07:24:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=12 00:14:16.563 07:24:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 12 -lt 2 ']' 00:14:16.563 07:24:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 12 Null_1 00:14:16.563 07:24:50 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.563 07:24:50 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:16.563 07:24:50 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.563 07:24:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 12 BANDWIDTH Null_1 00:14:16.563 07:24:50 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:16.563 07:24:50 blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:16.563 07:24:50 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:16.563 ************************************ 00:14:16.563 START TEST bdev_qos_bw 00:14:16.563 ************************************ 00:14:16.563 07:24:50 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1121 -- # run_qos_test 12 BANDWIDTH Null_1 00:14:16.563 07:24:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=12 00:14:16.563 07:24:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:14:16.563 07:24:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:14:16.563 07:24:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:14:16.563 07:24:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:14:16.563 07:24:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:14:16.563 07:24:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:16.563 07:24:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:14:16.563 07:24:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 3072.89 12291.57 0.00 0.00 12544.00 0.00 0.00 ' 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_bw -- 
bdev/blockdev.sh@382 -- # awk '{print $6}' 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=12544.00 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 12544 00:14:21.836 ************************************ 00:14:21.836 END TEST bdev_qos_bw 00:14:21.836 ************************************ 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=12544 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=12288 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=11059 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=13516 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 12544 -lt 11059 ']' 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 12544 -gt 13516 ']' 00:14:21.836 00:14:21.836 real 0m5.243s 00:14:21.836 user 0m0.111s 00:14:21.836 sys 0m0.045s 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:14:21.836 07:24:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:14:21.836 07:24:55 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.836 07:24:55 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:21.836 07:24:55 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.836 07:24:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:14:21.836 07:24:55 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:21.836 07:24:55 blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:21.836 07:24:55 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:21.836 ************************************ 00:14:21.836 START TEST bdev_qos_ro_bw 00:14:21.836 ************************************ 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1121 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 
00:14:21.836 07:24:55 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:14:27.110 07:25:00 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 511.98 2047.93 0.00 0.00 2068.00 0.00 0.00 ' 00:14:27.110 07:25:00 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:14:27.110 07:25:00 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:27.110 07:25:00 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:14:27.110 07:25:00 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2068.00 00:14:27.110 07:25:00 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2068 00:14:27.110 ************************************ 00:14:27.110 END TEST bdev_qos_ro_bw 00:14:27.110 ************************************ 00:14:27.110 07:25:00 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2068 00:14:27.110 07:25:00 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:27.110 07:25:00 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:14:27.110 07:25:00 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:14:27.110 07:25:00 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:14:27.110 07:25:00 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2068 -lt 1843 ']' 00:14:27.110 07:25:00 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2068 -gt 2252 ']' 00:14:27.110 00:14:27.110 real 0m5.188s 00:14:27.110 user 0m0.124s 00:14:27.110 sys 0m0.033s 00:14:27.110 07:25:00 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:27.110 07:25:00 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:14:27.110 07:25:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:14:27.110 07:25:00 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.110 07:25:00 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:27.676 07:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.676 07:25:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:14:27.676 07:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.676 07:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:27.676 00:14:27.676 Latency(us) 00:14:27.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.676 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:27.676 Malloc_0 : 26.73 27785.26 108.54 0.00 0.00 9125.83 2184.53 503316.48 00:14:27.676 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:27.676 Null_1 : 26.85 29267.58 114.33 0.00 0.00 8729.07 624.15 120835.90 00:14:27.676 =================================================================================================================== 00:14:27.676 Total : 57052.84 222.86 0.00 0.00 8921.85 624.15 503316.48 00:14:27.676 0 00:14:27.676 07:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
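(Annotation: the three QoS passes above -- 20000 read IOPS on Malloc_0, a 12 MB/s cap on Null_1, and a 2 MB/s read-only cap on Malloc_0 -- all funnel through the same run_qos_test tolerance check: the rate measured by scripts/iostat.py must land within +/-10% of the configured limit, e.g. 18000-22000 for the 20000 IOPS run. A minimal standalone sketch of that check, assuming an SPDK checkout, a running target on the default RPC socket, and a bdev named Malloc_0; the helper name check_qos_iops is illustrative, not from the suite:

    # Set a read IOPS cap, sample for 5 one-second intervals, and require the
    # measured rate to sit within +/-10% of the cap -- the same bounds the
    # suite computes above (lower=18000, upper=22000 for a 20000 IOPS limit).
    check_qos_iops() {
        local qos_limit=$1
        ./scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec "$qos_limit" Malloc_0
        local result
        result=$(./scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print $2}')
        result=${result%.*}                      # truncate "20002.90" -> "20002"
        local lower=$((qos_limit * 9 / 10)) upper=$((qos_limit * 11 / 10))
        [ "$result" -ge "$lower" ] && [ "$result" -le "$upper" ]
    }

The bandwidth variants above differ only in the iostat.py column they read -- $6, KB read per second, instead of $2 -- and in converting the MB limit to KB before computing the same 90%/110% bounds, which is where lower=11059 and upper=13516 for the 12 MB run come from.)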
00:14:27.676 07:25:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 129790 00:14:27.676 07:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@946 -- # '[' -z 129790 ']' 00:14:27.676 07:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # kill -0 129790 00:14:27.676 07:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@951 -- # uname 00:14:27.676 07:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:27.676 07:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 129790 00:14:27.676 07:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:27.676 07:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:27.676 07:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@964 -- # echo 'killing process with pid 129790' 00:14:27.676 killing process with pid 129790 00:14:27.676 07:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@965 -- # kill 129790 00:14:27.676 Received shutdown signal, test time was about 26.893974 seconds 00:14:27.676 00:14:27.676 Latency(us) 00:14:27.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.676 =================================================================================================================== 00:14:27.676 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:27.676 07:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@970 -- # wait 129790 00:14:28.243 07:25:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:14:28.243 00:14:28.243 real 0m28.624s 00:14:28.243 user 0m29.326s 00:14:28.243 sys 0m0.866s 00:14:28.243 07:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:28.243 07:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:28.243 ************************************ 00:14:28.243 END TEST bdev_qos 00:14:28.243 ************************************ 00:14:28.243 07:25:01 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:14:28.243 07:25:01 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:28.243 07:25:01 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:28.243 07:25:01 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:28.243 ************************************ 00:14:28.243 START TEST bdev_qd_sampling 00:14:28.243 ************************************ 00:14:28.243 07:25:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1121 -- # qd_sampling_test_suite '' 00:14:28.243 07:25:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:14:28.243 07:25:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=130253 00:14:28.243 07:25:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 130253' 00:14:28.243 Process bdev QD sampling period testing pid: 130253 00:14:28.243 07:25:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:14:28.243 07:25:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:14:28.243 07:25:01 
blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 130253 00:14:28.243 07:25:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@827 -- # '[' -z 130253 ']' 00:14:28.243 07:25:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.243 07:25:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:28.243 07:25:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.243 07:25:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:28.243 07:25:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:28.243 [2024-07-12 07:25:02.002685] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:28.243 [2024-07-12 07:25:02.002966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130253 ] 00:14:28.502 [2024-07-12 07:25:02.167714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:28.502 [2024-07-12 07:25:02.257441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.502 [2024-07-12 07:25:02.257445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.068 07:25:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:29.068 07:25:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@860 -- # return 0 00:14:29.068 07:25:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:14:29.068 07:25:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.068 07:25:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:29.337 Malloc_QD 00:14:29.337 07:25:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.337 07:25:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:14:29.337 07:25:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_QD 00:14:29.337 07:25:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:29.337 07:25:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local i 00:14:29.337 07:25:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:29.337 07:25:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:29.337 07:25:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:14:29.337 07:25:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.337 07:25:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:29.337 07:25:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.337 07:25:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd 
bdev_get_bdevs -b Malloc_QD -t 2000 00:14:29.337 07:25:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.337 07:25:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:29.337 [ 00:14:29.337 { 00:14:29.337 "name": "Malloc_QD", 00:14:29.337 "aliases": [ 00:14:29.337 "166df19a-4ed5-4cae-8791-b6fc83525f60" 00:14:29.337 ], 00:14:29.337 "product_name": "Malloc disk", 00:14:29.337 "block_size": 512, 00:14:29.337 "num_blocks": 262144, 00:14:29.338 "uuid": "166df19a-4ed5-4cae-8791-b6fc83525f60", 00:14:29.338 "assigned_rate_limits": { 00:14:29.338 "rw_ios_per_sec": 0, 00:14:29.338 "rw_mbytes_per_sec": 0, 00:14:29.338 "r_mbytes_per_sec": 0, 00:14:29.338 "w_mbytes_per_sec": 0 00:14:29.338 }, 00:14:29.338 "claimed": false, 00:14:29.338 "zoned": false, 00:14:29.338 "supported_io_types": { 00:14:29.338 "read": true, 00:14:29.338 "write": true, 00:14:29.338 "unmap": true, 00:14:29.338 "write_zeroes": true, 00:14:29.338 "flush": true, 00:14:29.338 "reset": true, 00:14:29.338 "compare": false, 00:14:29.338 "compare_and_write": false, 00:14:29.338 "abort": true, 00:14:29.338 "nvme_admin": false, 00:14:29.338 "nvme_io": false 00:14:29.338 }, 00:14:29.338 "memory_domains": [ 00:14:29.338 { 00:14:29.338 "dma_device_id": "system", 00:14:29.338 "dma_device_type": 1 00:14:29.338 }, 00:14:29.338 { 00:14:29.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.338 "dma_device_type": 2 00:14:29.338 } 00:14:29.338 ], 00:14:29.338 "driver_specific": {} 00:14:29.338 } 00:14:29.338 ] 00:14:29.338 07:25:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.338 07:25:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@903 -- # return 0 00:14:29.338 07:25:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:14:29.338 07:25:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:29.338 Running I/O for 5 seconds... 
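(Annotation: the waitforbdev sequence above is simply "create, then poll until visible": bdev_malloc_create registers a 128 MiB malloc disk with 512-byte blocks, and bdev_get_bdevs with -t 2000 waits up to 2000 ms for the named bdev to show up before failing. A by-hand sketch against the default RPC socket, reusing the names from this run:

    ./scripts/rpc.py bdev_malloc_create -b Malloc_QD 128 512   # 128 MiB, 512 B blocks
    ./scripts/rpc.py bdev_get_bdevs -b Malloc_QD -t 2000       # errors out if absent after 2 s

)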
00:14:31.257 07:25:04 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:14:31.257 07:25:04 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:14:31.257 07:25:04 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:14:31.257 07:25:04 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:14:31.257 07:25:04 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:14:31.257 07:25:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.257 07:25:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:31.257 07:25:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.257 07:25:04 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:14:31.257 07:25:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.257 07:25:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:31.257 07:25:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.257 07:25:05 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:14:31.257 "tick_rate": 2100000000, 00:14:31.257 "ticks": 1623924125800, 00:14:31.257 "bdevs": [ 00:14:31.257 { 00:14:31.257 "name": "Malloc_QD", 00:14:31.257 "bytes_read": 925929984, 00:14:31.257 "num_read_ops": 226051, 00:14:31.257 "bytes_written": 0, 00:14:31.257 "num_write_ops": 0, 00:14:31.257 "bytes_unmapped": 0, 00:14:31.257 "num_unmap_ops": 0, 00:14:31.257 "bytes_copied": 0, 00:14:31.257 "num_copy_ops": 0, 00:14:31.257 "read_latency_ticks": 2084249063772, 00:14:31.257 "max_read_latency_ticks": 10147332, 00:14:31.257 "min_read_latency_ticks": 403260, 00:14:31.257 "write_latency_ticks": 0, 00:14:31.257 "max_write_latency_ticks": 0, 00:14:31.257 "min_write_latency_ticks": 0, 00:14:31.257 "unmap_latency_ticks": 0, 00:14:31.257 "max_unmap_latency_ticks": 0, 00:14:31.257 "min_unmap_latency_ticks": 0, 00:14:31.257 "copy_latency_ticks": 0, 00:14:31.257 "max_copy_latency_ticks": 0, 00:14:31.257 "min_copy_latency_ticks": 0, 00:14:31.257 "io_error": {}, 00:14:31.257 "queue_depth_polling_period": 10, 00:14:31.257 "queue_depth": 512, 00:14:31.257 "io_time": 30, 00:14:31.257 "weighted_io_time": 15360 00:14:31.257 } 00:14:31.257 ] 00:14:31.257 }' 00:14:31.257 07:25:05 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:14:31.257 07:25:05 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:14:31.257 07:25:05 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:14:31.257 07:25:05 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:14:31.257 07:25:05 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:14:31.257 07:25:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.257 07:25:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:31.257 00:14:31.257 Latency(us) 00:14:31.257 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.257 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 
256, IO size: 4096) 00:14:31.257 Malloc_QD : 2.01 57879.44 226.09 0.00 0.00 4412.50 1045.46 5523.75 00:14:31.257 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:31.257 Malloc_QD : 2.01 58605.15 228.93 0.00 0.00 4358.08 706.07 4805.97 00:14:31.257 =================================================================================================================== 00:14:31.257 Total : 116484.59 455.02 0.00 0.00 4385.11 706.07 5523.75 00:14:31.257 0 00:14:31.257 07:25:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.257 07:25:05 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 130253 00:14:31.257 07:25:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@946 -- # '[' -z 130253 ']' 00:14:31.257 07:25:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # kill -0 130253 00:14:31.257 07:25:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@951 -- # uname 00:14:31.257 07:25:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:31.257 07:25:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 130253 00:14:31.516 07:25:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:31.516 07:25:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:31.516 killing process with pid 130253 00:14:31.516 07:25:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@964 -- # echo 'killing process with pid 130253' 00:14:31.516 07:25:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@965 -- # kill 130253 00:14:31.516 Received shutdown signal, test time was about 2.088224 seconds 00:14:31.516 00:14:31.516 Latency(us) 00:14:31.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.516 =================================================================================================================== 00:14:31.516 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:31.516 07:25:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@970 -- # wait 130253 00:14:31.775 07:25:05 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:14:31.775 00:14:31.775 real 0m3.660s 00:14:31.775 user 0m6.795s 00:14:31.775 sys 0m0.527s 00:14:31.775 ************************************ 00:14:31.775 END TEST bdev_qd_sampling 00:14:31.775 ************************************ 00:14:31.775 07:25:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:31.775 07:25:05 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:31.775 07:25:05 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:14:31.775 07:25:05 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:31.775 07:25:05 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:31.775 07:25:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:31.775 ************************************ 00:14:31.775 START TEST bdev_error 00:14:31.775 ************************************ 00:14:31.775 07:25:05 blockdev_general.bdev_error -- common/autotest_common.sh@1121 -- # error_test_suite '' 00:14:31.775 07:25:05 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:14:31.776 
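(Annotation, stepping back to the qd-sampling check that just completed: it enables queue-depth sampling on Malloc_QD with a period of 10, then reads the value back out of bdev_get_iostat with jq -- the suite only asserts that the period round-trips unchanged. A sketch of that round trip, assuming the same bdevperf target on the default /var/tmp/spdk.sock:

    ./scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10
    period=$(./scripts/rpc.py bdev_get_iostat -b Malloc_QD \
        | jq -r '.bdevs[0].queue_depth_polling_period')
    [ "$period" -eq 10 ] || echo "unexpected sampling period: $period" >&2

)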
07:25:05 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:14:31.776 07:25:05 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:14:31.776 07:25:05 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=130343 00:14:31.776 07:25:05 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 130343' 00:14:31.776 Process error testing pid: 130343 00:14:31.776 07:25:05 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 130343 00:14:31.776 07:25:05 blockdev_general.bdev_error -- common/autotest_common.sh@827 -- # '[' -z 130343 ']' 00:14:31.776 07:25:05 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.776 07:25:05 blockdev_general.bdev_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:31.776 07:25:05 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:14:31.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.776 07:25:05 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.776 07:25:05 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:31.776 07:25:05 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:32.035 [2024-07-12 07:25:05.722658] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:32.035 [2024-07-12 07:25:05.723115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130343 ] 00:14:32.035 [2024-07-12 07:25:05.879014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.294 [2024-07-12 07:25:05.957123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.862 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:32.862 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # return 0 00:14:32.862 07:25:06 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:32.863 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.863 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:32.863 Dev_1 00:14:32.863 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.863 07:25:06 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:14:32.863 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_1 00:14:32.863 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:32.863 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:14:32.863 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:32.863 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:32.863 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:14:32.863 07:25:06 
blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.863 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:32.863 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.863 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:32.863 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.863 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:32.863 [ 00:14:32.863 { 00:14:32.863 "name": "Dev_1", 00:14:32.863 "aliases": [ 00:14:32.863 "efc2ee3c-cf09-4e50-b60b-dabe7c0b86f3" 00:14:32.863 ], 00:14:32.863 "product_name": "Malloc disk", 00:14:32.863 "block_size": 512, 00:14:32.863 "num_blocks": 262144, 00:14:32.863 "uuid": "efc2ee3c-cf09-4e50-b60b-dabe7c0b86f3", 00:14:32.863 "assigned_rate_limits": { 00:14:32.863 "rw_ios_per_sec": 0, 00:14:32.863 "rw_mbytes_per_sec": 0, 00:14:32.863 "r_mbytes_per_sec": 0, 00:14:32.863 "w_mbytes_per_sec": 0 00:14:32.863 }, 00:14:32.863 "claimed": false, 00:14:32.863 "zoned": false, 00:14:32.863 "supported_io_types": { 00:14:32.863 "read": true, 00:14:32.863 "write": true, 00:14:32.863 "unmap": true, 00:14:32.863 "write_zeroes": true, 00:14:32.863 "flush": true, 00:14:32.863 "reset": true, 00:14:32.863 "compare": false, 00:14:32.863 "compare_and_write": false, 00:14:32.863 "abort": true, 00:14:32.863 "nvme_admin": false, 00:14:32.863 "nvme_io": false 00:14:32.863 }, 00:14:32.863 "memory_domains": [ 00:14:32.863 { 00:14:32.863 "dma_device_id": "system", 00:14:32.863 "dma_device_type": 1 00:14:32.863 }, 00:14:32.863 { 00:14:32.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.863 "dma_device_type": 2 00:14:32.863 } 00:14:32.863 ], 00:14:32.863 "driver_specific": {} 00:14:32.863 } 00:14:32.863 ] 00:14:32.863 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.863 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:14:32.863 07:25:06 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:14:32.863 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.863 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:32.863 true 00:14:32.863 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.863 07:25:06 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:32.863 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.863 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:33.122 Dev_2 00:14:33.122 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.122 07:25:06 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:14:33.122 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_2 00:14:33.122 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:33.122 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:14:33.122 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:33.122 07:25:06 blockdev_general.bdev_error -- 
common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:33.122 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:14:33.122 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.122 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:33.122 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.122 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:33.122 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.122 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:33.122 [ 00:14:33.122 { 00:14:33.122 "name": "Dev_2", 00:14:33.122 "aliases": [ 00:14:33.122 "a6da700b-c212-4960-bf61-305be9f42f15" 00:14:33.122 ], 00:14:33.122 "product_name": "Malloc disk", 00:14:33.122 "block_size": 512, 00:14:33.122 "num_blocks": 262144, 00:14:33.122 "uuid": "a6da700b-c212-4960-bf61-305be9f42f15", 00:14:33.122 "assigned_rate_limits": { 00:14:33.122 "rw_ios_per_sec": 0, 00:14:33.122 "rw_mbytes_per_sec": 0, 00:14:33.122 "r_mbytes_per_sec": 0, 00:14:33.122 "w_mbytes_per_sec": 0 00:14:33.122 }, 00:14:33.122 "claimed": false, 00:14:33.122 "zoned": false, 00:14:33.122 "supported_io_types": { 00:14:33.122 "read": true, 00:14:33.122 "write": true, 00:14:33.122 "unmap": true, 00:14:33.122 "write_zeroes": true, 00:14:33.122 "flush": true, 00:14:33.122 "reset": true, 00:14:33.122 "compare": false, 00:14:33.122 "compare_and_write": false, 00:14:33.122 "abort": true, 00:14:33.122 "nvme_admin": false, 00:14:33.122 "nvme_io": false 00:14:33.122 }, 00:14:33.122 "memory_domains": [ 00:14:33.122 { 00:14:33.122 "dma_device_id": "system", 00:14:33.122 "dma_device_type": 1 00:14:33.122 }, 00:14:33.122 { 00:14:33.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.122 "dma_device_type": 2 00:14:33.122 } 00:14:33.122 ], 00:14:33.122 "driver_specific": {} 00:14:33.122 } 00:14:33.122 ] 00:14:33.122 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.122 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:14:33.122 07:25:06 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:33.122 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.122 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:33.122 07:25:06 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.122 07:25:06 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:14:33.122 07:25:06 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:33.122 Running I/O for 5 seconds... 00:14:34.059 07:25:07 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 130343 00:14:34.059 Process is existed as continue on error is set. Pid: 130343 00:14:34.059 07:25:07 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. 
Pid: 130343' 00:14:34.059 07:25:07 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:14:34.059 07:25:07 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.059 07:25:07 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:34.059 07:25:07 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.059 07:25:07 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:14:34.059 07:25:07 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.059 07:25:07 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:34.059 07:25:07 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.059 07:25:07 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:14:34.059 Timeout while waiting for response: 00:14:34.059 00:14:34.059 00:14:38.250 00:14:38.250 Latency(us) 00:14:38.250 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.250 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:38.250 EE_Dev_1 : 0.90 48235.16 188.42 5.56 0.00 329.13 145.31 729.48 00:14:38.250 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:38.250 Dev_2 : 5.00 105331.80 411.45 0.00 0.00 149.39 55.34 35451.86 00:14:38.250 =================================================================================================================== 00:14:38.250 Total : 153566.97 599.87 5.56 0.00 163.07 55.34 35451.86 00:14:39.279 07:25:12 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 130343 00:14:39.279 07:25:12 blockdev_general.bdev_error -- common/autotest_common.sh@946 -- # '[' -z 130343 ']' 00:14:39.279 07:25:12 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # kill -0 130343 00:14:39.279 07:25:12 blockdev_general.bdev_error -- common/autotest_common.sh@951 -- # uname 00:14:39.279 07:25:12 blockdev_general.bdev_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:39.279 07:25:12 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 130343 00:14:39.279 07:25:12 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:39.279 07:25:12 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:39.279 killing process with pid 130343 00:14:39.279 07:25:12 blockdev_general.bdev_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 130343' 00:14:39.279 07:25:12 blockdev_general.bdev_error -- common/autotest_common.sh@965 -- # kill 130343 00:14:39.279 Received shutdown signal, test time was about 5.000000 seconds 00:14:39.279 00:14:39.279 Latency(us) 00:14:39.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.279 =================================================================================================================== 00:14:39.279 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:39.279 07:25:12 blockdev_general.bdev_error -- common/autotest_common.sh@970 -- # wait 130343 00:14:39.538 07:25:13 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=130440 00:14:39.538 07:25:13 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:14:39.539 Process error 
testing pid: 130440 00:14:39.539 07:25:13 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 130440' 00:14:39.539 07:25:13 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 130440 00:14:39.539 07:25:13 blockdev_general.bdev_error -- common/autotest_common.sh@827 -- # '[' -z 130440 ']' 00:14:39.539 07:25:13 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.539 07:25:13 blockdev_general.bdev_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:39.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.539 07:25:13 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.539 07:25:13 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:39.539 07:25:13 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:39.797 [2024-07-12 07:25:13.457878] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:39.797 [2024-07-12 07:25:13.458125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130440 ] 00:14:39.797 [2024-07-12 07:25:13.614767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.055 [2024-07-12 07:25:13.696372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # return 0 00:14:40.623 07:25:14 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:40.623 Dev_1 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.623 07:25:14 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_1 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:40.623 07:25:14 blockdev_general.bdev_error -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:40.623 [ 00:14:40.623 { 00:14:40.623 "name": "Dev_1", 00:14:40.623 "aliases": [ 00:14:40.623 "2f518d4e-0350-4108-9601-89402a35b980" 00:14:40.623 ], 00:14:40.623 "product_name": "Malloc disk", 00:14:40.623 "block_size": 512, 00:14:40.623 "num_blocks": 262144, 00:14:40.623 "uuid": "2f518d4e-0350-4108-9601-89402a35b980", 00:14:40.623 "assigned_rate_limits": { 00:14:40.623 "rw_ios_per_sec": 0, 00:14:40.623 "rw_mbytes_per_sec": 0, 00:14:40.623 "r_mbytes_per_sec": 0, 00:14:40.623 "w_mbytes_per_sec": 0 00:14:40.623 }, 00:14:40.623 "claimed": false, 00:14:40.623 "zoned": false, 00:14:40.623 "supported_io_types": { 00:14:40.623 "read": true, 00:14:40.623 "write": true, 00:14:40.623 "unmap": true, 00:14:40.623 "write_zeroes": true, 00:14:40.623 "flush": true, 00:14:40.623 "reset": true, 00:14:40.623 "compare": false, 00:14:40.623 "compare_and_write": false, 00:14:40.623 "abort": true, 00:14:40.623 "nvme_admin": false, 00:14:40.623 "nvme_io": false 00:14:40.623 }, 00:14:40.623 "memory_domains": [ 00:14:40.623 { 00:14:40.623 "dma_device_id": "system", 00:14:40.623 "dma_device_type": 1 00:14:40.623 }, 00:14:40.623 { 00:14:40.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.623 "dma_device_type": 2 00:14:40.623 } 00:14:40.623 ], 00:14:40.623 "driver_specific": {} 00:14:40.623 } 00:14:40.623 ] 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:14:40.623 07:25:14 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:40.623 true 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.623 07:25:14 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.623 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:40.881 Dev_2 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.881 07:25:14 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_2 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
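(Annotation: both halves of the error suite build their device under test the same way -- a malloc base bdev wrapped by an error bdev, which SPDK exposes under the EE_<base> name, then armed to inject failures. A sketch of that setup with the names from this run; per the command above, "all failure -n 5" fails the next 5 I/Os of any type:

    ./scripts/rpc.py bdev_malloc_create -b Dev_1 128 512
    ./scripts/rpc.py bdev_error_create Dev_1                         # exposes EE_Dev_1
    ./scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5

)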
00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:40.881 [ 00:14:40.881 { 00:14:40.881 "name": "Dev_2", 00:14:40.881 "aliases": [ 00:14:40.881 "6814fb55-8ab7-43e6-a047-7e328530ba75" 00:14:40.881 ], 00:14:40.881 "product_name": "Malloc disk", 00:14:40.881 "block_size": 512, 00:14:40.881 "num_blocks": 262144, 00:14:40.881 "uuid": "6814fb55-8ab7-43e6-a047-7e328530ba75", 00:14:40.881 "assigned_rate_limits": { 00:14:40.881 "rw_ios_per_sec": 0, 00:14:40.881 "rw_mbytes_per_sec": 0, 00:14:40.881 "r_mbytes_per_sec": 0, 00:14:40.881 "w_mbytes_per_sec": 0 00:14:40.881 }, 00:14:40.881 "claimed": false, 00:14:40.881 "zoned": false, 00:14:40.881 "supported_io_types": { 00:14:40.881 "read": true, 00:14:40.881 "write": true, 00:14:40.881 "unmap": true, 00:14:40.881 "write_zeroes": true, 00:14:40.881 "flush": true, 00:14:40.881 "reset": true, 00:14:40.881 "compare": false, 00:14:40.881 "compare_and_write": false, 00:14:40.881 "abort": true, 00:14:40.881 "nvme_admin": false, 00:14:40.881 "nvme_io": false 00:14:40.881 }, 00:14:40.881 "memory_domains": [ 00:14:40.881 { 00:14:40.881 "dma_device_id": "system", 00:14:40.881 "dma_device_type": 1 00:14:40.881 }, 00:14:40.881 { 00:14:40.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.881 "dma_device_type": 2 00:14:40.881 } 00:14:40.881 ], 00:14:40.881 "driver_specific": {} 00:14:40.881 } 00:14:40.881 ] 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:14:40.881 07:25:14 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.881 07:25:14 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 130440 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 130440 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:14:40.881 07:25:14 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:40.881 07:25:14 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 130440 00:14:40.881 Running I/O for 5 seconds... 
00:14:40.881 task offset: 151816 on job bdev=EE_Dev_1 fails 00:14:40.881 00:14:40.881 Latency(us) 00:14:40.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.881 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:40.881 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:14:40.881 EE_Dev_1 : 0.00 28277.63 110.46 6426.74 0.00 362.55 147.26 674.86 00:14:40.881 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:40.881 Dev_2 : 0.00 21376.09 83.50 0.00 0.00 495.45 142.38 897.22 00:14:40.881 =================================================================================================================== 00:14:40.881 Total : 49653.72 193.96 6426.74 0.00 434.63 142.38 897.22 00:14:40.881 [2024-07-12 07:25:14.660884] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:40.881 request: 00:14:40.881 { 00:14:40.881 "method": "perform_tests", 00:14:40.881 "req_id": 1 00:14:40.881 } 00:14:40.881 Got JSON-RPC error response 00:14:40.881 response: 00:14:40.881 { 00:14:40.881 "code": -32603, 00:14:40.881 "message": "bdevperf failed with error Operation not permitted" 00:14:40.881 } 00:14:41.446 07:25:15 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:14:41.446 07:25:15 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:41.446 07:25:15 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:14:41.446 07:25:15 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:14:41.446 07:25:15 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:14:41.446 ************************************ 00:14:41.446 END TEST bdev_error 00:14:41.446 ************************************ 00:14:41.446 07:25:15 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:41.446 00:14:41.446 real 0m9.566s 00:14:41.446 user 0m9.519s 00:14:41.446 sys 0m1.076s 00:14:41.446 07:25:15 blockdev_general.bdev_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:41.446 07:25:15 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:41.446 07:25:15 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:14:41.446 07:25:15 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:41.446 07:25:15 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:41.446 07:25:15 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:41.446 ************************************ 00:14:41.446 START TEST bdev_stat 00:14:41.446 ************************************ 00:14:41.446 07:25:15 blockdev_general.bdev_stat -- common/autotest_common.sh@1121 -- # stat_test_suite '' 00:14:41.446 07:25:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:14:41.446 07:25:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=130491 00:14:41.446 07:25:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 130491' 00:14:41.446 07:25:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:14:41.446 Process Bdev IO statistics testing pid: 130491 00:14:41.446 07:25:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:14:41.446 
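(Annotation: the failed job above is the whole point of the second error test. With EE_Dev_1 injecting errors and no continue-on-error flag, perform_tests is expected to come back with a JSON-RPC error, and the NOT wrapper turns that expected non-zero exit into a pass -- visible above as es=255, reduced to 127, then to 1, then inverted. A simplified sketch of the pattern, with the bdevperf.py path abbreviated from the log:

    # Expected-failure check: the test passes only if perform_tests fails.
    if ./examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests; then
        echo "perform_tests unexpectedly succeeded" >&2
        exit 1
    fi

)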
07:25:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 130491 00:14:41.446 07:25:15 blockdev_general.bdev_stat -- common/autotest_common.sh@827 -- # '[' -z 130491 ']' 00:14:41.446 07:25:15 blockdev_general.bdev_stat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.446 07:25:15 blockdev_general.bdev_stat -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:41.446 07:25:15 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.446 07:25:15 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:41.446 07:25:15 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:41.703 [2024-07-12 07:25:15.350580] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:41.703 [2024-07-12 07:25:15.350969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130491 ] 00:14:41.703 [2024-07-12 07:25:15.504477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:41.961 [2024-07-12 07:25:15.594029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.961 [2024-07-12 07:25:15.594031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.558 07:25:16 blockdev_general.bdev_stat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:42.558 07:25:16 blockdev_general.bdev_stat -- common/autotest_common.sh@860 -- # return 0 00:14:42.558 07:25:16 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:14:42.558 07:25:16 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.558 07:25:16 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:42.558 Malloc_STAT 00:14:42.558 07:25:16 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.558 07:25:16 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:14:42.558 07:25:16 blockdev_general.bdev_stat -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_STAT 00:14:42.558 07:25:16 blockdev_general.bdev_stat -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:42.558 07:25:16 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local i 00:14:42.558 07:25:16 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:42.558 07:25:16 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:42.558 07:25:16 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:14:42.558 07:25:16 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.558 07:25:16 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:42.558 07:25:16 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.558 07:25:16 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:14:42.558 07:25:16 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.558 
07:25:16 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:42.558 [ 00:14:42.558 { 00:14:42.558 "name": "Malloc_STAT", 00:14:42.558 "aliases": [ 00:14:42.558 "98c7dbd9-e273-4367-bd26-07f2438534e1" 00:14:42.558 ], 00:14:42.558 "product_name": "Malloc disk", 00:14:42.558 "block_size": 512, 00:14:42.558 "num_blocks": 262144, 00:14:42.558 "uuid": "98c7dbd9-e273-4367-bd26-07f2438534e1", 00:14:42.558 "assigned_rate_limits": { 00:14:42.558 "rw_ios_per_sec": 0, 00:14:42.558 "rw_mbytes_per_sec": 0, 00:14:42.558 "r_mbytes_per_sec": 0, 00:14:42.558 "w_mbytes_per_sec": 0 00:14:42.558 }, 00:14:42.558 "claimed": false, 00:14:42.558 "zoned": false, 00:14:42.558 "supported_io_types": { 00:14:42.558 "read": true, 00:14:42.558 "write": true, 00:14:42.558 "unmap": true, 00:14:42.558 "write_zeroes": true, 00:14:42.558 "flush": true, 00:14:42.558 "reset": true, 00:14:42.558 "compare": false, 00:14:42.558 "compare_and_write": false, 00:14:42.559 "abort": true, 00:14:42.559 "nvme_admin": false, 00:14:42.559 "nvme_io": false 00:14:42.559 }, 00:14:42.559 "memory_domains": [ 00:14:42.559 { 00:14:42.559 "dma_device_id": "system", 00:14:42.559 "dma_device_type": 1 00:14:42.559 }, 00:14:42.559 { 00:14:42.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.559 "dma_device_type": 2 00:14:42.559 } 00:14:42.559 ], 00:14:42.559 "driver_specific": {} 00:14:42.559 } 00:14:42.559 ] 00:14:42.559 07:25:16 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.559 07:25:16 blockdev_general.bdev_stat -- common/autotest_common.sh@903 -- # return 0 00:14:42.559 07:25:16 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:14:42.559 07:25:16 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:42.559 Running I/O for 10 seconds... 
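Note: the trace above can be reproduced outside the test harness. A minimal sketch in shell, assuming a built SPDK tree at the logged path and the default /var/tmp/spdk.sock RPC socket; waiting for the socket to appear and all cleanup are omitted. Every command below also appears in the trace itself.

    SPDK=/home/vagrant/spdk_repo/spdk
    # Start bdevperf idle (-z) on cores 0-1 (-m 0x3): queue depth 256,
    # 4 KiB random reads, 10 s runtime -- the flags logged above.
    $SPDK/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' &
    # 128 MiB malloc bdev with 512 B blocks -> 262144 blocks, matching the
    # "num_blocks": 262144 reported by bdev_get_bdevs above.
    $SPDK/scripts/rpc.py bdev_malloc_create -b Malloc_STAT 128 512
    $SPDK/scripts/rpc.py bdev_wait_for_examine
    # Kick off the configured workload over JSON-RPC, as the harness does.
    $SPDK/examples/bdev/bdevperf/bdevperf.py perform_tests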
00:14:44.457 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:14:44.457 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:14:44.457 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:14:44.457 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:14:44.457 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:14:44.457 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:14:44.457 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:14:44.457 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:14:44.457 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:14:44.457 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:44.457 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.457 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:44.457 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.457 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:14:44.457 "tick_rate": 2100000000, 00:14:44.457 "ticks": 1651884668676, 00:14:44.457 "bdevs": [ 00:14:44.457 { 00:14:44.457 "name": "Malloc_STAT", 00:14:44.457 "bytes_read": 898667008, 00:14:44.457 "num_read_ops": 219395, 00:14:44.457 "bytes_written": 0, 00:14:44.457 "num_write_ops": 0, 00:14:44.457 "bytes_unmapped": 0, 00:14:44.457 "num_unmap_ops": 0, 00:14:44.457 "bytes_copied": 0, 00:14:44.457 "num_copy_ops": 0, 00:14:44.458 "read_latency_ticks": 2049974073650, 00:14:44.458 "max_read_latency_ticks": 12335432, 00:14:44.458 "min_read_latency_ticks": 410266, 00:14:44.458 "write_latency_ticks": 0, 00:14:44.458 "max_write_latency_ticks": 0, 00:14:44.458 "min_write_latency_ticks": 0, 00:14:44.458 "unmap_latency_ticks": 0, 00:14:44.458 "max_unmap_latency_ticks": 0, 00:14:44.458 "min_unmap_latency_ticks": 0, 00:14:44.458 "copy_latency_ticks": 0, 00:14:44.458 "max_copy_latency_ticks": 0, 00:14:44.458 "min_copy_latency_ticks": 0, 00:14:44.458 "io_error": {} 00:14:44.458 } 00:14:44.458 ] 00:14:44.458 }' 00:14:44.458 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=219395 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:14:44.717 "tick_rate": 2100000000, 00:14:44.717 "ticks": 1652026578044, 00:14:44.717 "name": "Malloc_STAT", 00:14:44.717 "channels": [ 00:14:44.717 { 00:14:44.717 "thread_id": 2, 00:14:44.717 "bytes_read": 461373440, 00:14:44.717 "num_read_ops": 112640, 00:14:44.717 "bytes_written": 0, 00:14:44.717 "num_write_ops": 0, 00:14:44.717 "bytes_unmapped": 0, 00:14:44.717 "num_unmap_ops": 0, 
00:14:44.717 "bytes_copied": 0, 00:14:44.717 "num_copy_ops": 0, 00:14:44.717 "read_latency_ticks": 1060056788298, 00:14:44.717 "max_read_latency_ticks": 12335432, 00:14:44.717 "min_read_latency_ticks": 6764580, 00:14:44.717 "write_latency_ticks": 0, 00:14:44.717 "max_write_latency_ticks": 0, 00:14:44.717 "min_write_latency_ticks": 0, 00:14:44.717 "unmap_latency_ticks": 0, 00:14:44.717 "max_unmap_latency_ticks": 0, 00:14:44.717 "min_unmap_latency_ticks": 0, 00:14:44.717 "copy_latency_ticks": 0, 00:14:44.717 "max_copy_latency_ticks": 0, 00:14:44.717 "min_copy_latency_ticks": 0 00:14:44.717 }, 00:14:44.717 { 00:14:44.717 "thread_id": 3, 00:14:44.717 "bytes_read": 468713472, 00:14:44.717 "num_read_ops": 114432, 00:14:44.717 "bytes_written": 0, 00:14:44.717 "num_write_ops": 0, 00:14:44.717 "bytes_unmapped": 0, 00:14:44.717 "num_unmap_ops": 0, 00:14:44.717 "bytes_copied": 0, 00:14:44.717 "num_copy_ops": 0, 00:14:44.717 "read_latency_ticks": 1061549934066, 00:14:44.717 "max_read_latency_ticks": 11942264, 00:14:44.717 "min_read_latency_ticks": 6315918, 00:14:44.717 "write_latency_ticks": 0, 00:14:44.717 "max_write_latency_ticks": 0, 00:14:44.717 "min_write_latency_ticks": 0, 00:14:44.717 "unmap_latency_ticks": 0, 00:14:44.717 "max_unmap_latency_ticks": 0, 00:14:44.717 "min_unmap_latency_ticks": 0, 00:14:44.717 "copy_latency_ticks": 0, 00:14:44.717 "max_copy_latency_ticks": 0, 00:14:44.717 "min_copy_latency_ticks": 0 00:14:44.717 } 00:14:44.717 ] 00:14:44.717 }' 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=112640 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=112640 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=114432 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=227072 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:14:44.717 "tick_rate": 2100000000, 00:14:44.717 "ticks": 1652274674232, 00:14:44.717 "bdevs": [ 00:14:44.717 { 00:14:44.717 "name": "Malloc_STAT", 00:14:44.717 "bytes_read": 985698816, 00:14:44.717 "num_read_ops": 240643, 00:14:44.717 "bytes_written": 0, 00:14:44.717 "num_write_ops": 0, 00:14:44.717 "bytes_unmapped": 0, 00:14:44.717 "num_unmap_ops": 0, 00:14:44.717 "bytes_copied": 0, 00:14:44.717 "num_copy_ops": 0, 00:14:44.717 "read_latency_ticks": 2248471974462, 00:14:44.717 "max_read_latency_ticks": 12335432, 00:14:44.717 "min_read_latency_ticks": 410266, 00:14:44.717 "write_latency_ticks": 0, 00:14:44.717 "max_write_latency_ticks": 0, 00:14:44.717 "min_write_latency_ticks": 0, 00:14:44.717 "unmap_latency_ticks": 0, 00:14:44.717 "max_unmap_latency_ticks": 0, 00:14:44.717 "min_unmap_latency_ticks": 0, 00:14:44.717 "copy_latency_ticks": 0, 00:14:44.717 "max_copy_latency_ticks": 0, 00:14:44.717 
"min_copy_latency_ticks": 0, 00:14:44.717 "io_error": {} 00:14:44.717 } 00:14:44.717 ] 00:14:44.717 }' 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=240643 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 227072 -lt 219395 ']' 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 227072 -gt 240643 ']' 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.717 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:44.717 00:14:44.717 Latency(us) 00:14:44.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.717 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:44.717 Malloc_STAT : 2.17 56977.63 222.57 0.00 0.00 4482.03 1529.17 5898.24 00:14:44.717 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:44.717 Malloc_STAT : 2.17 57920.32 226.25 0.00 0.00 4409.56 1373.14 5710.99 00:14:44.717 =================================================================================================================== 00:14:44.717 Total : 114897.94 448.82 0.00 0.00 4445.49 1373.14 5898.24 00:14:44.977 0 00:14:44.977 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.977 07:25:18 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 130491 00:14:44.977 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@946 -- # '[' -z 130491 ']' 00:14:44.977 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # kill -0 130491 00:14:44.977 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@951 -- # uname 00:14:44.977 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:44.977 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 130491 00:14:44.977 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:44.977 killing process with pid 130491 00:14:44.977 Received shutdown signal, test time was about 2.242616 seconds 00:14:44.977 00:14:44.977 Latency(us) 00:14:44.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.977 =================================================================================================================== 00:14:44.977 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:44.977 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:44.977 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 130491' 00:14:44.977 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@965 -- # kill 130491 00:14:44.977 07:25:18 blockdev_general.bdev_stat -- common/autotest_common.sh@970 -- # wait 130491 00:14:45.236 ************************************ 00:14:45.236 END TEST bdev_stat 00:14:45.236 ************************************ 00:14:45.236 07:25:19 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:14:45.236 00:14:45.236 real 0m3.800s 00:14:45.236 user 0m7.276s 00:14:45.236 sys 0m0.467s 
00:14:45.236 07:25:19 blockdev_general.bdev_stat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:45.236 07:25:19 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:45.495 07:25:19 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:14:45.495 07:25:19 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:14:45.495 07:25:19 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:14:45.495 07:25:19 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:14:45.495 07:25:19 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:45.495 07:25:19 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:45.495 07:25:19 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:14:45.495 07:25:19 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:14:45.495 07:25:19 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:14:45.495 07:25:19 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:14:45.495 00:14:45.495 real 1m59.914s 00:14:45.495 user 5m13.968s 00:14:45.495 sys 0m24.957s 00:14:45.495 07:25:19 blockdev_general -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:45.495 ************************************ 00:14:45.495 07:25:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:45.495 END TEST blockdev_general 00:14:45.495 ************************************ 00:14:45.495 07:25:19 -- spdk/autotest.sh@190 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:45.495 07:25:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:45.495 07:25:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:45.495 07:25:19 -- common/autotest_common.sh@10 -- # set +x 00:14:45.495 ************************************ 00:14:45.495 START TEST bdev_raid 00:14:45.495 ************************************ 00:14:45.495 07:25:19 bdev_raid -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:45.495 * Looking for test storage... 
00:14:45.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:45.495 07:25:19 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:45.495 07:25:19 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:14:45.495 07:25:19 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:14:45.495 07:25:19 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:14:45.495 07:25:19 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:14:45.495 07:25:19 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:14:45.495 07:25:19 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:14:45.495 07:25:19 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' Linux = Linux ']' 00:14:45.495 07:25:19 bdev_raid -- bdev/bdev_raid.sh@856 -- # modprobe -n nbd 00:14:45.495 07:25:19 bdev_raid -- bdev/bdev_raid.sh@857 -- # has_nbd=true 00:14:45.495 07:25:19 bdev_raid -- bdev/bdev_raid.sh@858 -- # modprobe nbd 00:14:45.495 07:25:19 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:14:45.495 07:25:19 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:45.495 07:25:19 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:45.495 07:25:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:45.495 ************************************ 00:14:45.495 START TEST raid_function_test_raid0 00:14:45.495 ************************************ 00:14:45.495 07:25:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1121 -- # raid_function_test raid0 00:14:45.495 07:25:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:14:45.495 07:25:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:14:45.495 07:25:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:14:45.495 07:25:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=130640 00:14:45.496 07:25:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 130640' 00:14:45.496 Process raid pid: 130640 00:14:45.496 07:25:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:45.496 07:25:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 130640 /var/tmp/spdk-raid.sock 00:14:45.496 07:25:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@827 -- # '[' -z 130640 ']' 00:14:45.496 07:25:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:45.496 07:25:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:45.496 07:25:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:45.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
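Note: the raid suite runs against the lightweight bdev_svc app on a dedicated socket rather than bdevperf. Below is a sketch of that bring-up plus the array construction that configure_raid_bdev performs next; the harness batches its RPCs through a temporary rpcs.txt whose contents are cat'd silently, so the three RPC calls are a reconstruction (an assumption), not a quote from the log. Sizes are chosen to match the "blockcnt 131072, blocklen 512" printed when the array comes up (2 x 32 MiB / 512 B).

    SPDK=/home/vagrant/spdk_repo/spdk
    # -i 0 sets the shared-memory instance id; -L bdev_raid enables the
    # *DEBUG* bdev_raid log flag whose output appears throughout this suite.
    $SPDK/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    rpc_py="$SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc_py bdev_malloc_create 32 512 -b Base_1    # 32 MiB, 512 B blocks
    $rpc_py bdev_malloc_create 32 512 -b Base_2
    # Strip size (-z, in KiB) is an assumed value for this sketch.
    $rpc_py bdev_raid_create -n raid -r raid0 -z 64 -b "Base_1 Base_2"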
00:14:45.496 07:25:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:45.496 07:25:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:14:45.754 [2024-07-12 07:25:19.438753] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:45.754 [2024-07-12 07:25:19.439273] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.754 [2024-07-12 07:25:19.595162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.014 [2024-07-12 07:25:19.675701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.014 [2024-07-12 07:25:19.754576] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:46.581 07:25:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:46.582 07:25:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # return 0 00:14:46.582 07:25:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:14:46.582 07:25:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:14:46.582 07:25:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:46.582 07:25:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:14:46.582 07:25:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:46.839 [2024-07-12 07:25:20.706574] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:46.839 [2024-07-12 07:25:20.709288] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:46.839 [2024-07-12 07:25:20.709483] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:46.839 [2024-07-12 07:25:20.709602] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:46.839 [2024-07-12 07:25:20.709820] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:14:46.839 [2024-07-12 07:25:20.710291] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:46.839 [2024-07-12 07:25:20.710405] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006080 00:14:46.839 [2024-07-12 07:25:20.710692] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.839 Base_1 00:14:46.839 Base_2 00:14:47.099 07:25:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:47.099 07:25:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:14:47.099 07:25:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:47.357 07:25:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:14:47.357 07:25:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:14:47.357 07:25:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks 
/var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:47.357 07:25:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:47.357 07:25:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:47.357 07:25:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:47.357 07:25:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:47.357 07:25:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:47.357 07:25:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:14:47.357 07:25:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:47.357 07:25:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:47.357 07:25:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:47.357 [2024-07-12 07:25:21.158808] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:14:47.357 /dev/nbd0 00:14:47.357 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:47.357 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:47.357 07:25:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:14:47.357 07:25:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@865 -- # local i 00:14:47.357 07:25:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:47.357 07:25:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:47.357 07:25:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:14:47.357 07:25:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # break 00:14:47.357 07:25:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:47.357 07:25:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:47.357 07:25:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:47.357 1+0 records in 00:14:47.357 1+0 records out 00:14:47.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042313 s, 9.7 MB/s 00:14:47.357 07:25:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.357 07:25:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # size=4096 00:14:47.357 07:25:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.357 07:25:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:47.357 07:25:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # return 0 00:14:47.357 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:47.357 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:47.357 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count 
/var/tmp/spdk-raid.sock 00:14:47.357 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:47.357 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:47.615 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:47.615 { 00:14:47.615 "nbd_device": "/dev/nbd0", 00:14:47.615 "bdev_name": "raid" 00:14:47.615 } 00:14:47.615 ]' 00:14:47.615 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:47.615 { 00:14:47.615 "nbd_device": "/dev/nbd0", 00:14:47.615 "bdev_name": "raid" 00:14:47.615 } 00:14:47.615 ]' 00:14:47.615 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom 
of=/raidtest/raidrandtest bs=512 count=4096 00:14:47.873 4096+0 records in 00:14:47.873 4096+0 records out 00:14:47.873 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.026882 s, 78.0 MB/s 00:14:47.873 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:48.131 4096+0 records in 00:14:48.131 4096+0 records out 00:14:48.131 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.277744 s, 7.6 MB/s 00:14:48.131 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:14:48.131 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:48.131 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:14:48.131 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:48.131 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:14:48.131 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:14:48.131 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:48.131 128+0 records in 00:14:48.131 128+0 records out 00:14:48.131 65536 bytes (66 kB, 64 KiB) copied, 0.00126578 s, 51.8 MB/s 00:14:48.131 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:48.131 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:48.131 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:48.131 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:48.131 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:48.131 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:14:48.131 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:14:48.131 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:48.131 2035+0 records in 00:14:48.131 2035+0 records out 00:14:48.131 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0078323 s, 133 MB/s 00:14:48.131 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:48.131 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:48.131 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:48.131 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:48.132 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:48.132 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:14:48.132 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:14:48.132 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:48.132 456+0 records in 00:14:48.132 456+0 records out 00:14:48.132 233472 bytes (233 
kB, 228 KiB) copied, 0.00123588 s, 189 MB/s 00:14:48.132 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:48.132 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:48.132 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:48.132 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:48.132 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:48.132 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:14:48.132 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:48.132 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:48.132 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:48.132 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:48.132 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:14:48.132 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:48.132 07:25:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:48.390 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:48.390 [2024-07-12 07:25:22.154787] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.390 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:48.390 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:48.390 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:48.390 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:48.390 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:48.390 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:14:48.390 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:14:48.390 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:48.390 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:48.390 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep 
-c /dev/nbd 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 130640 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@946 -- # '[' -z 130640 ']' 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # kill -0 130640 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@951 -- # uname 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 130640 00:14:48.648 killing process with pid 130640 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 130640' 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@965 -- # kill 130640 00:14:48.648 [2024-07-12 07:25:22.462083] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:48.648 07:25:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # wait 130640 00:14:48.648 [2024-07-12 07:25:22.462255] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.648 [2024-07-12 07:25:22.462327] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.648 [2024-07-12 07:25:22.462338] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name raid, state offline 00:14:48.648 [2024-07-12 07:25:22.503512] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:49.238 ************************************ 00:14:49.238 END TEST raid_function_test_raid0 00:14:49.238 ************************************ 00:14:49.238 07:25:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:14:49.238 00:14:49.238 real 0m3.541s 00:14:49.238 user 0m4.514s 00:14:49.238 sys 0m1.180s 00:14:49.238 07:25:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:49.238 07:25:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:14:49.238 07:25:22 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_function_test_concat raid_function_test concat 00:14:49.238 07:25:22 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:49.238 07:25:22 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:49.238 07:25:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:49.238 ************************************ 00:14:49.238 START TEST raid_function_test_concat 00:14:49.238 ************************************ 00:14:49.238 07:25:22 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@1121 -- # raid_function_test concat 00:14:49.238 07:25:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:14:49.238 07:25:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:14:49.238 07:25:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:14:49.238 07:25:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=130791 00:14:49.238 07:25:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 130791' 00:14:49.238 Process raid pid: 130791 00:14:49.238 07:25:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:49.238 07:25:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 130791 /var/tmp/spdk-raid.sock 00:14:49.238 07:25:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@827 -- # '[' -z 130791 ']' 00:14:49.238 07:25:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:49.238 07:25:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:49.238 07:25:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:49.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:49.238 07:25:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:49.238 07:25:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:14:49.238 [2024-07-12 07:25:23.039714] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
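Note: both function tests exercise the array through the kernel rather than through SPDK user-space I/O: nbd_start_disk exports the raid bdev as /dev/nbd0 (the nbd module was modprobe'd at suite start), ordinary block tools (dd, blkdiscard, blockdev, cmp) then drive it, and nbd_stop_disk tears it down. Both RPCs are visible in the nbd_common.sh trace; a two-line sketch, reusing the rpc_py alias from the note above:

    $rpc_py nbd_start_disk raid /dev/nbd0   # export bdev "raid" as a kernel block device
    $rpc_py nbd_stop_disk /dev/nbd0         # disconnect once the data checks finish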
00:14:49.238 [2024-07-12 07:25:23.040171] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.496 [2024-07-12 07:25:23.188536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.496 [2024-07-12 07:25:23.259946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.496 [2024-07-12 07:25:23.339061] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.062 07:25:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:50.062 07:25:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # return 0 00:14:50.062 07:25:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:14:50.062 07:25:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:14:50.062 07:25:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:50.062 07:25:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:14:50.062 07:25:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:50.628 [2024-07-12 07:25:24.234822] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:50.628 [2024-07-12 07:25:24.237639] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:50.628 [2024-07-12 07:25:24.237828] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:50.628 [2024-07-12 07:25:24.237954] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:50.628 [2024-07-12 07:25:24.238172] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:14:50.628 [2024-07-12 07:25:24.238636] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:50.628 [2024-07-12 07:25:24.238648] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006080 00:14:50.628 [2024-07-12 07:25:24.238839] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.628 Base_1 00:14:50.628 Base_2 00:14:50.628 07:25:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:50.628 07:25:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:50.628 07:25:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:14:50.628 07:25:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:14:50.628 07:25:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:14:50.628 07:25:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:50.628 07:25:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:50.628 07:25:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:50.628 07:25:24 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:50.628 07:25:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:50.628 07:25:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:50.628 07:25:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:14:50.628 07:25:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:50.628 07:25:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:50.628 07:25:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:50.886 [2024-07-12 07:25:24.735009] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:14:50.886 /dev/nbd0 00:14:50.886 07:25:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:51.144 07:25:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:51.144 07:25:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:14:51.144 07:25:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@865 -- # local i 00:14:51.144 07:25:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:14:51.144 07:25:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:14:51.144 07:25:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:14:51.144 07:25:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # break 00:14:51.144 07:25:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:14:51.144 07:25:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:14:51.144 07:25:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:51.144 1+0 records in 00:14:51.144 1+0 records out 00:14:51.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524179 s, 7.8 MB/s 00:14:51.144 07:25:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.144 07:25:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # size=4096 00:14:51.144 07:25:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.144 07:25:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:14:51.144 07:25:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # return 0 00:14:51.144 07:25:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:51.144 07:25:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:51.144 07:25:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:51.144 07:25:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:51.144 07:25:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:51.403 { 00:14:51.403 "nbd_device": "/dev/nbd0", 00:14:51.403 "bdev_name": "raid" 00:14:51.403 } 00:14:51.403 ]' 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:51.403 { 00:14:51.403 "nbd_device": "/dev/nbd0", 00:14:51.403 "bdev_name": "raid" 00:14:51.403 } 00:14:51.403 ]' 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:14:51.403 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:14:51.403 4096+0 records in 00:14:51.403 4096+0 records out 00:14:51.403 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0249997 s, 83.9 MB/s 00:14:51.403 07:25:25 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:51.661 4096+0 records in 00:14:51.661 4096+0 records out 00:14:51.661 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.261439 s, 8.0 MB/s 00:14:51.661 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:14:51.661 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:51.661 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:14:51.661 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:51.661 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:14:51.661 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:14:51.661 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:51.661 128+0 records in 00:14:51.661 128+0 records out 00:14:51.661 65536 bytes (66 kB, 64 KiB) copied, 0.000646844 s, 101 MB/s 00:14:51.661 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:51.662 2035+0 records in 00:14:51.662 2035+0 records out 00:14:51.662 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00952545 s, 109 MB/s 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:51.662 456+0 records in 00:14:51.662 456+0 records out 00:14:51.662 233472 bytes (233 kB, 228 KiB) copied, 0.00313185 s, 74.5 MB/s 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 
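Note: each discard round in this trace follows the same three steps, and the offsets are plain block arithmetic: unmap_off = blk_off * blksize and unmap_len = blk_num * blksize (this round: 321 * 512 = 164352 and 456 * 512 = 233472). A condensed sketch of one round; every command appears verbatim in the trace. The final cmp encodes the test's expectation that unmapped regions of the array read back as zeroes.

    nbd=/dev/nbd0; ref=/raidtest/raidrandtest; blksize=512
    blk_off=321; blk_num=456
    # Zero the region in the reference file, discard the same byte range on the
    # exported array, flush the kernel block cache, then require equality.
    dd if=/dev/zero of=$ref bs=$blksize seek=$blk_off count=$blk_num conv=notrunc
    blkdiscard -o $((blk_off * blksize)) -l $((blk_num * blksize)) $nbd
    blockdev --flushbufs $nbd
    cmp -b -n 2097152 $ref $nbd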
00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:51.662 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:51.920 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:14:51.920 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:51.920 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:52.178 [2024-07-12 07:25:25.822953] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.178 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:52.178 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:52.178 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:52.178 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:52.178 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:52.178 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:52.178 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:14:52.178 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:14:52.178 07:25:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:52.178 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:52.178 07:25:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:52.437 07:25:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:52.437 07:25:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:52.437 07:25:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:52.437 07:25:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:52.437 07:25:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:14:52.437 07:25:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:52.437 07:25:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:14:52.437 07:25:26 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:14:52.437 07:25:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:14:52.437 07:25:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:14:52.437 07:25:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:14:52.437 07:25:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 130791 00:14:52.437 07:25:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@946 -- # '[' -z 130791 ']' 00:14:52.437 07:25:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # kill -0 130791 00:14:52.437 07:25:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@951 -- # uname 00:14:52.437 07:25:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:52.437 07:25:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 130791 00:14:52.437 killing process with pid 130791 00:14:52.437 07:25:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:52.437 07:25:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:52.437 07:25:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 130791' 00:14:52.437 07:25:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@965 -- # kill 130791 00:14:52.437 [2024-07-12 07:25:26.163481] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:52.437 07:25:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # wait 130791 00:14:52.437 [2024-07-12 07:25:26.163624] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.437 [2024-07-12 07:25:26.163692] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.437 [2024-07-12 07:25:26.163703] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name raid, state offline 00:14:52.437 [2024-07-12 07:25:26.204147] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:53.006 07:25:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:14:53.006 00:14:53.006 real 0m3.631s 00:14:53.006 user 0m4.768s 00:14:53.006 sys 0m1.100s 00:14:53.006 07:25:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:53.006 07:25:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:14:53.006 ************************************ 00:14:53.006 END TEST raid_function_test_concat 00:14:53.006 ************************************ 00:14:53.006 07:25:26 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:14:53.006 07:25:26 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:53.006 07:25:26 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:53.006 07:25:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:53.006 ************************************ 00:14:53.006 START TEST raid0_resize_test 00:14:53.006 ************************************ 00:14:53.006 07:25:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1121 -- # raid0_resize_test 00:14:53.006 07:25:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local 
blksize=512 00:14:53.006 07:25:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:14:53.006 07:25:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:14:53.006 07:25:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:14:53.006 07:25:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:14:53.007 07:25:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:14:53.007 07:25:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=130936 00:14:53.007 07:25:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 130936' 00:14:53.007 Process raid pid: 130936 00:14:53.007 07:25:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 130936 /var/tmp/spdk-raid.sock 00:14:53.007 07:25:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:53.007 07:25:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@827 -- # '[' -z 130936 ']' 00:14:53.007 07:25:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:53.007 07:25:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:53.007 07:25:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:53.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:53.007 07:25:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:53.007 07:25:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.007 [2024-07-12 07:25:26.757070] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
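raid0_resize_test drives everything over the RPC socket once this bdev_svc instance is up: it creates two 32 MiB null bdevs with 512-byte blocks, builds a raid0 volume named Raid on them with a 64 KiB strip, then grows each base bdev to 64 MiB and checks the raid bdev's num_blocks. The sequence below is a condensed sketch of the RPC calls from the trace that follows; note the raid size must stay at 131072 blocks until both bases have grown:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_null_create Base_1 32 512
  $rpc bdev_null_create Base_2 32 512
  $rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
  $rpc bdev_null_resize Base_1 64                      # raid still 131072 blocks (64 MiB)
  $rpc bdev_null_resize Base_2 64                      # raid re-sized to 262144 blocks (128 MiB)
  $rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'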
00:14:53.007 [2024-07-12 07:25:26.757632] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.266 [2024-07-12 07:25:26.911076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.266 [2024-07-12 07:25:27.002292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.266 [2024-07-12 07:25:27.082243] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.834 07:25:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:53.834 07:25:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # return 0 00:14:53.834 07:25:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:14:54.093 Base_1 00:14:54.093 07:25:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:14:54.352 Base_2 00:14:54.352 07:25:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:14:54.638 [2024-07-12 07:25:28.247471] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:54.638 [2024-07-12 07:25:28.250303] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:54.638 [2024-07-12 07:25:28.250496] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:54.638 [2024-07-12 07:25:28.250576] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:54.638 [2024-07-12 07:25:28.250830] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001de0 00:14:54.638 [2024-07-12 07:25:28.251440] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:54.638 [2024-07-12 07:25:28.251554] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000006080 00:14:54.638 [2024-07-12 07:25:28.251902] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.638 07:25:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:14:54.638 [2024-07-12 07:25:28.436004] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:54.638 [2024-07-12 07:25:28.436299] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:14:54.638 true 00:14:54.638 07:25:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:54.638 07:25:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:14:54.896 [2024-07-12 07:25:28.680069] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:54.896 07:25:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:14:54.896 07:25:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:14:54.896 07:25:28 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:14:54.896 07:25:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:14:55.155 [2024-07-12 07:25:28.872006] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:55.155 [2024-07-12 07:25:28.872244] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:14:55.155 [2024-07-12 07:25:28.872371] bdev_raid.c:2289:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:14:55.155 true 00:14:55.155 07:25:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:55.155 07:25:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:14:55.413 [2024-07-12 07:25:29.064103] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.413 07:25:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:14:55.413 07:25:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:14:55.413 07:25:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:14:55.413 07:25:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 130936 00:14:55.413 07:25:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@946 -- # '[' -z 130936 ']' 00:14:55.413 07:25:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # kill -0 130936 00:14:55.413 07:25:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@951 -- # uname 00:14:55.413 07:25:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:55.413 07:25:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 130936 00:14:55.413 killing process with pid 130936 00:14:55.413 07:25:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:55.413 07:25:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:55.413 07:25:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 130936' 00:14:55.413 07:25:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@965 -- # kill 130936 00:14:55.413 [2024-07-12 07:25:29.115941] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:55.413 07:25:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # wait 130936 00:14:55.413 [2024-07-12 07:25:29.116068] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.413 [2024-07-12 07:25:29.116140] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.413 [2024-07-12 07:25:29.116150] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Raid, state offline 00:14:55.413 [2024-07-12 07:25:29.116712] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:55.671 07:25:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:14:55.671 00:14:55.671 real 0m2.835s 00:14:55.671 user 0m4.066s 00:14:55.671 sys 0m0.624s 00:14:55.671 07:25:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:55.671 07:25:29 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.671 ************************************ 00:14:55.671 END TEST raid0_resize_test 00:14:55.671 ************************************ 00:14:55.930 07:25:29 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:14:55.930 07:25:29 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:55.930 07:25:29 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:14:55.930 07:25:29 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:55.930 07:25:29 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:55.930 07:25:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:55.930 ************************************ 00:14:55.930 START TEST raid_state_function_test 00:14:55.930 ************************************ 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 2 false 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 
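The locals above pin the configuration (raid0, two base bdevs, no superblock); raid_state_function_test then starts another bdev_svc and walks Existed_Raid through its state machine: created while neither base bdev exists it stays "configuring", it goes "online" once the malloc bdevs BaseBdev1 and BaseBdev2 are claimed, and deleting a base bdev drops the non-redundant raid0 to "offline". Each transition is asserted by parsing bdev_raid_get_bdevs output; the .state extraction below is a condensed sketch of the jq filters visible in the trace, not the script's literal code:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  state=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state')
  [ "$state" = configuring ]   # later: "online" after both bases are added, "offline" after one is removed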
00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=131018 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 131018' 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:55.930 Process raid pid: 131018 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 131018 /var/tmp/spdk-raid.sock 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 131018 ']' 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:55.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:55.930 07:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.930 [2024-07-12 07:25:29.650853] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:55.930 [2024-07-12 07:25:29.651233] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.930 [2024-07-12 07:25:29.793434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.190 [2024-07-12 07:25:29.872538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.190 [2024-07-12 07:25:29.951449] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.759 07:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:56.759 07:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:14:56.759 07:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:57.017 [2024-07-12 07:25:30.779610] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:57.017 [2024-07-12 07:25:30.779870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:57.017 [2024-07-12 07:25:30.779958] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:57.017 [2024-07-12 07:25:30.780049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:57.017 07:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:57.017 07:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:57.017 07:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:57.017 07:25:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:57.017 07:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:57.017 07:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:57.017 07:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:57.017 07:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:57.017 07:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:57.017 07:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:57.017 07:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.017 07:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.276 07:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:57.276 "name": "Existed_Raid", 00:14:57.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.276 "strip_size_kb": 64, 00:14:57.276 "state": "configuring", 00:14:57.276 "raid_level": "raid0", 00:14:57.276 "superblock": false, 00:14:57.276 "num_base_bdevs": 2, 00:14:57.276 "num_base_bdevs_discovered": 0, 00:14:57.276 "num_base_bdevs_operational": 2, 00:14:57.276 "base_bdevs_list": [ 00:14:57.276 { 00:14:57.276 "name": "BaseBdev1", 00:14:57.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.276 "is_configured": false, 00:14:57.276 "data_offset": 0, 00:14:57.276 "data_size": 0 00:14:57.276 }, 00:14:57.276 { 00:14:57.276 "name": "BaseBdev2", 00:14:57.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.276 "is_configured": false, 00:14:57.276 "data_offset": 0, 00:14:57.276 "data_size": 0 00:14:57.276 } 00:14:57.276 ] 00:14:57.276 }' 00:14:57.276 07:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:57.276 07:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.843 07:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:58.103 [2024-07-12 07:25:31.735637] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:58.103 [2024-07-12 07:25:31.735843] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:14:58.103 07:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:58.360 [2024-07-12 07:25:32.007701] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:58.360 [2024-07-12 07:25:32.007963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:58.360 [2024-07-12 07:25:32.008078] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:58.360 [2024-07-12 07:25:32.008146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:58.360 07:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:58.360 [2024-07-12 07:25:32.235721] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:58.360 BaseBdev1 00:14:58.617 07:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:58.617 07:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:58.617 07:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:58.617 07:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:58.617 07:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:58.617 07:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:58.617 07:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:58.875 07:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:58.875 [ 00:14:58.875 { 00:14:58.875 "name": "BaseBdev1", 00:14:58.875 "aliases": [ 00:14:58.875 "c46bc2fc-523c-4a7b-af06-f871e2245775" 00:14:58.875 ], 00:14:58.875 "product_name": "Malloc disk", 00:14:58.875 "block_size": 512, 00:14:58.875 "num_blocks": 65536, 00:14:58.875 "uuid": "c46bc2fc-523c-4a7b-af06-f871e2245775", 00:14:58.875 "assigned_rate_limits": { 00:14:58.875 "rw_ios_per_sec": 0, 00:14:58.875 "rw_mbytes_per_sec": 0, 00:14:58.875 "r_mbytes_per_sec": 0, 00:14:58.875 "w_mbytes_per_sec": 0 00:14:58.875 }, 00:14:58.875 "claimed": true, 00:14:58.875 "claim_type": "exclusive_write", 00:14:58.875 "zoned": false, 00:14:58.875 "supported_io_types": { 00:14:58.875 "read": true, 00:14:58.875 "write": true, 00:14:58.875 "unmap": true, 00:14:58.875 "write_zeroes": true, 00:14:58.875 "flush": true, 00:14:58.875 "reset": true, 00:14:58.875 "compare": false, 00:14:58.875 "compare_and_write": false, 00:14:58.875 "abort": true, 00:14:58.875 "nvme_admin": false, 00:14:58.875 "nvme_io": false 00:14:58.875 }, 00:14:58.875 "memory_domains": [ 00:14:58.875 { 00:14:58.875 "dma_device_id": "system", 00:14:58.875 "dma_device_type": 1 00:14:58.875 }, 00:14:58.875 { 00:14:58.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.875 "dma_device_type": 2 00:14:58.875 } 00:14:58.875 ], 00:14:58.875 "driver_specific": {} 00:14:58.875 } 00:14:58.875 ] 00:14:58.875 07:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:58.875 07:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:58.875 07:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:58.875 07:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:58.875 07:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:58.875 07:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:58.875 07:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:58.875 07:25:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:58.875 07:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:58.875 07:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:58.875 07:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:58.875 07:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.875 07:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.133 07:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:59.133 "name": "Existed_Raid", 00:14:59.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.133 "strip_size_kb": 64, 00:14:59.133 "state": "configuring", 00:14:59.133 "raid_level": "raid0", 00:14:59.133 "superblock": false, 00:14:59.133 "num_base_bdevs": 2, 00:14:59.133 "num_base_bdevs_discovered": 1, 00:14:59.133 "num_base_bdevs_operational": 2, 00:14:59.133 "base_bdevs_list": [ 00:14:59.133 { 00:14:59.133 "name": "BaseBdev1", 00:14:59.133 "uuid": "c46bc2fc-523c-4a7b-af06-f871e2245775", 00:14:59.133 "is_configured": true, 00:14:59.133 "data_offset": 0, 00:14:59.133 "data_size": 65536 00:14:59.133 }, 00:14:59.133 { 00:14:59.133 "name": "BaseBdev2", 00:14:59.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.133 "is_configured": false, 00:14:59.133 "data_offset": 0, 00:14:59.133 "data_size": 0 00:14:59.133 } 00:14:59.133 ] 00:14:59.133 }' 00:14:59.133 07:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:59.133 07:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.722 07:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:59.982 [2024-07-12 07:25:33.860098] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:59.982 [2024-07-12 07:25:33.860451] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:00.241 07:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:00.241 [2024-07-12 07:25:34.060204] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.241 [2024-07-12 07:25:34.062935] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.241 [2024-07-12 07:25:34.063127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.241 07:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:00.241 07:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:00.241 07:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:00.241 07:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:00.241 07:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:15:00.241 07:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:00.241 07:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:00.241 07:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:00.241 07:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:00.241 07:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:00.241 07:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:00.241 07:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:00.241 07:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.241 07:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.500 07:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:00.500 "name": "Existed_Raid", 00:15:00.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.500 "strip_size_kb": 64, 00:15:00.500 "state": "configuring", 00:15:00.500 "raid_level": "raid0", 00:15:00.500 "superblock": false, 00:15:00.500 "num_base_bdevs": 2, 00:15:00.500 "num_base_bdevs_discovered": 1, 00:15:00.500 "num_base_bdevs_operational": 2, 00:15:00.500 "base_bdevs_list": [ 00:15:00.500 { 00:15:00.500 "name": "BaseBdev1", 00:15:00.500 "uuid": "c46bc2fc-523c-4a7b-af06-f871e2245775", 00:15:00.500 "is_configured": true, 00:15:00.500 "data_offset": 0, 00:15:00.500 "data_size": 65536 00:15:00.500 }, 00:15:00.500 { 00:15:00.500 "name": "BaseBdev2", 00:15:00.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.500 "is_configured": false, 00:15:00.500 "data_offset": 0, 00:15:00.500 "data_size": 0 00:15:00.500 } 00:15:00.500 ] 00:15:00.500 }' 00:15:00.500 07:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:00.500 07:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.065 07:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:01.324 [2024-07-12 07:25:35.095194] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:01.324 [2024-07-12 07:25:35.095530] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:15:01.324 [2024-07-12 07:25:35.095595] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:01.324 [2024-07-12 07:25:35.095944] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:15:01.324 [2024-07-12 07:25:35.096650] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:15:01.324 [2024-07-12 07:25:35.096809] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:15:01.324 [2024-07-12 07:25:35.097330] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.324 BaseBdev2 00:15:01.324 07:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:01.324 
07:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:01.324 07:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:01.324 07:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:01.324 07:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:01.324 07:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:01.324 07:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:01.582 07:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:01.840 [ 00:15:01.840 { 00:15:01.840 "name": "BaseBdev2", 00:15:01.840 "aliases": [ 00:15:01.840 "4861e169-edf6-4e24-8cb5-9f42aaab5917" 00:15:01.840 ], 00:15:01.840 "product_name": "Malloc disk", 00:15:01.840 "block_size": 512, 00:15:01.840 "num_blocks": 65536, 00:15:01.840 "uuid": "4861e169-edf6-4e24-8cb5-9f42aaab5917", 00:15:01.840 "assigned_rate_limits": { 00:15:01.840 "rw_ios_per_sec": 0, 00:15:01.840 "rw_mbytes_per_sec": 0, 00:15:01.840 "r_mbytes_per_sec": 0, 00:15:01.840 "w_mbytes_per_sec": 0 00:15:01.840 }, 00:15:01.840 "claimed": true, 00:15:01.840 "claim_type": "exclusive_write", 00:15:01.840 "zoned": false, 00:15:01.840 "supported_io_types": { 00:15:01.840 "read": true, 00:15:01.840 "write": true, 00:15:01.840 "unmap": true, 00:15:01.840 "write_zeroes": true, 00:15:01.841 "flush": true, 00:15:01.841 "reset": true, 00:15:01.841 "compare": false, 00:15:01.841 "compare_and_write": false, 00:15:01.841 "abort": true, 00:15:01.841 "nvme_admin": false, 00:15:01.841 "nvme_io": false 00:15:01.841 }, 00:15:01.841 "memory_domains": [ 00:15:01.841 { 00:15:01.841 "dma_device_id": "system", 00:15:01.841 "dma_device_type": 1 00:15:01.841 }, 00:15:01.841 { 00:15:01.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.841 "dma_device_type": 2 00:15:01.841 } 00:15:01.841 ], 00:15:01.841 "driver_specific": {} 00:15:01.841 } 00:15:01.841 ] 00:15:01.841 07:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:01.841 07:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:01.841 07:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:01.841 07:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:01.841 07:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:01.841 07:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:01.841 07:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:01.841 07:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:01.841 07:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:01.841 07:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:01.841 07:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:01.841 
07:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:01.841 07:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:01.841 07:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.841 07:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.099 07:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:02.099 "name": "Existed_Raid", 00:15:02.099 "uuid": "0415b2a4-931d-44b7-99c8-c2781d9f7d8a", 00:15:02.099 "strip_size_kb": 64, 00:15:02.099 "state": "online", 00:15:02.099 "raid_level": "raid0", 00:15:02.099 "superblock": false, 00:15:02.099 "num_base_bdevs": 2, 00:15:02.099 "num_base_bdevs_discovered": 2, 00:15:02.099 "num_base_bdevs_operational": 2, 00:15:02.099 "base_bdevs_list": [ 00:15:02.099 { 00:15:02.099 "name": "BaseBdev1", 00:15:02.099 "uuid": "c46bc2fc-523c-4a7b-af06-f871e2245775", 00:15:02.099 "is_configured": true, 00:15:02.099 "data_offset": 0, 00:15:02.099 "data_size": 65536 00:15:02.099 }, 00:15:02.100 { 00:15:02.100 "name": "BaseBdev2", 00:15:02.100 "uuid": "4861e169-edf6-4e24-8cb5-9f42aaab5917", 00:15:02.100 "is_configured": true, 00:15:02.100 "data_offset": 0, 00:15:02.100 "data_size": 65536 00:15:02.100 } 00:15:02.100 ] 00:15:02.100 }' 00:15:02.100 07:25:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:02.100 07:25:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.667 07:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:02.667 07:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:02.667 07:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:02.667 07:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:02.667 07:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:02.667 07:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:02.667 07:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:02.667 07:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:02.925 [2024-07-12 07:25:36.623763] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.925 07:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:02.925 "name": "Existed_Raid", 00:15:02.925 "aliases": [ 00:15:02.925 "0415b2a4-931d-44b7-99c8-c2781d9f7d8a" 00:15:02.925 ], 00:15:02.925 "product_name": "Raid Volume", 00:15:02.925 "block_size": 512, 00:15:02.925 "num_blocks": 131072, 00:15:02.925 "uuid": "0415b2a4-931d-44b7-99c8-c2781d9f7d8a", 00:15:02.925 "assigned_rate_limits": { 00:15:02.925 "rw_ios_per_sec": 0, 00:15:02.925 "rw_mbytes_per_sec": 0, 00:15:02.925 "r_mbytes_per_sec": 0, 00:15:02.925 "w_mbytes_per_sec": 0 00:15:02.925 }, 00:15:02.925 "claimed": false, 00:15:02.925 "zoned": false, 00:15:02.925 "supported_io_types": { 00:15:02.925 "read": true, 00:15:02.925 "write": true, 00:15:02.925 
"unmap": true, 00:15:02.925 "write_zeroes": true, 00:15:02.925 "flush": true, 00:15:02.925 "reset": true, 00:15:02.925 "compare": false, 00:15:02.925 "compare_and_write": false, 00:15:02.925 "abort": false, 00:15:02.925 "nvme_admin": false, 00:15:02.925 "nvme_io": false 00:15:02.925 }, 00:15:02.925 "memory_domains": [ 00:15:02.925 { 00:15:02.925 "dma_device_id": "system", 00:15:02.925 "dma_device_type": 1 00:15:02.925 }, 00:15:02.925 { 00:15:02.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.925 "dma_device_type": 2 00:15:02.925 }, 00:15:02.925 { 00:15:02.925 "dma_device_id": "system", 00:15:02.925 "dma_device_type": 1 00:15:02.925 }, 00:15:02.925 { 00:15:02.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.925 "dma_device_type": 2 00:15:02.925 } 00:15:02.925 ], 00:15:02.925 "driver_specific": { 00:15:02.925 "raid": { 00:15:02.925 "uuid": "0415b2a4-931d-44b7-99c8-c2781d9f7d8a", 00:15:02.925 "strip_size_kb": 64, 00:15:02.925 "state": "online", 00:15:02.925 "raid_level": "raid0", 00:15:02.925 "superblock": false, 00:15:02.925 "num_base_bdevs": 2, 00:15:02.925 "num_base_bdevs_discovered": 2, 00:15:02.925 "num_base_bdevs_operational": 2, 00:15:02.925 "base_bdevs_list": [ 00:15:02.925 { 00:15:02.925 "name": "BaseBdev1", 00:15:02.925 "uuid": "c46bc2fc-523c-4a7b-af06-f871e2245775", 00:15:02.925 "is_configured": true, 00:15:02.925 "data_offset": 0, 00:15:02.925 "data_size": 65536 00:15:02.925 }, 00:15:02.925 { 00:15:02.925 "name": "BaseBdev2", 00:15:02.925 "uuid": "4861e169-edf6-4e24-8cb5-9f42aaab5917", 00:15:02.925 "is_configured": true, 00:15:02.925 "data_offset": 0, 00:15:02.925 "data_size": 65536 00:15:02.925 } 00:15:02.925 ] 00:15:02.925 } 00:15:02.925 } 00:15:02.925 }' 00:15:02.925 07:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:02.925 07:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:02.925 BaseBdev2' 00:15:02.925 07:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:02.925 07:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:02.925 07:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:03.183 07:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:03.183 "name": "BaseBdev1", 00:15:03.183 "aliases": [ 00:15:03.183 "c46bc2fc-523c-4a7b-af06-f871e2245775" 00:15:03.183 ], 00:15:03.183 "product_name": "Malloc disk", 00:15:03.183 "block_size": 512, 00:15:03.183 "num_blocks": 65536, 00:15:03.183 "uuid": "c46bc2fc-523c-4a7b-af06-f871e2245775", 00:15:03.183 "assigned_rate_limits": { 00:15:03.183 "rw_ios_per_sec": 0, 00:15:03.183 "rw_mbytes_per_sec": 0, 00:15:03.183 "r_mbytes_per_sec": 0, 00:15:03.183 "w_mbytes_per_sec": 0 00:15:03.183 }, 00:15:03.183 "claimed": true, 00:15:03.183 "claim_type": "exclusive_write", 00:15:03.183 "zoned": false, 00:15:03.183 "supported_io_types": { 00:15:03.183 "read": true, 00:15:03.183 "write": true, 00:15:03.183 "unmap": true, 00:15:03.183 "write_zeroes": true, 00:15:03.183 "flush": true, 00:15:03.183 "reset": true, 00:15:03.183 "compare": false, 00:15:03.183 "compare_and_write": false, 00:15:03.183 "abort": true, 00:15:03.183 "nvme_admin": false, 00:15:03.183 "nvme_io": false 00:15:03.183 }, 00:15:03.183 "memory_domains": [ 
00:15:03.183 { 00:15:03.183 "dma_device_id": "system", 00:15:03.183 "dma_device_type": 1 00:15:03.183 }, 00:15:03.183 { 00:15:03.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.183 "dma_device_type": 2 00:15:03.183 } 00:15:03.183 ], 00:15:03.183 "driver_specific": {} 00:15:03.183 }' 00:15:03.183 07:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.183 07:25:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.183 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:03.183 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.442 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.442 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:03.442 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.442 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.442 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:03.442 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.442 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.442 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:03.442 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:03.442 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:03.442 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:03.700 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:03.700 "name": "BaseBdev2", 00:15:03.700 "aliases": [ 00:15:03.700 "4861e169-edf6-4e24-8cb5-9f42aaab5917" 00:15:03.700 ], 00:15:03.700 "product_name": "Malloc disk", 00:15:03.700 "block_size": 512, 00:15:03.700 "num_blocks": 65536, 00:15:03.700 "uuid": "4861e169-edf6-4e24-8cb5-9f42aaab5917", 00:15:03.700 "assigned_rate_limits": { 00:15:03.700 "rw_ios_per_sec": 0, 00:15:03.700 "rw_mbytes_per_sec": 0, 00:15:03.700 "r_mbytes_per_sec": 0, 00:15:03.700 "w_mbytes_per_sec": 0 00:15:03.700 }, 00:15:03.700 "claimed": true, 00:15:03.700 "claim_type": "exclusive_write", 00:15:03.700 "zoned": false, 00:15:03.700 "supported_io_types": { 00:15:03.700 "read": true, 00:15:03.700 "write": true, 00:15:03.700 "unmap": true, 00:15:03.700 "write_zeroes": true, 00:15:03.700 "flush": true, 00:15:03.700 "reset": true, 00:15:03.700 "compare": false, 00:15:03.700 "compare_and_write": false, 00:15:03.700 "abort": true, 00:15:03.700 "nvme_admin": false, 00:15:03.700 "nvme_io": false 00:15:03.700 }, 00:15:03.700 "memory_domains": [ 00:15:03.700 { 00:15:03.700 "dma_device_id": "system", 00:15:03.700 "dma_device_type": 1 00:15:03.700 }, 00:15:03.700 { 00:15:03.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.700 "dma_device_type": 2 00:15:03.700 } 00:15:03.700 ], 00:15:03.700 "driver_specific": {} 00:15:03.700 }' 00:15:03.700 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.700 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
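verify_raid_bdev_properties (bdev_raid.sh@194-208) cross-checks the Raid Volume's geometry against each base bdev it claims: block_size, md_size, md_interleave and dif_type are pulled out of both JSON dumps with jq and compared, which is what the "[[ 512 == 512 ]]" and "[[ null == null ]]" records below are. A condensed sketch of the pattern (the script spells out one check per field rather than looping):

  for field in .block_size .md_size .md_interleave .dif_type; do
    [[ $(jq "$field" <<< "$raid_bdev_info") == $(jq "$field" <<< "$base_bdev_info") ]]
  done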
00:15:03.700 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:03.700 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.958 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.958 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:03.958 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.958 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.958 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:03.958 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.958 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.958 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:03.958 07:25:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:04.217 [2024-07-12 07:25:37.987967] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:04.217 [2024-07-12 07:25:37.988199] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:04.217 [2024-07-12 07:25:37.988431] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.217 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:04.217 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:15:04.217 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:04.217 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:04.217 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:04.217 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:04.217 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:04.217 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:04.217 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:04.217 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:04.217 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:04.217 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:04.217 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:04.217 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:04.217 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:04.217 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.217 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:04.475 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:04.475 "name": "Existed_Raid", 00:15:04.475 "uuid": "0415b2a4-931d-44b7-99c8-c2781d9f7d8a", 00:15:04.475 "strip_size_kb": 64, 00:15:04.475 "state": "offline", 00:15:04.475 "raid_level": "raid0", 00:15:04.475 "superblock": false, 00:15:04.475 "num_base_bdevs": 2, 00:15:04.475 "num_base_bdevs_discovered": 1, 00:15:04.475 "num_base_bdevs_operational": 1, 00:15:04.475 "base_bdevs_list": [ 00:15:04.475 { 00:15:04.475 "name": null, 00:15:04.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.475 "is_configured": false, 00:15:04.475 "data_offset": 0, 00:15:04.475 "data_size": 65536 00:15:04.475 }, 00:15:04.475 { 00:15:04.475 "name": "BaseBdev2", 00:15:04.475 "uuid": "4861e169-edf6-4e24-8cb5-9f42aaab5917", 00:15:04.475 "is_configured": true, 00:15:04.475 "data_offset": 0, 00:15:04.475 "data_size": 65536 00:15:04.475 } 00:15:04.475 ] 00:15:04.475 }' 00:15:04.475 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:04.475 07:25:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.040 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:05.040 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:05.040 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.040 07:25:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:05.298 07:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:05.299 07:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:05.299 07:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:05.557 [2024-07-12 07:25:39.245652] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:05.557 [2024-07-12 07:25:39.245977] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:15:05.557 07:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:05.557 07:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:05.557 07:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.557 07:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:05.816 07:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:05.816 07:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:05.816 07:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:05.816 07:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 131018 00:15:05.816 07:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 131018 ']' 00:15:05.816 07:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 131018 00:15:05.816 07:25:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:15:05.816 07:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:05.816 07:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 131018 00:15:05.816 killing process with pid 131018 00:15:05.816 07:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:05.816 07:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:05.816 07:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 131018' 00:15:05.816 07:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 131018 00:15:05.816 07:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 131018 00:15:05.816 [2024-07-12 07:25:39.560452] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:05.816 [2024-07-12 07:25:39.560560] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:06.075 ************************************ 00:15:06.075 END TEST raid_state_function_test 00:15:06.075 ************************************ 00:15:06.075 07:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:06.075 00:15:06.075 real 0m10.361s 00:15:06.075 user 0m18.156s 00:15:06.075 sys 0m1.921s 00:15:06.075 07:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:06.075 07:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.334 07:25:40 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:15:06.334 07:25:40 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:06.334 07:25:40 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:06.334 07:25:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:06.334 ************************************ 00:15:06.334 START TEST raid_state_function_test_sb 00:15:06.334 ************************************ 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 2 true 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=131382 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 131382' 00:15:06.334 Process raid pid: 131382 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 131382 /var/tmp/spdk-raid.sock 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 131382 ']' 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:06.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:06.334 07:25:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.334 [2024-07-12 07:25:40.087612] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:15:06.334 [2024-07-12 07:25:40.088048] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.593 [2024-07-12 07:25:40.236873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.593 [2024-07-12 07:25:40.321583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.593 [2024-07-12 07:25:40.400981] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.161 07:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:07.161 07:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:15:07.161 07:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:07.420 [2024-07-12 07:25:41.181061] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:07.420 [2024-07-12 07:25:41.181398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:07.420 [2024-07-12 07:25:41.181521] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:07.420 [2024-07-12 07:25:41.181581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:07.420 07:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:07.420 07:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:07.420 07:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:07.420 07:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:07.420 07:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:07.420 07:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:07.420 07:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:07.420 07:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:07.420 07:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:07.420 07:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:07.420 07:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.420 07:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.679 07:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:07.679 "name": "Existed_Raid", 00:15:07.679 "uuid": "fae6a452-b656-4e25-925a-f3cc99df6374", 00:15:07.679 "strip_size_kb": 64, 00:15:07.679 "state": "configuring", 00:15:07.679 "raid_level": "raid0", 00:15:07.679 "superblock": true, 00:15:07.679 "num_base_bdevs": 2, 00:15:07.679 "num_base_bdevs_discovered": 0, 00:15:07.679 "num_base_bdevs_operational": 2, 
00:15:07.679 "base_bdevs_list": [ 00:15:07.679 { 00:15:07.679 "name": "BaseBdev1", 00:15:07.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.679 "is_configured": false, 00:15:07.679 "data_offset": 0, 00:15:07.679 "data_size": 0 00:15:07.679 }, 00:15:07.679 { 00:15:07.679 "name": "BaseBdev2", 00:15:07.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.679 "is_configured": false, 00:15:07.679 "data_offset": 0, 00:15:07.679 "data_size": 0 00:15:07.679 } 00:15:07.679 ] 00:15:07.679 }' 00:15:07.679 07:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:07.679 07:25:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.246 07:25:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:08.505 [2024-07-12 07:25:42.153082] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:08.505 [2024-07-12 07:25:42.153298] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:15:08.505 07:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:08.505 [2024-07-12 07:25:42.345144] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:08.505 [2024-07-12 07:25:42.345529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:08.505 [2024-07-12 07:25:42.345635] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:08.505 [2024-07-12 07:25:42.345700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:08.505 07:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:08.765 [2024-07-12 07:25:42.553144] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.765 BaseBdev1 00:15:08.765 07:25:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:08.765 07:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:08.765 07:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:08.765 07:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:08.765 07:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:08.765 07:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:08.765 07:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:09.024 07:25:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:09.283 [ 00:15:09.283 { 00:15:09.283 "name": "BaseBdev1", 00:15:09.283 "aliases": [ 00:15:09.283 "4b412a83-28d8-410f-9a18-8f441f9728ee" 00:15:09.283 ], 00:15:09.283 
"product_name": "Malloc disk", 00:15:09.283 "block_size": 512, 00:15:09.283 "num_blocks": 65536, 00:15:09.283 "uuid": "4b412a83-28d8-410f-9a18-8f441f9728ee", 00:15:09.283 "assigned_rate_limits": { 00:15:09.283 "rw_ios_per_sec": 0, 00:15:09.283 "rw_mbytes_per_sec": 0, 00:15:09.283 "r_mbytes_per_sec": 0, 00:15:09.283 "w_mbytes_per_sec": 0 00:15:09.283 }, 00:15:09.283 "claimed": true, 00:15:09.283 "claim_type": "exclusive_write", 00:15:09.283 "zoned": false, 00:15:09.283 "supported_io_types": { 00:15:09.283 "read": true, 00:15:09.283 "write": true, 00:15:09.283 "unmap": true, 00:15:09.283 "write_zeroes": true, 00:15:09.283 "flush": true, 00:15:09.283 "reset": true, 00:15:09.283 "compare": false, 00:15:09.283 "compare_and_write": false, 00:15:09.283 "abort": true, 00:15:09.283 "nvme_admin": false, 00:15:09.283 "nvme_io": false 00:15:09.283 }, 00:15:09.283 "memory_domains": [ 00:15:09.283 { 00:15:09.283 "dma_device_id": "system", 00:15:09.283 "dma_device_type": 1 00:15:09.283 }, 00:15:09.283 { 00:15:09.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.283 "dma_device_type": 2 00:15:09.283 } 00:15:09.283 ], 00:15:09.283 "driver_specific": {} 00:15:09.283 } 00:15:09.283 ] 00:15:09.283 07:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:09.283 07:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:09.283 07:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:09.283 07:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:09.283 07:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:09.283 07:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:09.283 07:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:09.283 07:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:09.283 07:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:09.283 07:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:09.283 07:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:09.284 07:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.284 07:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.542 07:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:09.542 "name": "Existed_Raid", 00:15:09.542 "uuid": "18089dac-d811-468e-8be6-ebdd16734d6b", 00:15:09.542 "strip_size_kb": 64, 00:15:09.542 "state": "configuring", 00:15:09.542 "raid_level": "raid0", 00:15:09.542 "superblock": true, 00:15:09.542 "num_base_bdevs": 2, 00:15:09.542 "num_base_bdevs_discovered": 1, 00:15:09.542 "num_base_bdevs_operational": 2, 00:15:09.542 "base_bdevs_list": [ 00:15:09.542 { 00:15:09.542 "name": "BaseBdev1", 00:15:09.542 "uuid": "4b412a83-28d8-410f-9a18-8f441f9728ee", 00:15:09.542 "is_configured": true, 00:15:09.542 "data_offset": 2048, 00:15:09.542 "data_size": 63488 00:15:09.542 }, 00:15:09.542 { 
00:15:09.542 "name": "BaseBdev2", 00:15:09.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.542 "is_configured": false, 00:15:09.542 "data_offset": 0, 00:15:09.542 "data_size": 0 00:15:09.542 } 00:15:09.542 ] 00:15:09.542 }' 00:15:09.542 07:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:09.542 07:25:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.112 07:25:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:10.377 [2024-07-12 07:25:44.125548] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:10.377 [2024-07-12 07:25:44.125819] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:10.377 07:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:10.634 [2024-07-12 07:25:44.381690] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.634 [2024-07-12 07:25:44.384278] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:10.634 [2024-07-12 07:25:44.384445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:10.634 07:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:10.634 07:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:10.634 07:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:10.634 07:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:10.634 07:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:10.634 07:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:10.634 07:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:10.634 07:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:10.634 07:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:10.634 07:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:10.634 07:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:10.634 07:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:10.634 07:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.634 07:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.892 07:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:10.892 "name": "Existed_Raid", 00:15:10.892 "uuid": "a0a44a55-4182-42a6-afce-c94a98865348", 00:15:10.892 "strip_size_kb": 64, 00:15:10.893 "state": "configuring", 00:15:10.893 
"raid_level": "raid0", 00:15:10.893 "superblock": true, 00:15:10.893 "num_base_bdevs": 2, 00:15:10.893 "num_base_bdevs_discovered": 1, 00:15:10.893 "num_base_bdevs_operational": 2, 00:15:10.893 "base_bdevs_list": [ 00:15:10.893 { 00:15:10.893 "name": "BaseBdev1", 00:15:10.893 "uuid": "4b412a83-28d8-410f-9a18-8f441f9728ee", 00:15:10.893 "is_configured": true, 00:15:10.893 "data_offset": 2048, 00:15:10.893 "data_size": 63488 00:15:10.893 }, 00:15:10.893 { 00:15:10.893 "name": "BaseBdev2", 00:15:10.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.893 "is_configured": false, 00:15:10.893 "data_offset": 0, 00:15:10.893 "data_size": 0 00:15:10.893 } 00:15:10.893 ] 00:15:10.893 }' 00:15:10.893 07:25:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:10.893 07:25:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.460 07:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:11.720 [2024-07-12 07:25:45.551404] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:11.720 [2024-07-12 07:25:45.552018] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:15:11.720 [2024-07-12 07:25:45.552219] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:11.720 [2024-07-12 07:25:45.552525] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:15:11.720 BaseBdev2 00:15:11.720 [2024-07-12 07:25:45.553334] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:15:11.720 [2024-07-12 07:25:45.553357] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:15:11.720 [2024-07-12 07:25:45.553608] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.720 07:25:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:11.720 07:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:11.720 07:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:11.720 07:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:11.720 07:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:11.720 07:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:11.720 07:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:11.979 07:25:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:12.238 [ 00:15:12.238 { 00:15:12.238 "name": "BaseBdev2", 00:15:12.238 "aliases": [ 00:15:12.238 "a11a01b6-6f4d-45d9-a2a1-589c42a6da9a" 00:15:12.238 ], 00:15:12.238 "product_name": "Malloc disk", 00:15:12.238 "block_size": 512, 00:15:12.238 "num_blocks": 65536, 00:15:12.238 "uuid": "a11a01b6-6f4d-45d9-a2a1-589c42a6da9a", 00:15:12.238 "assigned_rate_limits": { 00:15:12.238 "rw_ios_per_sec": 0, 00:15:12.238 "rw_mbytes_per_sec": 0, 
00:15:12.238 "r_mbytes_per_sec": 0, 00:15:12.238 "w_mbytes_per_sec": 0 00:15:12.238 }, 00:15:12.238 "claimed": true, 00:15:12.238 "claim_type": "exclusive_write", 00:15:12.238 "zoned": false, 00:15:12.238 "supported_io_types": { 00:15:12.238 "read": true, 00:15:12.238 "write": true, 00:15:12.238 "unmap": true, 00:15:12.238 "write_zeroes": true, 00:15:12.238 "flush": true, 00:15:12.238 "reset": true, 00:15:12.238 "compare": false, 00:15:12.238 "compare_and_write": false, 00:15:12.238 "abort": true, 00:15:12.238 "nvme_admin": false, 00:15:12.238 "nvme_io": false 00:15:12.238 }, 00:15:12.238 "memory_domains": [ 00:15:12.238 { 00:15:12.238 "dma_device_id": "system", 00:15:12.238 "dma_device_type": 1 00:15:12.238 }, 00:15:12.238 { 00:15:12.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.238 "dma_device_type": 2 00:15:12.238 } 00:15:12.238 ], 00:15:12.238 "driver_specific": {} 00:15:12.238 } 00:15:12.238 ] 00:15:12.238 07:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:12.238 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:12.238 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:12.238 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:12.238 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:12.238 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:12.238 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:12.238 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:12.238 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:12.238 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:12.238 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:12.238 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:12.238 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:12.498 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.498 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.498 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:12.498 "name": "Existed_Raid", 00:15:12.498 "uuid": "a0a44a55-4182-42a6-afce-c94a98865348", 00:15:12.498 "strip_size_kb": 64, 00:15:12.498 "state": "online", 00:15:12.498 "raid_level": "raid0", 00:15:12.498 "superblock": true, 00:15:12.498 "num_base_bdevs": 2, 00:15:12.498 "num_base_bdevs_discovered": 2, 00:15:12.498 "num_base_bdevs_operational": 2, 00:15:12.498 "base_bdevs_list": [ 00:15:12.498 { 00:15:12.498 "name": "BaseBdev1", 00:15:12.498 "uuid": "4b412a83-28d8-410f-9a18-8f441f9728ee", 00:15:12.498 "is_configured": true, 00:15:12.498 "data_offset": 2048, 00:15:12.498 "data_size": 63488 00:15:12.498 }, 00:15:12.498 { 00:15:12.498 "name": "BaseBdev2", 00:15:12.498 "uuid": 
"a11a01b6-6f4d-45d9-a2a1-589c42a6da9a", 00:15:12.498 "is_configured": true, 00:15:12.498 "data_offset": 2048, 00:15:12.498 "data_size": 63488 00:15:12.498 } 00:15:12.498 ] 00:15:12.498 }' 00:15:12.498 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:12.498 07:25:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.066 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:13.066 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:13.066 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:13.066 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:13.066 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:13.066 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:13.066 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:13.066 07:25:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:13.325 [2024-07-12 07:25:47.079954] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.325 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:13.325 "name": "Existed_Raid", 00:15:13.325 "aliases": [ 00:15:13.325 "a0a44a55-4182-42a6-afce-c94a98865348" 00:15:13.325 ], 00:15:13.325 "product_name": "Raid Volume", 00:15:13.325 "block_size": 512, 00:15:13.325 "num_blocks": 126976, 00:15:13.325 "uuid": "a0a44a55-4182-42a6-afce-c94a98865348", 00:15:13.325 "assigned_rate_limits": { 00:15:13.325 "rw_ios_per_sec": 0, 00:15:13.325 "rw_mbytes_per_sec": 0, 00:15:13.325 "r_mbytes_per_sec": 0, 00:15:13.325 "w_mbytes_per_sec": 0 00:15:13.325 }, 00:15:13.325 "claimed": false, 00:15:13.325 "zoned": false, 00:15:13.325 "supported_io_types": { 00:15:13.325 "read": true, 00:15:13.325 "write": true, 00:15:13.325 "unmap": true, 00:15:13.325 "write_zeroes": true, 00:15:13.325 "flush": true, 00:15:13.325 "reset": true, 00:15:13.325 "compare": false, 00:15:13.325 "compare_and_write": false, 00:15:13.325 "abort": false, 00:15:13.325 "nvme_admin": false, 00:15:13.325 "nvme_io": false 00:15:13.325 }, 00:15:13.325 "memory_domains": [ 00:15:13.325 { 00:15:13.325 "dma_device_id": "system", 00:15:13.325 "dma_device_type": 1 00:15:13.325 }, 00:15:13.325 { 00:15:13.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.325 "dma_device_type": 2 00:15:13.325 }, 00:15:13.325 { 00:15:13.325 "dma_device_id": "system", 00:15:13.325 "dma_device_type": 1 00:15:13.325 }, 00:15:13.325 { 00:15:13.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.325 "dma_device_type": 2 00:15:13.325 } 00:15:13.325 ], 00:15:13.325 "driver_specific": { 00:15:13.325 "raid": { 00:15:13.325 "uuid": "a0a44a55-4182-42a6-afce-c94a98865348", 00:15:13.325 "strip_size_kb": 64, 00:15:13.325 "state": "online", 00:15:13.325 "raid_level": "raid0", 00:15:13.325 "superblock": true, 00:15:13.325 "num_base_bdevs": 2, 00:15:13.325 "num_base_bdevs_discovered": 2, 00:15:13.325 "num_base_bdevs_operational": 2, 00:15:13.325 "base_bdevs_list": [ 00:15:13.325 { 00:15:13.325 "name": "BaseBdev1", 00:15:13.325 "uuid": 
"4b412a83-28d8-410f-9a18-8f441f9728ee", 00:15:13.325 "is_configured": true, 00:15:13.325 "data_offset": 2048, 00:15:13.325 "data_size": 63488 00:15:13.325 }, 00:15:13.325 { 00:15:13.325 "name": "BaseBdev2", 00:15:13.325 "uuid": "a11a01b6-6f4d-45d9-a2a1-589c42a6da9a", 00:15:13.325 "is_configured": true, 00:15:13.325 "data_offset": 2048, 00:15:13.325 "data_size": 63488 00:15:13.325 } 00:15:13.325 ] 00:15:13.325 } 00:15:13.325 } 00:15:13.325 }' 00:15:13.325 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:13.325 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:13.325 BaseBdev2' 00:15:13.325 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:13.325 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:13.325 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:13.583 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:13.583 "name": "BaseBdev1", 00:15:13.583 "aliases": [ 00:15:13.583 "4b412a83-28d8-410f-9a18-8f441f9728ee" 00:15:13.583 ], 00:15:13.583 "product_name": "Malloc disk", 00:15:13.583 "block_size": 512, 00:15:13.583 "num_blocks": 65536, 00:15:13.583 "uuid": "4b412a83-28d8-410f-9a18-8f441f9728ee", 00:15:13.583 "assigned_rate_limits": { 00:15:13.583 "rw_ios_per_sec": 0, 00:15:13.583 "rw_mbytes_per_sec": 0, 00:15:13.584 "r_mbytes_per_sec": 0, 00:15:13.584 "w_mbytes_per_sec": 0 00:15:13.584 }, 00:15:13.584 "claimed": true, 00:15:13.584 "claim_type": "exclusive_write", 00:15:13.584 "zoned": false, 00:15:13.584 "supported_io_types": { 00:15:13.584 "read": true, 00:15:13.584 "write": true, 00:15:13.584 "unmap": true, 00:15:13.584 "write_zeroes": true, 00:15:13.584 "flush": true, 00:15:13.584 "reset": true, 00:15:13.584 "compare": false, 00:15:13.584 "compare_and_write": false, 00:15:13.584 "abort": true, 00:15:13.584 "nvme_admin": false, 00:15:13.584 "nvme_io": false 00:15:13.584 }, 00:15:13.584 "memory_domains": [ 00:15:13.584 { 00:15:13.584 "dma_device_id": "system", 00:15:13.584 "dma_device_type": 1 00:15:13.584 }, 00:15:13.584 { 00:15:13.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.584 "dma_device_type": 2 00:15:13.584 } 00:15:13.584 ], 00:15:13.584 "driver_specific": {} 00:15:13.584 }' 00:15:13.584 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:13.584 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:13.584 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:13.584 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:13.842 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:13.842 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:13.842 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:13.842 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:13.842 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:15:13.842 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:13.842 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:13.842 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:13.842 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:13.842 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:13.842 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:14.101 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:14.101 "name": "BaseBdev2", 00:15:14.101 "aliases": [ 00:15:14.101 "a11a01b6-6f4d-45d9-a2a1-589c42a6da9a" 00:15:14.101 ], 00:15:14.101 "product_name": "Malloc disk", 00:15:14.101 "block_size": 512, 00:15:14.101 "num_blocks": 65536, 00:15:14.101 "uuid": "a11a01b6-6f4d-45d9-a2a1-589c42a6da9a", 00:15:14.101 "assigned_rate_limits": { 00:15:14.101 "rw_ios_per_sec": 0, 00:15:14.101 "rw_mbytes_per_sec": 0, 00:15:14.101 "r_mbytes_per_sec": 0, 00:15:14.101 "w_mbytes_per_sec": 0 00:15:14.101 }, 00:15:14.101 "claimed": true, 00:15:14.101 "claim_type": "exclusive_write", 00:15:14.101 "zoned": false, 00:15:14.101 "supported_io_types": { 00:15:14.101 "read": true, 00:15:14.101 "write": true, 00:15:14.101 "unmap": true, 00:15:14.101 "write_zeroes": true, 00:15:14.101 "flush": true, 00:15:14.101 "reset": true, 00:15:14.101 "compare": false, 00:15:14.101 "compare_and_write": false, 00:15:14.101 "abort": true, 00:15:14.101 "nvme_admin": false, 00:15:14.101 "nvme_io": false 00:15:14.101 }, 00:15:14.101 "memory_domains": [ 00:15:14.101 { 00:15:14.101 "dma_device_id": "system", 00:15:14.101 "dma_device_type": 1 00:15:14.101 }, 00:15:14.101 { 00:15:14.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.101 "dma_device_type": 2 00:15:14.101 } 00:15:14.101 ], 00:15:14.101 "driver_specific": {} 00:15:14.101 }' 00:15:14.101 07:25:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:14.360 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:14.361 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:14.361 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:14.361 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:14.361 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:14.361 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:14.361 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:14.361 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:14.619 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:14.619 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:14.619 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:14.619 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:14.878 [2024-07-12 07:25:48.504135] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:14.878 [2024-07-12 07:25:48.504381] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.878 [2024-07-12 07:25:48.504628] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.878 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:14.878 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:15:14.878 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:14.878 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:14.878 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:14.878 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:14.878 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:14.878 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:14.878 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:14.878 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:14.878 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:14.878 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:14.878 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:14.878 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:14.878 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:14.878 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.878 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.137 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:15.137 "name": "Existed_Raid", 00:15:15.137 "uuid": "a0a44a55-4182-42a6-afce-c94a98865348", 00:15:15.137 "strip_size_kb": 64, 00:15:15.137 "state": "offline", 00:15:15.137 "raid_level": "raid0", 00:15:15.137 "superblock": true, 00:15:15.137 "num_base_bdevs": 2, 00:15:15.137 "num_base_bdevs_discovered": 1, 00:15:15.137 "num_base_bdevs_operational": 1, 00:15:15.137 "base_bdevs_list": [ 00:15:15.137 { 00:15:15.137 "name": null, 00:15:15.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.137 "is_configured": false, 00:15:15.137 "data_offset": 2048, 00:15:15.137 "data_size": 63488 00:15:15.137 }, 00:15:15.137 { 00:15:15.137 "name": "BaseBdev2", 00:15:15.137 "uuid": "a11a01b6-6f4d-45d9-a2a1-589c42a6da9a", 00:15:15.137 "is_configured": true, 00:15:15.137 "data_offset": 2048, 00:15:15.137 "data_size": 63488 00:15:15.137 } 00:15:15.137 ] 00:15:15.137 }' 00:15:15.137 07:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:15.137 07:25:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.705 07:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:15.705 07:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:15.705 07:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.705 07:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:15.963 07:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:15.963 07:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:15.963 07:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:16.222 [2024-07-12 07:25:49.993690] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:16.222 [2024-07-12 07:25:49.994031] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:15:16.222 07:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:16.222 07:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:16.222 07:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.222 07:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:16.481 07:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:16.481 07:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:16.481 07:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:16.481 07:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 131382 00:15:16.481 07:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 131382 ']' 00:15:16.481 07:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 131382 00:15:16.481 07:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:15:16.481 07:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:16.481 07:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 131382 00:15:16.481 07:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:16.481 07:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:16.481 07:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 131382' 00:15:16.481 killing process with pid 131382 00:15:16.481 07:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 131382 00:15:16.481 07:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 131382 00:15:16.481 [2024-07-12 07:25:50.336021] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:15:16.481 [2024-07-12 07:25:50.336144] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:17.049 ************************************ 00:15:17.049 END TEST raid_state_function_test_sb 00:15:17.049 ************************************ 00:15:17.049 07:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:17.049 00:15:17.049 real 0m10.707s 00:15:17.049 user 0m18.839s 00:15:17.049 sys 0m1.877s 00:15:17.049 07:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:17.049 07:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.049 07:25:50 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:15:17.049 07:25:50 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:15:17.049 07:25:50 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:17.049 07:25:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:17.049 ************************************ 00:15:17.049 START TEST raid_superblock_test 00:15:17.049 ************************************ 00:15:17.049 07:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 2 00:15:17.049 07:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:15:17.049 07:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:15:17.049 07:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:17.049 07:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:17.049 07:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:17.049 07:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:17.049 07:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:17.049 07:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:17.049 07:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:17.049 07:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:17.049 07:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:17.049 07:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:17.049 07:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:17.050 07:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:15:17.050 07:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:15:17.050 07:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:15:17.050 07:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=131752 00:15:17.050 07:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 131752 /var/tmp/spdk-raid.sock 00:15:17.050 07:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:17.050 07:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 131752 ']' 00:15:17.050 07:25:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:17.050 07:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:17.050 07:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:17.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:17.050 07:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:17.050 07:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.050 [2024-07-12 07:25:50.859608] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:17.050 [2024-07-12 07:25:50.860061] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131752 ] 00:15:17.309 [2024-07-12 07:25:50.997746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.309 [2024-07-12 07:25:51.074799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.309 [2024-07-12 07:25:51.154585] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.275 07:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:18.275 07:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:15:18.275 07:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:18.275 07:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:18.275 07:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:18.275 07:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:18.275 07:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:18.275 07:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:18.275 07:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:18.275 07:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:18.275 07:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:18.275 malloc1 00:15:18.275 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:18.551 [2024-07-12 07:25:52.291187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:18.551 [2024-07-12 07:25:52.291528] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.551 [2024-07-12 07:25:52.291728] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:15:18.551 [2024-07-12 07:25:52.291868] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.551 [2024-07-12 07:25:52.294995] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.551 [2024-07-12 07:25:52.295195] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:18.551 pt1 00:15:18.551 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:18.551 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:18.551 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:18.551 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:18.551 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:18.551 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:18.551 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:18.551 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:18.551 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:18.810 malloc2 00:15:18.810 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:19.069 [2024-07-12 07:25:52.695355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:19.069 [2024-07-12 07:25:52.695612] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.069 [2024-07-12 07:25:52.695692] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:19.069 [2024-07-12 07:25:52.695817] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.069 [2024-07-12 07:25:52.698697] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.069 [2024-07-12 07:25:52.698843] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:19.069 pt2 00:15:19.069 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:19.069 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:19.069 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:19.069 [2024-07-12 07:25:52.883611] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:19.069 [2024-07-12 07:25:52.886242] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:19.069 [2024-07-12 07:25:52.886568] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:15:19.069 [2024-07-12 07:25:52.886662] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:19.069 [2024-07-12 07:25:52.886865] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:15:19.069 [2024-07-12 07:25:52.887330] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:15:19.069 [2024-07-12 07:25:52.887432] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x616000006c80 00:15:19.069 [2024-07-12 07:25:52.887725] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.069 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:19.069 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:19.069 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:19.069 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:19.069 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:19.069 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:19.069 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:19.069 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:19.070 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:19.070 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:19.070 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.070 07:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.328 07:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:19.328 "name": "raid_bdev1", 00:15:19.328 "uuid": "b594c1b0-e939-4837-94c7-cd4d9d12e5b2", 00:15:19.328 "strip_size_kb": 64, 00:15:19.328 "state": "online", 00:15:19.328 "raid_level": "raid0", 00:15:19.328 "superblock": true, 00:15:19.328 "num_base_bdevs": 2, 00:15:19.328 "num_base_bdevs_discovered": 2, 00:15:19.328 "num_base_bdevs_operational": 2, 00:15:19.328 "base_bdevs_list": [ 00:15:19.328 { 00:15:19.328 "name": "pt1", 00:15:19.328 "uuid": "83ec795f-3d2c-5749-b32d-7eed6067554a", 00:15:19.328 "is_configured": true, 00:15:19.328 "data_offset": 2048, 00:15:19.328 "data_size": 63488 00:15:19.328 }, 00:15:19.328 { 00:15:19.328 "name": "pt2", 00:15:19.328 "uuid": "d17f0dd0-7e2f-5968-a7b9-a92723fc9819", 00:15:19.328 "is_configured": true, 00:15:19.328 "data_offset": 2048, 00:15:19.328 "data_size": 63488 00:15:19.328 } 00:15:19.328 ] 00:15:19.328 }' 00:15:19.328 07:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:19.328 07:25:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.894 07:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:19.895 07:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:19.895 07:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:19.895 07:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:19.895 07:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:19.895 07:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:19.895 07:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:19.895 07:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:20.154 [2024-07-12 07:25:53.916124] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:20.154 07:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:20.154 "name": "raid_bdev1", 00:15:20.154 "aliases": [ 00:15:20.154 "b594c1b0-e939-4837-94c7-cd4d9d12e5b2" 00:15:20.154 ], 00:15:20.154 "product_name": "Raid Volume", 00:15:20.154 "block_size": 512, 00:15:20.154 "num_blocks": 126976, 00:15:20.154 "uuid": "b594c1b0-e939-4837-94c7-cd4d9d12e5b2", 00:15:20.154 "assigned_rate_limits": { 00:15:20.154 "rw_ios_per_sec": 0, 00:15:20.154 "rw_mbytes_per_sec": 0, 00:15:20.154 "r_mbytes_per_sec": 0, 00:15:20.154 "w_mbytes_per_sec": 0 00:15:20.154 }, 00:15:20.154 "claimed": false, 00:15:20.154 "zoned": false, 00:15:20.154 "supported_io_types": { 00:15:20.154 "read": true, 00:15:20.154 "write": true, 00:15:20.154 "unmap": true, 00:15:20.154 "write_zeroes": true, 00:15:20.154 "flush": true, 00:15:20.154 "reset": true, 00:15:20.154 "compare": false, 00:15:20.154 "compare_and_write": false, 00:15:20.154 "abort": false, 00:15:20.154 "nvme_admin": false, 00:15:20.154 "nvme_io": false 00:15:20.154 }, 00:15:20.154 "memory_domains": [ 00:15:20.154 { 00:15:20.154 "dma_device_id": "system", 00:15:20.154 "dma_device_type": 1 00:15:20.154 }, 00:15:20.154 { 00:15:20.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.154 "dma_device_type": 2 00:15:20.154 }, 00:15:20.154 { 00:15:20.154 "dma_device_id": "system", 00:15:20.154 "dma_device_type": 1 00:15:20.154 }, 00:15:20.154 { 00:15:20.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.154 "dma_device_type": 2 00:15:20.154 } 00:15:20.154 ], 00:15:20.154 "driver_specific": { 00:15:20.154 "raid": { 00:15:20.154 "uuid": "b594c1b0-e939-4837-94c7-cd4d9d12e5b2", 00:15:20.154 "strip_size_kb": 64, 00:15:20.154 "state": "online", 00:15:20.154 "raid_level": "raid0", 00:15:20.154 "superblock": true, 00:15:20.154 "num_base_bdevs": 2, 00:15:20.154 "num_base_bdevs_discovered": 2, 00:15:20.154 "num_base_bdevs_operational": 2, 00:15:20.154 "base_bdevs_list": [ 00:15:20.154 { 00:15:20.154 "name": "pt1", 00:15:20.154 "uuid": "83ec795f-3d2c-5749-b32d-7eed6067554a", 00:15:20.154 "is_configured": true, 00:15:20.154 "data_offset": 2048, 00:15:20.154 "data_size": 63488 00:15:20.154 }, 00:15:20.154 { 00:15:20.154 "name": "pt2", 00:15:20.154 "uuid": "d17f0dd0-7e2f-5968-a7b9-a92723fc9819", 00:15:20.154 "is_configured": true, 00:15:20.154 "data_offset": 2048, 00:15:20.154 "data_size": 63488 00:15:20.154 } 00:15:20.154 ] 00:15:20.154 } 00:15:20.154 } 00:15:20.154 }' 00:15:20.154 07:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:20.154 07:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:20.154 pt2' 00:15:20.154 07:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:20.154 07:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:20.154 07:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:20.415 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:20.415 "name": "pt1", 00:15:20.415 "aliases": [ 00:15:20.415 
"83ec795f-3d2c-5749-b32d-7eed6067554a" 00:15:20.415 ], 00:15:20.415 "product_name": "passthru", 00:15:20.415 "block_size": 512, 00:15:20.415 "num_blocks": 65536, 00:15:20.415 "uuid": "83ec795f-3d2c-5749-b32d-7eed6067554a", 00:15:20.415 "assigned_rate_limits": { 00:15:20.415 "rw_ios_per_sec": 0, 00:15:20.415 "rw_mbytes_per_sec": 0, 00:15:20.415 "r_mbytes_per_sec": 0, 00:15:20.415 "w_mbytes_per_sec": 0 00:15:20.415 }, 00:15:20.415 "claimed": true, 00:15:20.415 "claim_type": "exclusive_write", 00:15:20.415 "zoned": false, 00:15:20.415 "supported_io_types": { 00:15:20.415 "read": true, 00:15:20.415 "write": true, 00:15:20.415 "unmap": true, 00:15:20.415 "write_zeroes": true, 00:15:20.415 "flush": true, 00:15:20.415 "reset": true, 00:15:20.415 "compare": false, 00:15:20.415 "compare_and_write": false, 00:15:20.415 "abort": true, 00:15:20.415 "nvme_admin": false, 00:15:20.415 "nvme_io": false 00:15:20.415 }, 00:15:20.415 "memory_domains": [ 00:15:20.415 { 00:15:20.415 "dma_device_id": "system", 00:15:20.415 "dma_device_type": 1 00:15:20.415 }, 00:15:20.415 { 00:15:20.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.415 "dma_device_type": 2 00:15:20.415 } 00:15:20.415 ], 00:15:20.415 "driver_specific": { 00:15:20.415 "passthru": { 00:15:20.415 "name": "pt1", 00:15:20.415 "base_bdev_name": "malloc1" 00:15:20.415 } 00:15:20.415 } 00:15:20.415 }' 00:15:20.415 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:20.415 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:20.415 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:20.415 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:20.415 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:20.673 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:20.673 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:20.673 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:20.673 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:20.673 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:20.673 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:20.673 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:20.673 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:20.673 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:20.673 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:20.932 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:20.932 "name": "pt2", 00:15:20.932 "aliases": [ 00:15:20.932 "d17f0dd0-7e2f-5968-a7b9-a92723fc9819" 00:15:20.932 ], 00:15:20.932 "product_name": "passthru", 00:15:20.932 "block_size": 512, 00:15:20.932 "num_blocks": 65536, 00:15:20.932 "uuid": "d17f0dd0-7e2f-5968-a7b9-a92723fc9819", 00:15:20.932 "assigned_rate_limits": { 00:15:20.932 "rw_ios_per_sec": 0, 00:15:20.932 "rw_mbytes_per_sec": 0, 00:15:20.932 "r_mbytes_per_sec": 0, 00:15:20.932 "w_mbytes_per_sec": 0 00:15:20.932 }, 00:15:20.932 "claimed": true, 
00:15:20.932 "claim_type": "exclusive_write", 00:15:20.932 "zoned": false, 00:15:20.932 "supported_io_types": { 00:15:20.932 "read": true, 00:15:20.932 "write": true, 00:15:20.932 "unmap": true, 00:15:20.932 "write_zeroes": true, 00:15:20.932 "flush": true, 00:15:20.932 "reset": true, 00:15:20.932 "compare": false, 00:15:20.932 "compare_and_write": false, 00:15:20.932 "abort": true, 00:15:20.932 "nvme_admin": false, 00:15:20.932 "nvme_io": false 00:15:20.932 }, 00:15:20.932 "memory_domains": [ 00:15:20.932 { 00:15:20.932 "dma_device_id": "system", 00:15:20.932 "dma_device_type": 1 00:15:20.932 }, 00:15:20.932 { 00:15:20.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.932 "dma_device_type": 2 00:15:20.932 } 00:15:20.932 ], 00:15:20.932 "driver_specific": { 00:15:20.932 "passthru": { 00:15:20.932 "name": "pt2", 00:15:20.932 "base_bdev_name": "malloc2" 00:15:20.932 } 00:15:20.932 } 00:15:20.932 }' 00:15:20.932 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:21.190 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:21.190 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:21.190 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:21.190 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:21.190 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:21.190 07:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:21.190 07:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:21.190 07:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:21.190 07:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:21.448 07:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:21.448 07:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:21.448 07:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:21.448 07:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:21.707 [2024-07-12 07:25:55.396367] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:21.707 07:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=b594c1b0-e939-4837-94c7-cd4d9d12e5b2 00:15:21.707 07:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z b594c1b0-e939-4837-94c7-cd4d9d12e5b2 ']' 00:15:21.707 07:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:21.965 [2024-07-12 07:25:55.592178] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:21.966 [2024-07-12 07:25:55.592395] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:21.966 [2024-07-12 07:25:55.592686] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.966 [2024-07-12 07:25:55.592847] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.966 [2024-07-12 07:25:55.592937] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:15:21.966 07:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.966 07:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:22.224 07:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:22.224 07:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:22.224 07:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:22.224 07:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:22.483 07:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:22.483 07:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:22.742 07:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:22.742 07:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:23.000 07:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:23.000 07:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:23.000 07:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:15:23.000 07:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:23.000 07:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:23.000 07:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.000 07:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:23.000 07:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.000 07:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:23.000 07:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.000 07:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:23.000 07:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:23.000 07:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:23.000 [2024-07-12 07:25:56.880377] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is 
claimed 00:15:23.000 [2024-07-12 07:25:56.882950] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:23.000 [2024-07-12 07:25:56.883162] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:23.000 [2024-07-12 07:25:56.883376] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:23.000 [2024-07-12 07:25:56.883532] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:23.000 [2024-07-12 07:25:56.883618] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:15:23.259 request: 00:15:23.259 { 00:15:23.259 "name": "raid_bdev1", 00:15:23.259 "raid_level": "raid0", 00:15:23.259 "base_bdevs": [ 00:15:23.259 "malloc1", 00:15:23.259 "malloc2" 00:15:23.259 ], 00:15:23.259 "superblock": false, 00:15:23.259 "strip_size_kb": 64, 00:15:23.259 "method": "bdev_raid_create", 00:15:23.259 "req_id": 1 00:15:23.259 } 00:15:23.259 Got JSON-RPC error response 00:15:23.259 response: 00:15:23.259 { 00:15:23.259 "code": -17, 00:15:23.259 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:23.259 } 00:15:23.259 07:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:23.259 07:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:23.259 07:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:23.259 07:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:23.259 07:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:23.259 07:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.259 07:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:23.259 07:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:23.259 07:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:23.518 [2024-07-12 07:25:57.332459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:23.518 [2024-07-12 07:25:57.332789] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.518 [2024-07-12 07:25:57.332929] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:23.518 [2024-07-12 07:25:57.333033] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.518 [2024-07-12 07:25:57.335824] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.518 [2024-07-12 07:25:57.335993] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:23.518 [2024-07-12 07:25:57.336171] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:23.518 [2024-07-12 07:25:57.336308] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:23.518 pt1 00:15:23.518 07:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:15:23.518 07:25:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:23.518 07:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:23.518 07:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:23.518 07:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:23.518 07:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:23.518 07:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:23.518 07:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:23.518 07:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:23.518 07:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:23.518 07:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.518 07:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.777 07:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:23.777 "name": "raid_bdev1", 00:15:23.777 "uuid": "b594c1b0-e939-4837-94c7-cd4d9d12e5b2", 00:15:23.777 "strip_size_kb": 64, 00:15:23.777 "state": "configuring", 00:15:23.777 "raid_level": "raid0", 00:15:23.777 "superblock": true, 00:15:23.777 "num_base_bdevs": 2, 00:15:23.777 "num_base_bdevs_discovered": 1, 00:15:23.777 "num_base_bdevs_operational": 2, 00:15:23.777 "base_bdevs_list": [ 00:15:23.777 { 00:15:23.777 "name": "pt1", 00:15:23.777 "uuid": "83ec795f-3d2c-5749-b32d-7eed6067554a", 00:15:23.777 "is_configured": true, 00:15:23.777 "data_offset": 2048, 00:15:23.777 "data_size": 63488 00:15:23.777 }, 00:15:23.777 { 00:15:23.777 "name": null, 00:15:23.777 "uuid": "d17f0dd0-7e2f-5968-a7b9-a92723fc9819", 00:15:23.777 "is_configured": false, 00:15:23.777 "data_offset": 2048, 00:15:23.777 "data_size": 63488 00:15:23.777 } 00:15:23.777 ] 00:15:23.777 }' 00:15:23.777 07:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:23.777 07:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.344 07:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:15:24.344 07:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:24.344 07:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:24.344 07:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:24.603 [2024-07-12 07:25:58.268788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:24.603 [2024-07-12 07:25:58.269151] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.603 [2024-07-12 07:25:58.269224] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:15:24.603 [2024-07-12 07:25:58.269366] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.603 [2024-07-12 07:25:58.269898] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:15:24.603 [2024-07-12 07:25:58.270046] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:24.603 [2024-07-12 07:25:58.270237] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:24.603 [2024-07-12 07:25:58.270346] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:24.603 [2024-07-12 07:25:58.270515] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:15:24.603 [2024-07-12 07:25:58.270604] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:24.603 [2024-07-12 07:25:58.270728] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:24.603 [2024-07-12 07:25:58.271098] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:15:24.603 [2024-07-12 07:25:58.271209] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:15:24.603 [2024-07-12 07:25:58.271398] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.603 pt2 00:15:24.603 07:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:24.603 07:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:24.603 07:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:24.603 07:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:24.603 07:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:24.604 07:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:24.604 07:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:24.604 07:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:24.604 07:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:24.604 07:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:24.604 07:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:24.604 07:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:24.604 07:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.604 07:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.863 07:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:24.863 "name": "raid_bdev1", 00:15:24.863 "uuid": "b594c1b0-e939-4837-94c7-cd4d9d12e5b2", 00:15:24.863 "strip_size_kb": 64, 00:15:24.863 "state": "online", 00:15:24.863 "raid_level": "raid0", 00:15:24.863 "superblock": true, 00:15:24.863 "num_base_bdevs": 2, 00:15:24.863 "num_base_bdevs_discovered": 2, 00:15:24.863 "num_base_bdevs_operational": 2, 00:15:24.863 "base_bdevs_list": [ 00:15:24.863 { 00:15:24.863 "name": "pt1", 00:15:24.863 "uuid": "83ec795f-3d2c-5749-b32d-7eed6067554a", 00:15:24.863 "is_configured": true, 00:15:24.863 "data_offset": 2048, 00:15:24.863 "data_size": 63488 00:15:24.863 }, 00:15:24.863 { 00:15:24.863 "name": "pt2", 00:15:24.863 
"uuid": "d17f0dd0-7e2f-5968-a7b9-a92723fc9819", 00:15:24.863 "is_configured": true, 00:15:24.863 "data_offset": 2048, 00:15:24.863 "data_size": 63488 00:15:24.863 } 00:15:24.863 ] 00:15:24.863 }' 00:15:24.863 07:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:24.863 07:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.430 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:25.430 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:25.430 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:25.430 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:25.431 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:25.431 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:25.431 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:25.431 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:25.431 [2024-07-12 07:25:59.305928] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.690 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:25.690 "name": "raid_bdev1", 00:15:25.690 "aliases": [ 00:15:25.690 "b594c1b0-e939-4837-94c7-cd4d9d12e5b2" 00:15:25.690 ], 00:15:25.690 "product_name": "Raid Volume", 00:15:25.690 "block_size": 512, 00:15:25.690 "num_blocks": 126976, 00:15:25.690 "uuid": "b594c1b0-e939-4837-94c7-cd4d9d12e5b2", 00:15:25.690 "assigned_rate_limits": { 00:15:25.690 "rw_ios_per_sec": 0, 00:15:25.690 "rw_mbytes_per_sec": 0, 00:15:25.690 "r_mbytes_per_sec": 0, 00:15:25.690 "w_mbytes_per_sec": 0 00:15:25.690 }, 00:15:25.690 "claimed": false, 00:15:25.690 "zoned": false, 00:15:25.690 "supported_io_types": { 00:15:25.690 "read": true, 00:15:25.690 "write": true, 00:15:25.690 "unmap": true, 00:15:25.690 "write_zeroes": true, 00:15:25.690 "flush": true, 00:15:25.690 "reset": true, 00:15:25.690 "compare": false, 00:15:25.690 "compare_and_write": false, 00:15:25.690 "abort": false, 00:15:25.690 "nvme_admin": false, 00:15:25.690 "nvme_io": false 00:15:25.690 }, 00:15:25.690 "memory_domains": [ 00:15:25.690 { 00:15:25.690 "dma_device_id": "system", 00:15:25.690 "dma_device_type": 1 00:15:25.690 }, 00:15:25.690 { 00:15:25.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.690 "dma_device_type": 2 00:15:25.690 }, 00:15:25.690 { 00:15:25.690 "dma_device_id": "system", 00:15:25.690 "dma_device_type": 1 00:15:25.690 }, 00:15:25.690 { 00:15:25.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.690 "dma_device_type": 2 00:15:25.690 } 00:15:25.690 ], 00:15:25.690 "driver_specific": { 00:15:25.690 "raid": { 00:15:25.690 "uuid": "b594c1b0-e939-4837-94c7-cd4d9d12e5b2", 00:15:25.690 "strip_size_kb": 64, 00:15:25.690 "state": "online", 00:15:25.690 "raid_level": "raid0", 00:15:25.690 "superblock": true, 00:15:25.690 "num_base_bdevs": 2, 00:15:25.690 "num_base_bdevs_discovered": 2, 00:15:25.690 "num_base_bdevs_operational": 2, 00:15:25.690 "base_bdevs_list": [ 00:15:25.690 { 00:15:25.690 "name": "pt1", 00:15:25.690 "uuid": "83ec795f-3d2c-5749-b32d-7eed6067554a", 00:15:25.690 "is_configured": true, 00:15:25.690 
"data_offset": 2048, 00:15:25.690 "data_size": 63488 00:15:25.690 }, 00:15:25.690 { 00:15:25.690 "name": "pt2", 00:15:25.690 "uuid": "d17f0dd0-7e2f-5968-a7b9-a92723fc9819", 00:15:25.690 "is_configured": true, 00:15:25.690 "data_offset": 2048, 00:15:25.690 "data_size": 63488 00:15:25.690 } 00:15:25.690 ] 00:15:25.690 } 00:15:25.690 } 00:15:25.690 }' 00:15:25.690 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:25.690 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:25.690 pt2' 00:15:25.690 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:25.690 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:25.690 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:25.951 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:25.951 "name": "pt1", 00:15:25.951 "aliases": [ 00:15:25.951 "83ec795f-3d2c-5749-b32d-7eed6067554a" 00:15:25.951 ], 00:15:25.951 "product_name": "passthru", 00:15:25.951 "block_size": 512, 00:15:25.951 "num_blocks": 65536, 00:15:25.951 "uuid": "83ec795f-3d2c-5749-b32d-7eed6067554a", 00:15:25.951 "assigned_rate_limits": { 00:15:25.951 "rw_ios_per_sec": 0, 00:15:25.951 "rw_mbytes_per_sec": 0, 00:15:25.951 "r_mbytes_per_sec": 0, 00:15:25.951 "w_mbytes_per_sec": 0 00:15:25.951 }, 00:15:25.951 "claimed": true, 00:15:25.951 "claim_type": "exclusive_write", 00:15:25.951 "zoned": false, 00:15:25.951 "supported_io_types": { 00:15:25.951 "read": true, 00:15:25.951 "write": true, 00:15:25.951 "unmap": true, 00:15:25.951 "write_zeroes": true, 00:15:25.951 "flush": true, 00:15:25.951 "reset": true, 00:15:25.951 "compare": false, 00:15:25.951 "compare_and_write": false, 00:15:25.951 "abort": true, 00:15:25.951 "nvme_admin": false, 00:15:25.951 "nvme_io": false 00:15:25.951 }, 00:15:25.951 "memory_domains": [ 00:15:25.951 { 00:15:25.951 "dma_device_id": "system", 00:15:25.951 "dma_device_type": 1 00:15:25.951 }, 00:15:25.951 { 00:15:25.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.951 "dma_device_type": 2 00:15:25.951 } 00:15:25.951 ], 00:15:25.951 "driver_specific": { 00:15:25.951 "passthru": { 00:15:25.951 "name": "pt1", 00:15:25.951 "base_bdev_name": "malloc1" 00:15:25.951 } 00:15:25.951 } 00:15:25.951 }' 00:15:25.951 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:25.951 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:25.951 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:25.951 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:25.951 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:25.951 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:25.951 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:26.221 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:26.221 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:26.221 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:15:26.221 07:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.221 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:26.221 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:26.221 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:26.221 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:26.479 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:26.479 "name": "pt2", 00:15:26.479 "aliases": [ 00:15:26.479 "d17f0dd0-7e2f-5968-a7b9-a92723fc9819" 00:15:26.479 ], 00:15:26.479 "product_name": "passthru", 00:15:26.479 "block_size": 512, 00:15:26.479 "num_blocks": 65536, 00:15:26.479 "uuid": "d17f0dd0-7e2f-5968-a7b9-a92723fc9819", 00:15:26.479 "assigned_rate_limits": { 00:15:26.479 "rw_ios_per_sec": 0, 00:15:26.479 "rw_mbytes_per_sec": 0, 00:15:26.479 "r_mbytes_per_sec": 0, 00:15:26.479 "w_mbytes_per_sec": 0 00:15:26.479 }, 00:15:26.479 "claimed": true, 00:15:26.479 "claim_type": "exclusive_write", 00:15:26.479 "zoned": false, 00:15:26.479 "supported_io_types": { 00:15:26.479 "read": true, 00:15:26.479 "write": true, 00:15:26.479 "unmap": true, 00:15:26.479 "write_zeroes": true, 00:15:26.479 "flush": true, 00:15:26.479 "reset": true, 00:15:26.479 "compare": false, 00:15:26.479 "compare_and_write": false, 00:15:26.479 "abort": true, 00:15:26.479 "nvme_admin": false, 00:15:26.479 "nvme_io": false 00:15:26.479 }, 00:15:26.479 "memory_domains": [ 00:15:26.479 { 00:15:26.479 "dma_device_id": "system", 00:15:26.479 "dma_device_type": 1 00:15:26.479 }, 00:15:26.479 { 00:15:26.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.479 "dma_device_type": 2 00:15:26.479 } 00:15:26.479 ], 00:15:26.479 "driver_specific": { 00:15:26.479 "passthru": { 00:15:26.479 "name": "pt2", 00:15:26.479 "base_bdev_name": "malloc2" 00:15:26.479 } 00:15:26.479 } 00:15:26.479 }' 00:15:26.479 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.479 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.738 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:26.738 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:26.738 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:26.738 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:26.738 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:26.738 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:26.738 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:26.738 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.738 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.996 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:26.996 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:26.996 07:26:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:27.255 [2024-07-12 07:26:00.894164] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.255 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' b594c1b0-e939-4837-94c7-cd4d9d12e5b2 '!=' b594c1b0-e939-4837-94c7-cd4d9d12e5b2 ']' 00:15:27.255 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:15:27.255 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:27.255 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:27.255 07:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 131752 00:15:27.255 07:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 131752 ']' 00:15:27.255 07:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 131752 00:15:27.255 07:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:15:27.255 07:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:27.255 07:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 131752 00:15:27.255 killing process with pid 131752 00:15:27.255 07:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:27.255 07:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:27.255 07:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 131752' 00:15:27.255 07:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 131752 00:15:27.255 07:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 131752 00:15:27.255 [2024-07-12 07:26:00.944062] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:27.255 [2024-07-12 07:26:00.944169] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.255 [2024-07-12 07:26:00.944243] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.255 [2024-07-12 07:26:00.944252] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:15:27.255 [2024-07-12 07:26:00.985227] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:27.514 ************************************ 00:15:27.514 END TEST raid_superblock_test 00:15:27.514 ************************************ 00:15:27.514 07:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:15:27.514 00:15:27.514 real 0m10.585s 00:15:27.514 user 0m18.560s 00:15:27.514 sys 0m1.955s 00:15:27.514 07:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:27.514 07:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.770 07:26:01 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:15:27.771 07:26:01 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:27.771 07:26:01 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:27.771 07:26:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:27.771 ************************************ 00:15:27.771 START TEST raid_read_error_test 00:15:27.771 
************************************ 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 2 read 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.khOOj7vGpo 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=132115 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 132115 /var/tmp/spdk-raid.sock 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 132115 ']' 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:15:27.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:27.771 07:26:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.771 [2024-07-12 07:26:01.539189] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:27.771 [2024-07-12 07:26:01.539764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132115 ] 00:15:28.028 [2024-07-12 07:26:01.700132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.028 [2024-07-12 07:26:01.785921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.028 [2024-07-12 07:26:01.869173] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.959 07:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:28.959 07:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:15:28.959 07:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:28.959 07:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:28.959 BaseBdev1_malloc 00:15:28.959 07:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:29.216 true 00:15:29.216 07:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:29.474 [2024-07-12 07:26:03.209432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:29.474 [2024-07-12 07:26:03.209779] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.474 [2024-07-12 07:26:03.209872] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:15:29.474 [2024-07-12 07:26:03.210010] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.474 [2024-07-12 07:26:03.213119] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.474 [2024-07-12 07:26:03.213311] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:29.474 BaseBdev1 00:15:29.474 07:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:29.474 07:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:29.730 BaseBdev2_malloc 00:15:29.730 07:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:29.986 true 00:15:29.986 07:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:29.986 
[2024-07-12 07:26:03.845237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:29.987 [2024-07-12 07:26:03.845559] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.987 [2024-07-12 07:26:03.845704] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:29.987 [2024-07-12 07:26:03.845826] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.987 [2024-07-12 07:26:03.848767] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.987 [2024-07-12 07:26:03.848940] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:29.987 BaseBdev2 00:15:29.987 07:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:30.244 [2024-07-12 07:26:04.037376] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:30.244 [2024-07-12 07:26:04.040083] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:30.244 [2024-07-12 07:26:04.040502] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:30.244 [2024-07-12 07:26:04.040620] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:30.244 [2024-07-12 07:26:04.040821] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:15:30.244 [2024-07-12 07:26:04.041300] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:30.244 [2024-07-12 07:26:04.041413] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280 00:15:30.244 [2024-07-12 07:26:04.041722] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.244 07:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:30.244 07:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:30.244 07:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:30.244 07:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:30.244 07:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:30.244 07:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:30.244 07:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:30.244 07:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:30.244 07:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:30.244 07:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:30.244 07:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.244 07:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.501 07:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:30.501 "name": "raid_bdev1", 
00:15:30.501 "uuid": "d24191c5-7219-4326-b119-0339e054115b", 00:15:30.501 "strip_size_kb": 64, 00:15:30.501 "state": "online", 00:15:30.501 "raid_level": "raid0", 00:15:30.501 "superblock": true, 00:15:30.501 "num_base_bdevs": 2, 00:15:30.501 "num_base_bdevs_discovered": 2, 00:15:30.501 "num_base_bdevs_operational": 2, 00:15:30.501 "base_bdevs_list": [ 00:15:30.501 { 00:15:30.501 "name": "BaseBdev1", 00:15:30.501 "uuid": "38b014e0-16cd-5952-afa9-f14da1edeef0", 00:15:30.501 "is_configured": true, 00:15:30.501 "data_offset": 2048, 00:15:30.501 "data_size": 63488 00:15:30.501 }, 00:15:30.501 { 00:15:30.501 "name": "BaseBdev2", 00:15:30.501 "uuid": "fcd3e832-be2f-5284-a9a0-ba697fc37361", 00:15:30.501 "is_configured": true, 00:15:30.501 "data_offset": 2048, 00:15:30.501 "data_size": 63488 00:15:30.501 } 00:15:30.501 ] 00:15:30.501 }' 00:15:30.501 07:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:30.501 07:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.065 07:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:31.065 07:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:31.322 [2024-07-12 07:26:04.950347] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:15:32.254 07:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:32.511 07:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:32.511 07:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:32.511 07:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:15:32.511 07:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:32.511 07:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:32.511 07:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:32.511 07:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:32.511 07:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:32.511 07:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:32.511 07:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:32.511 07:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:32.511 07:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:32.511 07:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:32.511 07:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.511 07:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.768 07:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:32.768 "name": "raid_bdev1", 00:15:32.768 "uuid": 
"d24191c5-7219-4326-b119-0339e054115b", 00:15:32.768 "strip_size_kb": 64, 00:15:32.769 "state": "online", 00:15:32.769 "raid_level": "raid0", 00:15:32.769 "superblock": true, 00:15:32.769 "num_base_bdevs": 2, 00:15:32.769 "num_base_bdevs_discovered": 2, 00:15:32.769 "num_base_bdevs_operational": 2, 00:15:32.769 "base_bdevs_list": [ 00:15:32.769 { 00:15:32.769 "name": "BaseBdev1", 00:15:32.769 "uuid": "38b014e0-16cd-5952-afa9-f14da1edeef0", 00:15:32.769 "is_configured": true, 00:15:32.769 "data_offset": 2048, 00:15:32.769 "data_size": 63488 00:15:32.769 }, 00:15:32.769 { 00:15:32.769 "name": "BaseBdev2", 00:15:32.769 "uuid": "fcd3e832-be2f-5284-a9a0-ba697fc37361", 00:15:32.769 "is_configured": true, 00:15:32.769 "data_offset": 2048, 00:15:32.769 "data_size": 63488 00:15:32.769 } 00:15:32.769 ] 00:15:32.769 }' 00:15:32.769 07:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:32.769 07:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.332 07:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:33.332 [2024-07-12 07:26:07.199883] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:33.332 [2024-07-12 07:26:07.200141] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.332 [2024-07-12 07:26:07.202783] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.332 [2024-07-12 07:26:07.202955] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.332 [2024-07-12 07:26:07.203022] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:33.332 [2024-07-12 07:26:07.203168] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline 00:15:33.332 0 00:15:33.611 07:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 132115 00:15:33.611 07:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 132115 ']' 00:15:33.611 07:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 132115 00:15:33.611 07:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:15:33.611 07:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:33.611 07:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 132115 00:15:33.611 07:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:33.611 07:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:33.611 07:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 132115' 00:15:33.611 killing process with pid 132115 00:15:33.611 07:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 132115 00:15:33.611 07:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 132115 00:15:33.611 [2024-07-12 07:26:07.262021] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:33.611 [2024-07-12 07:26:07.289697] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:33.870 07:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # 
grep -v Job /raidtest/tmp.khOOj7vGpo 00:15:33.870 07:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:33.870 07:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:33.870 07:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:15:33.870 07:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:15:33.870 07:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:33.870 07:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:33.870 07:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:15:33.870 00:15:33.870 real 0m6.272s 00:15:33.870 user 0m9.389s 00:15:33.870 sys 0m1.181s 00:15:33.870 07:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:33.870 07:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.870 ************************************ 00:15:33.870 END TEST raid_read_error_test 00:15:33.870 ************************************ 00:15:34.126 07:26:07 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:15:34.126 07:26:07 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:34.126 07:26:07 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:34.126 07:26:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:34.126 ************************************ 00:15:34.126 START TEST raid_write_error_test 00:15:34.126 ************************************ 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 2 write 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.URgwDThZzF 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=132294 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 132294 /var/tmp/spdk-raid.sock 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 132294 ']' 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:34.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:34.126 07:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.126 [2024-07-12 07:26:07.881313] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:15:34.126 [2024-07-12 07:26:07.881819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132294 ] 00:15:34.383 [2024-07-12 07:26:08.033041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.383 [2024-07-12 07:26:08.114642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.383 [2024-07-12 07:26:08.194411] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.945 07:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:34.945 07:26:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:15:34.945 07:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:34.945 07:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:35.202 BaseBdev1_malloc 00:15:35.202 07:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:35.459 true 00:15:35.459 07:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:35.715 [2024-07-12 07:26:09.434944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:35.715 [2024-07-12 07:26:09.435264] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.715 [2024-07-12 07:26:09.435356] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:15:35.715 [2024-07-12 07:26:09.435644] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.715 [2024-07-12 07:26:09.438704] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.715 [2024-07-12 07:26:09.438876] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:35.715 BaseBdev1 00:15:35.715 07:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:35.715 07:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:35.972 BaseBdev2_malloc 00:15:35.972 07:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:36.228 true 00:15:36.228 07:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:36.228 [2024-07-12 07:26:10.082900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:36.228 [2024-07-12 07:26:10.083257] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.228 [2024-07-12 07:26:10.083425] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:36.228 [2024-07-12 07:26:10.083552] 
vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.228 [2024-07-12 07:26:10.086484] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.228 [2024-07-12 07:26:10.086652] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:36.228 BaseBdev2 00:15:36.228 07:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:36.485 [2024-07-12 07:26:10.275150] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:36.485 [2024-07-12 07:26:10.277832] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:36.485 [2024-07-12 07:26:10.278211] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:36.485 [2024-07-12 07:26:10.278328] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:36.485 [2024-07-12 07:26:10.278573] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:15:36.485 [2024-07-12 07:26:10.279073] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:36.485 [2024-07-12 07:26:10.279178] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280 00:15:36.485 [2024-07-12 07:26:10.279486] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.485 07:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:36.485 07:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:36.485 07:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:36.485 07:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:36.485 07:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:36.485 07:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:36.485 07:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:36.485 07:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:36.485 07:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:36.485 07:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:36.485 07:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.485 07:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.742 07:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:36.742 "name": "raid_bdev1", 00:15:36.742 "uuid": "419770a6-069b-41ff-abe9-b0c60af34fe1", 00:15:36.742 "strip_size_kb": 64, 00:15:36.742 "state": "online", 00:15:36.742 "raid_level": "raid0", 00:15:36.742 "superblock": true, 00:15:36.742 "num_base_bdevs": 2, 00:15:36.742 "num_base_bdevs_discovered": 2, 00:15:36.742 "num_base_bdevs_operational": 2, 00:15:36.742 "base_bdevs_list": [ 00:15:36.742 { 00:15:36.742 "name": 
"BaseBdev1", 00:15:36.743 "uuid": "0372cc2f-ea6f-50e0-adbf-fd6f8b11d008", 00:15:36.743 "is_configured": true, 00:15:36.743 "data_offset": 2048, 00:15:36.743 "data_size": 63488 00:15:36.743 }, 00:15:36.743 { 00:15:36.743 "name": "BaseBdev2", 00:15:36.743 "uuid": "06f5bf72-ccc5-5055-822f-cd08971cf6fd", 00:15:36.743 "is_configured": true, 00:15:36.743 "data_offset": 2048, 00:15:36.743 "data_size": 63488 00:15:36.743 } 00:15:36.743 ] 00:15:36.743 }' 00:15:36.743 07:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:36.743 07:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.307 07:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:37.307 07:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:37.565 [2024-07-12 07:26:11.288214] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:15:38.497 07:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:38.755 07:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:38.755 07:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:38.755 07:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:15:38.755 07:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:38.755 07:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:38.755 07:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:38.755 07:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:38.755 07:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:38.755 07:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:38.755 07:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:38.755 07:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:38.755 07:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:38.755 07:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:38.755 07:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.755 07:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.755 07:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:38.755 "name": "raid_bdev1", 00:15:38.755 "uuid": "419770a6-069b-41ff-abe9-b0c60af34fe1", 00:15:38.755 "strip_size_kb": 64, 00:15:38.755 "state": "online", 00:15:38.755 "raid_level": "raid0", 00:15:38.755 "superblock": true, 00:15:38.755 "num_base_bdevs": 2, 00:15:38.755 "num_base_bdevs_discovered": 2, 00:15:38.755 "num_base_bdevs_operational": 2, 00:15:38.755 "base_bdevs_list": [ 00:15:38.755 { 00:15:38.755 "name": 
"BaseBdev1", 00:15:38.755 "uuid": "0372cc2f-ea6f-50e0-adbf-fd6f8b11d008", 00:15:38.755 "is_configured": true, 00:15:38.755 "data_offset": 2048, 00:15:38.755 "data_size": 63488 00:15:38.755 }, 00:15:38.755 { 00:15:38.755 "name": "BaseBdev2", 00:15:38.755 "uuid": "06f5bf72-ccc5-5055-822f-cd08971cf6fd", 00:15:38.755 "is_configured": true, 00:15:38.755 "data_offset": 2048, 00:15:38.755 "data_size": 63488 00:15:38.755 } 00:15:38.755 ] 00:15:38.755 }' 00:15:38.755 07:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:38.755 07:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.321 07:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:39.578 [2024-07-12 07:26:13.365016] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:39.578 [2024-07-12 07:26:13.365364] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:39.578 [2024-07-12 07:26:13.367998] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.578 [2024-07-12 07:26:13.368173] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.578 [2024-07-12 07:26:13.368244] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.578 [2024-07-12 07:26:13.368320] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline 00:15:39.578 0 00:15:39.578 07:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 132294 00:15:39.578 07:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 132294 ']' 00:15:39.578 07:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 132294 00:15:39.578 07:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:15:39.578 07:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:39.578 07:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 132294 00:15:39.578 07:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:39.578 07:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:39.578 07:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 132294' 00:15:39.578 killing process with pid 132294 00:15:39.578 07:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 132294 00:15:39.578 [2024-07-12 07:26:13.416911] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:39.578 07:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 132294 00:15:39.578 [2024-07-12 07:26:13.445028] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:40.144 07:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:40.144 07:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.URgwDThZzF 00:15:40.144 07:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:40.144 ************************************ 00:15:40.144 END TEST raid_write_error_test 00:15:40.144 
************************************ 00:15:40.144 07:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:15:40.144 07:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:15:40.144 07:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:40.144 07:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:40.144 07:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:15:40.144 00:15:40.144 real 0m6.072s 00:15:40.144 user 0m9.223s 00:15:40.144 sys 0m1.054s 00:15:40.144 07:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:40.144 07:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.144 07:26:13 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:15:40.144 07:26:13 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:15:40.144 07:26:13 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:40.144 07:26:13 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:40.144 07:26:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:40.144 ************************************ 00:15:40.144 START TEST raid_state_function_test 00:15:40.144 ************************************ 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 2 false 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 
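Both error tests above finish with the same check: the bdevperf log (the mktemp file under /raidtest) is scraped for the raid_bdev1 row, field 6 of that row is the observed failure rate, and because raid0 carries no redundancy (has_redundancy returns 1 for it) the test insists the injected errors actually surfaced. A minimal sketch of that sequence, assuming $rpc expands to scripts/rpc.py with the -s /var/tmp/spdk-raid.sock socket and $bdevperf_log to the mktemp path — both hypothetical shorthands for the full paths traced above:

$rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure   # arm the error bdev beneath BaseBdev1
# bdevperf.py ... perform_tests then drives the 60s randrw workload at raid_bdev1
fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
[[ "$fail_per_s" != "0.00" ]]   # 0.45 and 0.48 above: injected errors propagated, as raid0 must let them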
00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=132477 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 132477' 00:15:40.144 Process raid pid: 132477 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 132477 /var/tmp/spdk-raid.sock 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 132477 ']' 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:40.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:40.144 07:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.144 [2024-07-12 07:26:14.012070] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:15:40.144 [2024-07-12 07:26:14.012341] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.403 [2024-07-12 07:26:14.166019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.403 [2024-07-12 07:26:14.247927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.661 [2024-07-12 07:26:14.328176] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.224 07:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:41.224 07:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:15:41.224 07:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:41.481 [2024-07-12 07:26:15.119941] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:41.481 [2024-07-12 07:26:15.120054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:41.481 [2024-07-12 07:26:15.120067] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:41.481 [2024-07-12 07:26:15.120087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:41.481 07:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:41.481 07:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:41.481 07:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:41.481 07:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:41.481 07:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:41.481 07:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:41.481 07:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:41.481 07:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:41.481 07:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:41.481 07:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:41.481 07:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.481 07:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.481 07:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:41.481 "name": "Existed_Raid", 00:15:41.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.482 "strip_size_kb": 64, 00:15:41.482 "state": "configuring", 00:15:41.482 "raid_level": "concat", 00:15:41.482 "superblock": false, 00:15:41.482 "num_base_bdevs": 2, 00:15:41.482 "num_base_bdevs_discovered": 0, 00:15:41.482 "num_base_bdevs_operational": 2, 00:15:41.482 "base_bdevs_list": [ 
00:15:41.482 { 00:15:41.482 "name": "BaseBdev1", 00:15:41.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.482 "is_configured": false, 00:15:41.482 "data_offset": 0, 00:15:41.482 "data_size": 0 00:15:41.482 }, 00:15:41.482 { 00:15:41.482 "name": "BaseBdev2", 00:15:41.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.482 "is_configured": false, 00:15:41.482 "data_offset": 0, 00:15:41.482 "data_size": 0 00:15:41.482 } 00:15:41.482 ] 00:15:41.482 }' 00:15:41.482 07:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:41.482 07:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.439 07:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:42.439 [2024-07-12 07:26:16.203987] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:42.439 [2024-07-12 07:26:16.204046] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:15:42.439 07:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:42.705 [2024-07-12 07:26:16.472047] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:42.705 [2024-07-12 07:26:16.472175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:42.705 [2024-07-12 07:26:16.472187] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:42.705 [2024-07-12 07:26:16.472220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:42.705 07:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:42.973 [2024-07-12 07:26:16.771960] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:42.973 BaseBdev1 00:15:42.973 07:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:42.973 07:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:42.973 07:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:42.973 07:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:42.973 07:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:42.973 07:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:42.973 07:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:43.231 07:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:43.490 [ 00:15:43.490 { 00:15:43.490 "name": "BaseBdev1", 00:15:43.490 "aliases": [ 00:15:43.490 "a0531eb5-ec12-4214-a211-36180939194f" 00:15:43.490 ], 00:15:43.490 "product_name": "Malloc disk", 00:15:43.490 "block_size": 512, 00:15:43.490 
"num_blocks": 65536, 00:15:43.490 "uuid": "a0531eb5-ec12-4214-a211-36180939194f", 00:15:43.490 "assigned_rate_limits": { 00:15:43.490 "rw_ios_per_sec": 0, 00:15:43.490 "rw_mbytes_per_sec": 0, 00:15:43.490 "r_mbytes_per_sec": 0, 00:15:43.490 "w_mbytes_per_sec": 0 00:15:43.490 }, 00:15:43.490 "claimed": true, 00:15:43.490 "claim_type": "exclusive_write", 00:15:43.490 "zoned": false, 00:15:43.490 "supported_io_types": { 00:15:43.490 "read": true, 00:15:43.490 "write": true, 00:15:43.490 "unmap": true, 00:15:43.490 "write_zeroes": true, 00:15:43.490 "flush": true, 00:15:43.490 "reset": true, 00:15:43.490 "compare": false, 00:15:43.490 "compare_and_write": false, 00:15:43.490 "abort": true, 00:15:43.490 "nvme_admin": false, 00:15:43.490 "nvme_io": false 00:15:43.490 }, 00:15:43.490 "memory_domains": [ 00:15:43.490 { 00:15:43.490 "dma_device_id": "system", 00:15:43.490 "dma_device_type": 1 00:15:43.490 }, 00:15:43.490 { 00:15:43.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.490 "dma_device_type": 2 00:15:43.490 } 00:15:43.490 ], 00:15:43.490 "driver_specific": {} 00:15:43.490 } 00:15:43.490 ] 00:15:43.490 07:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:43.490 07:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:43.490 07:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:43.490 07:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:43.490 07:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:43.490 07:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:43.490 07:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:43.490 07:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:43.490 07:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:43.490 07:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:43.490 07:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:43.490 07:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.490 07:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.748 07:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:43.748 "name": "Existed_Raid", 00:15:43.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.748 "strip_size_kb": 64, 00:15:43.748 "state": "configuring", 00:15:43.748 "raid_level": "concat", 00:15:43.748 "superblock": false, 00:15:43.748 "num_base_bdevs": 2, 00:15:43.749 "num_base_bdevs_discovered": 1, 00:15:43.749 "num_base_bdevs_operational": 2, 00:15:43.749 "base_bdevs_list": [ 00:15:43.749 { 00:15:43.749 "name": "BaseBdev1", 00:15:43.749 "uuid": "a0531eb5-ec12-4214-a211-36180939194f", 00:15:43.749 "is_configured": true, 00:15:43.749 "data_offset": 0, 00:15:43.749 "data_size": 65536 00:15:43.749 }, 00:15:43.749 { 00:15:43.749 "name": "BaseBdev2", 00:15:43.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.749 
"is_configured": false, 00:15:43.749 "data_offset": 0, 00:15:43.749 "data_size": 0 00:15:43.749 } 00:15:43.749 ] 00:15:43.749 }' 00:15:43.749 07:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:43.749 07:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.314 07:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:44.572 [2024-07-12 07:26:18.224339] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:44.572 [2024-07-12 07:26:18.224434] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:44.572 07:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:44.830 [2024-07-12 07:26:18.488462] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:44.830 [2024-07-12 07:26:18.490991] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:44.830 [2024-07-12 07:26:18.491087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:44.830 07:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:44.830 07:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:44.830 07:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:44.830 07:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:44.830 07:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:44.830 07:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:44.830 07:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:44.830 07:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:44.830 07:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:44.830 07:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:44.830 07:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:44.830 07:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:44.830 07:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.830 07:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.087 07:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:45.087 "name": "Existed_Raid", 00:15:45.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.087 "strip_size_kb": 64, 00:15:45.087 "state": "configuring", 00:15:45.087 "raid_level": "concat", 00:15:45.087 "superblock": false, 00:15:45.087 "num_base_bdevs": 2, 00:15:45.087 "num_base_bdevs_discovered": 1, 00:15:45.087 
"num_base_bdevs_operational": 2, 00:15:45.087 "base_bdevs_list": [ 00:15:45.087 { 00:15:45.087 "name": "BaseBdev1", 00:15:45.087 "uuid": "a0531eb5-ec12-4214-a211-36180939194f", 00:15:45.087 "is_configured": true, 00:15:45.087 "data_offset": 0, 00:15:45.087 "data_size": 65536 00:15:45.087 }, 00:15:45.087 { 00:15:45.087 "name": "BaseBdev2", 00:15:45.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.087 "is_configured": false, 00:15:45.087 "data_offset": 0, 00:15:45.087 "data_size": 0 00:15:45.087 } 00:15:45.087 ] 00:15:45.087 }' 00:15:45.087 07:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:45.087 07:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.653 07:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:45.912 [2024-07-12 07:26:19.611086] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:45.912 [2024-07-12 07:26:19.611166] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:15:45.912 [2024-07-12 07:26:19.611181] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:45.912 [2024-07-12 07:26:19.611389] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:15:45.912 [2024-07-12 07:26:19.611926] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:15:45.912 [2024-07-12 07:26:19.611941] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:15:45.912 [2024-07-12 07:26:19.612315] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.912 BaseBdev2 00:15:45.912 07:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:45.912 07:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:45.912 07:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:45.912 07:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:45.912 07:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:45.912 07:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:45.912 07:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:46.171 07:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:46.428 [ 00:15:46.428 { 00:15:46.428 "name": "BaseBdev2", 00:15:46.428 "aliases": [ 00:15:46.428 "3e54e156-a5e8-41b6-a369-f38b1dce3582" 00:15:46.428 ], 00:15:46.428 "product_name": "Malloc disk", 00:15:46.428 "block_size": 512, 00:15:46.428 "num_blocks": 65536, 00:15:46.428 "uuid": "3e54e156-a5e8-41b6-a369-f38b1dce3582", 00:15:46.428 "assigned_rate_limits": { 00:15:46.428 "rw_ios_per_sec": 0, 00:15:46.428 "rw_mbytes_per_sec": 0, 00:15:46.428 "r_mbytes_per_sec": 0, 00:15:46.428 "w_mbytes_per_sec": 0 00:15:46.428 }, 00:15:46.428 "claimed": true, 00:15:46.428 "claim_type": "exclusive_write", 00:15:46.428 "zoned": 
false, 00:15:46.428 "supported_io_types": { 00:15:46.428 "read": true, 00:15:46.428 "write": true, 00:15:46.428 "unmap": true, 00:15:46.428 "write_zeroes": true, 00:15:46.428 "flush": true, 00:15:46.428 "reset": true, 00:15:46.428 "compare": false, 00:15:46.428 "compare_and_write": false, 00:15:46.428 "abort": true, 00:15:46.428 "nvme_admin": false, 00:15:46.428 "nvme_io": false 00:15:46.428 }, 00:15:46.428 "memory_domains": [ 00:15:46.428 { 00:15:46.428 "dma_device_id": "system", 00:15:46.428 "dma_device_type": 1 00:15:46.428 }, 00:15:46.428 { 00:15:46.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.428 "dma_device_type": 2 00:15:46.428 } 00:15:46.428 ], 00:15:46.428 "driver_specific": {} 00:15:46.428 } 00:15:46.428 ] 00:15:46.428 07:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:46.428 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:46.428 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:46.428 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:46.428 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:46.428 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:46.428 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:46.428 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:46.428 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:46.428 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:46.429 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:46.429 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:46.429 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:46.429 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.429 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.686 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:46.686 "name": "Existed_Raid", 00:15:46.686 "uuid": "806b0094-bc2f-4234-a32c-d591d065d58e", 00:15:46.686 "strip_size_kb": 64, 00:15:46.686 "state": "online", 00:15:46.686 "raid_level": "concat", 00:15:46.686 "superblock": false, 00:15:46.686 "num_base_bdevs": 2, 00:15:46.686 "num_base_bdevs_discovered": 2, 00:15:46.686 "num_base_bdevs_operational": 2, 00:15:46.686 "base_bdevs_list": [ 00:15:46.686 { 00:15:46.686 "name": "BaseBdev1", 00:15:46.686 "uuid": "a0531eb5-ec12-4214-a211-36180939194f", 00:15:46.686 "is_configured": true, 00:15:46.686 "data_offset": 0, 00:15:46.686 "data_size": 65536 00:15:46.686 }, 00:15:46.686 { 00:15:46.686 "name": "BaseBdev2", 00:15:46.686 "uuid": "3e54e156-a5e8-41b6-a369-f38b1dce3582", 00:15:46.686 "is_configured": true, 00:15:46.686 "data_offset": 0, 00:15:46.686 "data_size": 65536 00:15:46.686 } 00:15:46.686 ] 00:15:46.686 }' 00:15:46.686 07:26:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:46.686 07:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.252 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:47.252 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:47.252 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:47.252 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:47.252 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:47.252 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:47.252 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:47.252 07:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:47.510 [2024-07-12 07:26:21.187733] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.510 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:47.510 "name": "Existed_Raid", 00:15:47.510 "aliases": [ 00:15:47.510 "806b0094-bc2f-4234-a32c-d591d065d58e" 00:15:47.510 ], 00:15:47.511 "product_name": "Raid Volume", 00:15:47.511 "block_size": 512, 00:15:47.511 "num_blocks": 131072, 00:15:47.511 "uuid": "806b0094-bc2f-4234-a32c-d591d065d58e", 00:15:47.511 "assigned_rate_limits": { 00:15:47.511 "rw_ios_per_sec": 0, 00:15:47.511 "rw_mbytes_per_sec": 0, 00:15:47.511 "r_mbytes_per_sec": 0, 00:15:47.511 "w_mbytes_per_sec": 0 00:15:47.511 }, 00:15:47.511 "claimed": false, 00:15:47.511 "zoned": false, 00:15:47.511 "supported_io_types": { 00:15:47.511 "read": true, 00:15:47.511 "write": true, 00:15:47.511 "unmap": true, 00:15:47.511 "write_zeroes": true, 00:15:47.511 "flush": true, 00:15:47.511 "reset": true, 00:15:47.511 "compare": false, 00:15:47.511 "compare_and_write": false, 00:15:47.511 "abort": false, 00:15:47.511 "nvme_admin": false, 00:15:47.511 "nvme_io": false 00:15:47.511 }, 00:15:47.511 "memory_domains": [ 00:15:47.511 { 00:15:47.511 "dma_device_id": "system", 00:15:47.511 "dma_device_type": 1 00:15:47.511 }, 00:15:47.511 { 00:15:47.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.511 "dma_device_type": 2 00:15:47.511 }, 00:15:47.511 { 00:15:47.511 "dma_device_id": "system", 00:15:47.511 "dma_device_type": 1 00:15:47.511 }, 00:15:47.511 { 00:15:47.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.511 "dma_device_type": 2 00:15:47.511 } 00:15:47.511 ], 00:15:47.511 "driver_specific": { 00:15:47.511 "raid": { 00:15:47.511 "uuid": "806b0094-bc2f-4234-a32c-d591d065d58e", 00:15:47.511 "strip_size_kb": 64, 00:15:47.511 "state": "online", 00:15:47.511 "raid_level": "concat", 00:15:47.511 "superblock": false, 00:15:47.511 "num_base_bdevs": 2, 00:15:47.511 "num_base_bdevs_discovered": 2, 00:15:47.511 "num_base_bdevs_operational": 2, 00:15:47.511 "base_bdevs_list": [ 00:15:47.511 { 00:15:47.511 "name": "BaseBdev1", 00:15:47.511 "uuid": "a0531eb5-ec12-4214-a211-36180939194f", 00:15:47.511 "is_configured": true, 00:15:47.511 "data_offset": 0, 00:15:47.511 "data_size": 65536 00:15:47.511 }, 00:15:47.511 { 00:15:47.511 "name": "BaseBdev2", 00:15:47.511 "uuid": "3e54e156-a5e8-41b6-a369-f38b1dce3582", 00:15:47.511 "is_configured": 
true, 00:15:47.511 "data_offset": 0, 00:15:47.511 "data_size": 65536 00:15:47.511 } 00:15:47.511 ] 00:15:47.511 } 00:15:47.511 } 00:15:47.511 }' 00:15:47.511 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:47.511 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:47.511 BaseBdev2' 00:15:47.511 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:47.511 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:47.511 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:47.769 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:47.769 "name": "BaseBdev1", 00:15:47.769 "aliases": [ 00:15:47.769 "a0531eb5-ec12-4214-a211-36180939194f" 00:15:47.769 ], 00:15:47.769 "product_name": "Malloc disk", 00:15:47.769 "block_size": 512, 00:15:47.769 "num_blocks": 65536, 00:15:47.769 "uuid": "a0531eb5-ec12-4214-a211-36180939194f", 00:15:47.769 "assigned_rate_limits": { 00:15:47.769 "rw_ios_per_sec": 0, 00:15:47.769 "rw_mbytes_per_sec": 0, 00:15:47.769 "r_mbytes_per_sec": 0, 00:15:47.769 "w_mbytes_per_sec": 0 00:15:47.769 }, 00:15:47.769 "claimed": true, 00:15:47.769 "claim_type": "exclusive_write", 00:15:47.769 "zoned": false, 00:15:47.769 "supported_io_types": { 00:15:47.769 "read": true, 00:15:47.769 "write": true, 00:15:47.769 "unmap": true, 00:15:47.769 "write_zeroes": true, 00:15:47.769 "flush": true, 00:15:47.769 "reset": true, 00:15:47.769 "compare": false, 00:15:47.769 "compare_and_write": false, 00:15:47.769 "abort": true, 00:15:47.769 "nvme_admin": false, 00:15:47.769 "nvme_io": false 00:15:47.769 }, 00:15:47.769 "memory_domains": [ 00:15:47.769 { 00:15:47.769 "dma_device_id": "system", 00:15:47.769 "dma_device_type": 1 00:15:47.769 }, 00:15:47.769 { 00:15:47.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.769 "dma_device_type": 2 00:15:47.769 } 00:15:47.769 ], 00:15:47.769 "driver_specific": {} 00:15:47.769 }' 00:15:47.769 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:47.769 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:47.769 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:47.769 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:47.769 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:47.769 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:47.769 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:47.769 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:48.028 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:48.028 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:48.028 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:48.028 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:48.028 07:26:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:48.028 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:48.028 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:48.286 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:48.286 "name": "BaseBdev2", 00:15:48.286 "aliases": [ 00:15:48.286 "3e54e156-a5e8-41b6-a369-f38b1dce3582" 00:15:48.286 ], 00:15:48.286 "product_name": "Malloc disk", 00:15:48.286 "block_size": 512, 00:15:48.286 "num_blocks": 65536, 00:15:48.286 "uuid": "3e54e156-a5e8-41b6-a369-f38b1dce3582", 00:15:48.286 "assigned_rate_limits": { 00:15:48.286 "rw_ios_per_sec": 0, 00:15:48.286 "rw_mbytes_per_sec": 0, 00:15:48.286 "r_mbytes_per_sec": 0, 00:15:48.286 "w_mbytes_per_sec": 0 00:15:48.286 }, 00:15:48.286 "claimed": true, 00:15:48.286 "claim_type": "exclusive_write", 00:15:48.286 "zoned": false, 00:15:48.286 "supported_io_types": { 00:15:48.286 "read": true, 00:15:48.286 "write": true, 00:15:48.286 "unmap": true, 00:15:48.286 "write_zeroes": true, 00:15:48.286 "flush": true, 00:15:48.286 "reset": true, 00:15:48.286 "compare": false, 00:15:48.286 "compare_and_write": false, 00:15:48.286 "abort": true, 00:15:48.286 "nvme_admin": false, 00:15:48.286 "nvme_io": false 00:15:48.286 }, 00:15:48.286 "memory_domains": [ 00:15:48.286 { 00:15:48.286 "dma_device_id": "system", 00:15:48.286 "dma_device_type": 1 00:15:48.286 }, 00:15:48.286 { 00:15:48.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.286 "dma_device_type": 2 00:15:48.286 } 00:15:48.286 ], 00:15:48.286 "driver_specific": {} 00:15:48.286 }' 00:15:48.286 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:48.286 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:48.286 07:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:48.286 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:48.286 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:48.286 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:48.286 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:48.286 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:48.543 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:48.543 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:48.543 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:48.543 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:48.543 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:48.799 [2024-07-12 07:26:22.503934] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:48.799 [2024-07-12 07:26:22.504168] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:48.799 [2024-07-12 07:26:22.504432] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:15:48.799 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:48.799 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:48.799 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:48.799 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:48.799 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:48.799 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:48.799 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:48.799 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:48.799 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:48.799 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:48.799 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:48.800 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:48.800 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:48.800 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:48.800 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:48.800 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.800 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.056 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:49.056 "name": "Existed_Raid", 00:15:49.056 "uuid": "806b0094-bc2f-4234-a32c-d591d065d58e", 00:15:49.056 "strip_size_kb": 64, 00:15:49.056 "state": "offline", 00:15:49.056 "raid_level": "concat", 00:15:49.056 "superblock": false, 00:15:49.056 "num_base_bdevs": 2, 00:15:49.056 "num_base_bdevs_discovered": 1, 00:15:49.056 "num_base_bdevs_operational": 1, 00:15:49.056 "base_bdevs_list": [ 00:15:49.056 { 00:15:49.056 "name": null, 00:15:49.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.056 "is_configured": false, 00:15:49.056 "data_offset": 0, 00:15:49.056 "data_size": 65536 00:15:49.056 }, 00:15:49.056 { 00:15:49.056 "name": "BaseBdev2", 00:15:49.056 "uuid": "3e54e156-a5e8-41b6-a369-f38b1dce3582", 00:15:49.056 "is_configured": true, 00:15:49.056 "data_offset": 0, 00:15:49.056 "data_size": 65536 00:15:49.056 } 00:15:49.056 ] 00:15:49.056 }' 00:15:49.056 07:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:49.056 07:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.632 07:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:49.632 07:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:49.632 07:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:49.632 07:26:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.890 07:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:49.890 07:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:49.890 07:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:50.147 [2024-07-12 07:26:23.897794] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:50.147 [2024-07-12 07:26:23.898105] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:15:50.147 07:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:50.147 07:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:50.147 07:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:50.147 07:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.405 07:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:50.405 07:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:50.405 07:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:50.405 07:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 132477 00:15:50.405 07:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 132477 ']' 00:15:50.405 07:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 132477 00:15:50.405 07:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:15:50.405 07:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:50.405 07:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 132477 00:15:50.405 killing process with pid 132477 00:15:50.405 07:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:50.405 07:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:50.405 07:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 132477' 00:15:50.405 07:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 132477 00:15:50.405 07:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 132477 00:15:50.405 [2024-07-12 07:26:24.216964] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:50.405 [2024-07-12 07:26:24.217080] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:50.975 07:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:50.975 00:15:50.975 real 0m10.692s 00:15:50.975 user 0m18.864s 00:15:50.975 sys 0m1.873s 00:15:50.975 07:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:50.975 07:26:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:50.975 ************************************ 00:15:50.975 END TEST raid_state_function_test 00:15:50.975 ************************************ 00:15:50.976 07:26:24 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:15:50.976 07:26:24 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:50.976 07:26:24 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:50.976 07:26:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:50.976 ************************************ 00:15:50.976 START TEST raid_state_function_test_sb 00:15:50.976 ************************************ 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 2 true 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=132846 00:15:50.976 
07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:50.976 Process raid pid: 132846 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 132846' 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 132846 /var/tmp/spdk-raid.sock 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 132846 ']' 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:50.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:50.976 07:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.976 [2024-07-12 07:26:24.770199] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:50.976 [2024-07-12 07:26:24.770604] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.245 [2024-07-12 07:26:24.909651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.245 [2024-07-12 07:26:24.990446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.245 [2024-07-12 07:26:25.070922] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:52.177 07:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:52.177 07:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:15:52.177 07:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:52.177 [2024-07-12 07:26:25.861227] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:52.177 [2024-07-12 07:26:25.861552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:52.177 [2024-07-12 07:26:25.861652] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:52.177 [2024-07-12 07:26:25.861708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:52.177 07:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:52.177 07:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:52.177 07:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:52.177 07:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=concat 00:15:52.177 07:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:52.177 07:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:52.177 07:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:52.177 07:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:52.177 07:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:52.177 07:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:52.177 07:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.177 07:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.434 07:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:52.434 "name": "Existed_Raid", 00:15:52.434 "uuid": "4d8ced65-d51e-478a-8b30-d95ce542925f", 00:15:52.434 "strip_size_kb": 64, 00:15:52.434 "state": "configuring", 00:15:52.434 "raid_level": "concat", 00:15:52.434 "superblock": true, 00:15:52.434 "num_base_bdevs": 2, 00:15:52.434 "num_base_bdevs_discovered": 0, 00:15:52.434 "num_base_bdevs_operational": 2, 00:15:52.434 "base_bdevs_list": [ 00:15:52.434 { 00:15:52.434 "name": "BaseBdev1", 00:15:52.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.434 "is_configured": false, 00:15:52.434 "data_offset": 0, 00:15:52.434 "data_size": 0 00:15:52.434 }, 00:15:52.434 { 00:15:52.434 "name": "BaseBdev2", 00:15:52.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.434 "is_configured": false, 00:15:52.434 "data_offset": 0, 00:15:52.434 "data_size": 0 00:15:52.435 } 00:15:52.435 ] 00:15:52.435 }' 00:15:52.435 07:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:52.435 07:26:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.000 07:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:53.000 [2024-07-12 07:26:26.857231] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:53.000 [2024-07-12 07:26:26.857525] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:15:53.000 07:26:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:53.257 [2024-07-12 07:26:27.049289] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:53.258 [2024-07-12 07:26:27.049618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:53.258 [2024-07-12 07:26:27.049707] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:53.258 [2024-07-12 07:26:27.049787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:53.258 07:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:53.514 [2024-07-12 07:26:27.261264] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:53.514 BaseBdev1 00:15:53.515 07:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:53.515 07:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:53.515 07:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:53.515 07:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:53.515 07:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:53.515 07:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:53.515 07:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:53.772 07:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:53.772 [ 00:15:53.772 { 00:15:53.772 "name": "BaseBdev1", 00:15:53.772 "aliases": [ 00:15:53.772 "c519ff2b-b57c-4769-a80c-baddef4a1610" 00:15:53.772 ], 00:15:53.772 "product_name": "Malloc disk", 00:15:53.772 "block_size": 512, 00:15:53.772 "num_blocks": 65536, 00:15:53.772 "uuid": "c519ff2b-b57c-4769-a80c-baddef4a1610", 00:15:53.772 "assigned_rate_limits": { 00:15:53.772 "rw_ios_per_sec": 0, 00:15:53.772 "rw_mbytes_per_sec": 0, 00:15:53.772 "r_mbytes_per_sec": 0, 00:15:53.772 "w_mbytes_per_sec": 0 00:15:53.772 }, 00:15:53.772 "claimed": true, 00:15:53.772 "claim_type": "exclusive_write", 00:15:53.772 "zoned": false, 00:15:53.772 "supported_io_types": { 00:15:53.772 "read": true, 00:15:53.772 "write": true, 00:15:53.772 "unmap": true, 00:15:53.772 "write_zeroes": true, 00:15:53.772 "flush": true, 00:15:53.772 "reset": true, 00:15:53.772 "compare": false, 00:15:53.772 "compare_and_write": false, 00:15:53.772 "abort": true, 00:15:53.772 "nvme_admin": false, 00:15:53.772 "nvme_io": false 00:15:53.772 }, 00:15:53.772 "memory_domains": [ 00:15:53.772 { 00:15:53.772 "dma_device_id": "system", 00:15:53.772 "dma_device_type": 1 00:15:53.772 }, 00:15:53.772 { 00:15:53.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.772 "dma_device_type": 2 00:15:53.772 } 00:15:53.772 ], 00:15:53.772 "driver_specific": {} 00:15:53.772 } 00:15:53.772 ] 00:15:54.030 07:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:54.030 07:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:54.030 07:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:54.030 07:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:54.030 07:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:54.030 07:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:54.030 07:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
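Each verify_raid_bdev_state pass in this trace boils down to a single bdev_raid_get_bdevs RPC plus jq assertions on the returned JSON. A minimal standalone sketch of that pattern, assuming the same rpc.py path and socket as in the log (the check_state helper and its messages are illustrative, not part of the test suite):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
check_state() {
    # usage: check_state <raid_bdev_name> <expected_state>
    local got
    got=$(rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$1\").state")
    [ "$got" = "$2" ] || echo "raid $1: state $got, expected $2" >&2
}
check_state Existed_Raid configuring   # the state the test expects at this point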
00:15:54.030 07:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:54.030 07:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:54.030 07:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:54.030 07:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:54.030 07:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.030 07:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.030 07:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:54.030 "name": "Existed_Raid", 00:15:54.030 "uuid": "0928f361-77bd-47b9-8c8d-204f1948bb30", 00:15:54.030 "strip_size_kb": 64, 00:15:54.030 "state": "configuring", 00:15:54.030 "raid_level": "concat", 00:15:54.030 "superblock": true, 00:15:54.030 "num_base_bdevs": 2, 00:15:54.030 "num_base_bdevs_discovered": 1, 00:15:54.030 "num_base_bdevs_operational": 2, 00:15:54.030 "base_bdevs_list": [ 00:15:54.030 { 00:15:54.030 "name": "BaseBdev1", 00:15:54.030 "uuid": "c519ff2b-b57c-4769-a80c-baddef4a1610", 00:15:54.030 "is_configured": true, 00:15:54.030 "data_offset": 2048, 00:15:54.030 "data_size": 63488 00:15:54.030 }, 00:15:54.030 { 00:15:54.030 "name": "BaseBdev2", 00:15:54.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.030 "is_configured": false, 00:15:54.030 "data_offset": 0, 00:15:54.030 "data_size": 0 00:15:54.030 } 00:15:54.030 ] 00:15:54.030 }' 00:15:54.030 07:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:54.030 07:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.596 07:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:54.854 [2024-07-12 07:26:28.717661] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:54.854 [2024-07-12 07:26:28.717885] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:55.112 07:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:55.112 [2024-07-12 07:26:28.985799] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.112 [2024-07-12 07:26:28.988443] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:55.112 [2024-07-12 07:26:28.988619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:55.369 07:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:55.369 07:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:55.369 07:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:55.369 07:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:55.369 
07:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:55.369 07:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:55.369 07:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:55.369 07:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:55.369 07:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:55.369 07:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:55.369 07:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:55.369 07:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:55.369 07:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.369 07:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.369 07:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:55.369 "name": "Existed_Raid", 00:15:55.369 "uuid": "ff81a884-e46e-4984-90bf-8a89f4c55bbe", 00:15:55.369 "strip_size_kb": 64, 00:15:55.369 "state": "configuring", 00:15:55.369 "raid_level": "concat", 00:15:55.369 "superblock": true, 00:15:55.369 "num_base_bdevs": 2, 00:15:55.369 "num_base_bdevs_discovered": 1, 00:15:55.369 "num_base_bdevs_operational": 2, 00:15:55.369 "base_bdevs_list": [ 00:15:55.369 { 00:15:55.369 "name": "BaseBdev1", 00:15:55.369 "uuid": "c519ff2b-b57c-4769-a80c-baddef4a1610", 00:15:55.369 "is_configured": true, 00:15:55.369 "data_offset": 2048, 00:15:55.369 "data_size": 63488 00:15:55.369 }, 00:15:55.369 { 00:15:55.369 "name": "BaseBdev2", 00:15:55.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.369 "is_configured": false, 00:15:55.369 "data_offset": 0, 00:15:55.369 "data_size": 0 00:15:55.369 } 00:15:55.369 ] 00:15:55.369 }' 00:15:55.369 07:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:55.369 07:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.934 07:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:56.192 [2024-07-12 07:26:30.043326] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:56.192 [2024-07-12 07:26:30.044013] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:15:56.192 [2024-07-12 07:26:30.044207] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:56.192 [2024-07-12 07:26:30.044615] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:15:56.192 BaseBdev2 00:15:56.192 [2024-07-12 07:26:30.045473] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:15:56.192 [2024-07-12 07:26:30.045695] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:15:56.193 [2024-07-12 07:26:30.046110] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
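Taken together, the create path just traced amounts to: register the raid with -s (on-disk superblock) while its base bdevs are still absent, then create the malloc base bdevs until the array flips from "configuring" to "online". A condensed hand replay of that flow, with every command and parameter copied from the log; only the ordering is simplified relative to the scripted test:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
rpc bdev_malloc_create 32 512 -b BaseBdev1   # one base present: raid stays "configuring"
rpc bdev_malloc_create 32 512 -b BaseBdev2   # second base arrives: raid goes "online"
rpc bdev_raid_get_bdevs online | jq -r '.[].name'   # expect: Existed_Raid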
00:15:56.193 07:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:56.193 07:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:56.193 07:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:56.193 07:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:56.193 07:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:56.193 07:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:56.193 07:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:56.450 07:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:56.707 [ 00:15:56.707 { 00:15:56.707 "name": "BaseBdev2", 00:15:56.707 "aliases": [ 00:15:56.707 "de533874-2fb6-47a5-8c3c-f3a3860a3abc" 00:15:56.707 ], 00:15:56.707 "product_name": "Malloc disk", 00:15:56.707 "block_size": 512, 00:15:56.707 "num_blocks": 65536, 00:15:56.707 "uuid": "de533874-2fb6-47a5-8c3c-f3a3860a3abc", 00:15:56.707 "assigned_rate_limits": { 00:15:56.707 "rw_ios_per_sec": 0, 00:15:56.707 "rw_mbytes_per_sec": 0, 00:15:56.707 "r_mbytes_per_sec": 0, 00:15:56.707 "w_mbytes_per_sec": 0 00:15:56.707 }, 00:15:56.707 "claimed": true, 00:15:56.707 "claim_type": "exclusive_write", 00:15:56.707 "zoned": false, 00:15:56.707 "supported_io_types": { 00:15:56.707 "read": true, 00:15:56.707 "write": true, 00:15:56.707 "unmap": true, 00:15:56.707 "write_zeroes": true, 00:15:56.707 "flush": true, 00:15:56.707 "reset": true, 00:15:56.707 "compare": false, 00:15:56.707 "compare_and_write": false, 00:15:56.707 "abort": true, 00:15:56.707 "nvme_admin": false, 00:15:56.707 "nvme_io": false 00:15:56.707 }, 00:15:56.707 "memory_domains": [ 00:15:56.707 { 00:15:56.707 "dma_device_id": "system", 00:15:56.707 "dma_device_type": 1 00:15:56.707 }, 00:15:56.707 { 00:15:56.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.707 "dma_device_type": 2 00:15:56.707 } 00:15:56.707 ], 00:15:56.707 "driver_specific": {} 00:15:56.707 } 00:15:56.707 ] 00:15:56.707 07:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:56.707 07:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:56.707 07:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:56.707 07:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:56.707 07:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:56.707 07:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:56.707 07:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:56.707 07:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:56.707 07:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:56.707 07:26:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:56.707 07:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:56.707 07:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:56.707 07:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:56.707 07:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.707 07:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.965 07:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:56.965 "name": "Existed_Raid", 00:15:56.965 "uuid": "ff81a884-e46e-4984-90bf-8a89f4c55bbe", 00:15:56.965 "strip_size_kb": 64, 00:15:56.965 "state": "online", 00:15:56.965 "raid_level": "concat", 00:15:56.965 "superblock": true, 00:15:56.965 "num_base_bdevs": 2, 00:15:56.965 "num_base_bdevs_discovered": 2, 00:15:56.965 "num_base_bdevs_operational": 2, 00:15:56.965 "base_bdevs_list": [ 00:15:56.965 { 00:15:56.965 "name": "BaseBdev1", 00:15:56.965 "uuid": "c519ff2b-b57c-4769-a80c-baddef4a1610", 00:15:56.965 "is_configured": true, 00:15:56.965 "data_offset": 2048, 00:15:56.965 "data_size": 63488 00:15:56.965 }, 00:15:56.965 { 00:15:56.965 "name": "BaseBdev2", 00:15:56.965 "uuid": "de533874-2fb6-47a5-8c3c-f3a3860a3abc", 00:15:56.965 "is_configured": true, 00:15:56.965 "data_offset": 2048, 00:15:56.965 "data_size": 63488 00:15:56.965 } 00:15:56.965 ] 00:15:56.965 }' 00:15:56.965 07:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:56.965 07:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.530 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:57.530 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:57.530 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:57.530 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:57.530 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:57.530 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:57.530 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:57.530 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:57.788 [2024-07-12 07:26:31.495968] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.788 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:57.788 "name": "Existed_Raid", 00:15:57.788 "aliases": [ 00:15:57.788 "ff81a884-e46e-4984-90bf-8a89f4c55bbe" 00:15:57.788 ], 00:15:57.788 "product_name": "Raid Volume", 00:15:57.788 "block_size": 512, 00:15:57.788 "num_blocks": 126976, 00:15:57.788 "uuid": "ff81a884-e46e-4984-90bf-8a89f4c55bbe", 00:15:57.788 "assigned_rate_limits": { 00:15:57.788 "rw_ios_per_sec": 0, 00:15:57.788 
"rw_mbytes_per_sec": 0, 00:15:57.788 "r_mbytes_per_sec": 0, 00:15:57.788 "w_mbytes_per_sec": 0 00:15:57.788 }, 00:15:57.788 "claimed": false, 00:15:57.788 "zoned": false, 00:15:57.788 "supported_io_types": { 00:15:57.788 "read": true, 00:15:57.788 "write": true, 00:15:57.788 "unmap": true, 00:15:57.788 "write_zeroes": true, 00:15:57.788 "flush": true, 00:15:57.788 "reset": true, 00:15:57.788 "compare": false, 00:15:57.788 "compare_and_write": false, 00:15:57.788 "abort": false, 00:15:57.788 "nvme_admin": false, 00:15:57.788 "nvme_io": false 00:15:57.788 }, 00:15:57.788 "memory_domains": [ 00:15:57.788 { 00:15:57.788 "dma_device_id": "system", 00:15:57.788 "dma_device_type": 1 00:15:57.788 }, 00:15:57.788 { 00:15:57.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.788 "dma_device_type": 2 00:15:57.788 }, 00:15:57.788 { 00:15:57.788 "dma_device_id": "system", 00:15:57.788 "dma_device_type": 1 00:15:57.788 }, 00:15:57.788 { 00:15:57.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.788 "dma_device_type": 2 00:15:57.788 } 00:15:57.788 ], 00:15:57.788 "driver_specific": { 00:15:57.788 "raid": { 00:15:57.788 "uuid": "ff81a884-e46e-4984-90bf-8a89f4c55bbe", 00:15:57.788 "strip_size_kb": 64, 00:15:57.788 "state": "online", 00:15:57.788 "raid_level": "concat", 00:15:57.788 "superblock": true, 00:15:57.788 "num_base_bdevs": 2, 00:15:57.788 "num_base_bdevs_discovered": 2, 00:15:57.788 "num_base_bdevs_operational": 2, 00:15:57.788 "base_bdevs_list": [ 00:15:57.788 { 00:15:57.788 "name": "BaseBdev1", 00:15:57.788 "uuid": "c519ff2b-b57c-4769-a80c-baddef4a1610", 00:15:57.788 "is_configured": true, 00:15:57.788 "data_offset": 2048, 00:15:57.788 "data_size": 63488 00:15:57.788 }, 00:15:57.788 { 00:15:57.788 "name": "BaseBdev2", 00:15:57.788 "uuid": "de533874-2fb6-47a5-8c3c-f3a3860a3abc", 00:15:57.788 "is_configured": true, 00:15:57.788 "data_offset": 2048, 00:15:57.788 "data_size": 63488 00:15:57.788 } 00:15:57.788 ] 00:15:57.788 } 00:15:57.788 } 00:15:57.788 }' 00:15:57.788 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:57.788 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:57.788 BaseBdev2' 00:15:57.788 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:57.788 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:57.788 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:58.046 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:58.046 "name": "BaseBdev1", 00:15:58.046 "aliases": [ 00:15:58.046 "c519ff2b-b57c-4769-a80c-baddef4a1610" 00:15:58.046 ], 00:15:58.046 "product_name": "Malloc disk", 00:15:58.046 "block_size": 512, 00:15:58.046 "num_blocks": 65536, 00:15:58.046 "uuid": "c519ff2b-b57c-4769-a80c-baddef4a1610", 00:15:58.046 "assigned_rate_limits": { 00:15:58.046 "rw_ios_per_sec": 0, 00:15:58.046 "rw_mbytes_per_sec": 0, 00:15:58.046 "r_mbytes_per_sec": 0, 00:15:58.046 "w_mbytes_per_sec": 0 00:15:58.046 }, 00:15:58.046 "claimed": true, 00:15:58.046 "claim_type": "exclusive_write", 00:15:58.046 "zoned": false, 00:15:58.046 "supported_io_types": { 00:15:58.046 "read": true, 00:15:58.046 "write": true, 00:15:58.046 "unmap": true, 
00:15:58.046 "write_zeroes": true, 00:15:58.046 "flush": true, 00:15:58.046 "reset": true, 00:15:58.046 "compare": false, 00:15:58.046 "compare_and_write": false, 00:15:58.046 "abort": true, 00:15:58.046 "nvme_admin": false, 00:15:58.046 "nvme_io": false 00:15:58.046 }, 00:15:58.046 "memory_domains": [ 00:15:58.046 { 00:15:58.046 "dma_device_id": "system", 00:15:58.046 "dma_device_type": 1 00:15:58.046 }, 00:15:58.046 { 00:15:58.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.046 "dma_device_type": 2 00:15:58.046 } 00:15:58.046 ], 00:15:58.046 "driver_specific": {} 00:15:58.046 }' 00:15:58.046 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.046 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.046 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:58.046 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:58.046 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:58.046 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:58.046 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:58.303 07:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:58.303 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:58.303 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:58.303 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:58.303 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:58.303 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:58.303 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:58.303 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:58.560 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:58.560 "name": "BaseBdev2", 00:15:58.560 "aliases": [ 00:15:58.560 "de533874-2fb6-47a5-8c3c-f3a3860a3abc" 00:15:58.560 ], 00:15:58.560 "product_name": "Malloc disk", 00:15:58.560 "block_size": 512, 00:15:58.561 "num_blocks": 65536, 00:15:58.561 "uuid": "de533874-2fb6-47a5-8c3c-f3a3860a3abc", 00:15:58.561 "assigned_rate_limits": { 00:15:58.561 "rw_ios_per_sec": 0, 00:15:58.561 "rw_mbytes_per_sec": 0, 00:15:58.561 "r_mbytes_per_sec": 0, 00:15:58.561 "w_mbytes_per_sec": 0 00:15:58.561 }, 00:15:58.561 "claimed": true, 00:15:58.561 "claim_type": "exclusive_write", 00:15:58.561 "zoned": false, 00:15:58.561 "supported_io_types": { 00:15:58.561 "read": true, 00:15:58.561 "write": true, 00:15:58.561 "unmap": true, 00:15:58.561 "write_zeroes": true, 00:15:58.561 "flush": true, 00:15:58.561 "reset": true, 00:15:58.561 "compare": false, 00:15:58.561 "compare_and_write": false, 00:15:58.561 "abort": true, 00:15:58.561 "nvme_admin": false, 00:15:58.561 "nvme_io": false 00:15:58.561 }, 00:15:58.561 "memory_domains": [ 00:15:58.561 { 00:15:58.561 "dma_device_id": "system", 00:15:58.561 "dma_device_type": 1 00:15:58.561 }, 00:15:58.561 { 00:15:58.561 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:58.561 "dma_device_type": 2 00:15:58.561 } 00:15:58.561 ], 00:15:58.561 "driver_specific": {} 00:15:58.561 }' 00:15:58.561 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.561 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.561 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:58.561 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:58.561 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:58.817 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:58.817 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:58.818 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:58.818 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:58.818 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:58.818 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:58.818 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:58.818 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:59.092 [2024-07-12 07:26:32.904140] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:59.092 [2024-07-12 07:26:32.904378] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:59.092 [2024-07-12 07:26:32.904561] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:59.092 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:59.092 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:59.092 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:59.092 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:59.092 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:59.092 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:59.092 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:59.092 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:59.092 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:59.092 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:59.092 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:59.092 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:59.092 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:59.092 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:15:59.092 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:59.092 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.092 07:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.654 07:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:59.654 "name": "Existed_Raid", 00:15:59.654 "uuid": "ff81a884-e46e-4984-90bf-8a89f4c55bbe", 00:15:59.654 "strip_size_kb": 64, 00:15:59.654 "state": "offline", 00:15:59.654 "raid_level": "concat", 00:15:59.654 "superblock": true, 00:15:59.654 "num_base_bdevs": 2, 00:15:59.654 "num_base_bdevs_discovered": 1, 00:15:59.654 "num_base_bdevs_operational": 1, 00:15:59.654 "base_bdevs_list": [ 00:15:59.654 { 00:15:59.654 "name": null, 00:15:59.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.654 "is_configured": false, 00:15:59.654 "data_offset": 2048, 00:15:59.654 "data_size": 63488 00:15:59.654 }, 00:15:59.654 { 00:15:59.654 "name": "BaseBdev2", 00:15:59.654 "uuid": "de533874-2fb6-47a5-8c3c-f3a3860a3abc", 00:15:59.654 "is_configured": true, 00:15:59.654 "data_offset": 2048, 00:15:59.654 "data_size": 63488 00:15:59.654 } 00:15:59.654 ] 00:15:59.654 }' 00:15:59.654 07:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:59.654 07:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.217 07:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:00.217 07:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:00.217 07:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.217 07:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:00.217 07:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:00.217 07:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:00.218 07:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:00.475 [2024-07-12 07:26:34.225714] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:00.475 [2024-07-12 07:26:34.226035] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:16:00.475 07:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:00.475 07:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:00.475 07:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:00.475 07:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.732 07:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:00.732 07:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 
-- # '[' -n '' ']' 00:16:00.732 07:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:00.732 07:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 132846 00:16:00.732 07:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 132846 ']' 00:16:00.732 07:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 132846 00:16:00.732 07:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:16:00.732 07:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:00.732 07:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 132846 00:16:00.733 07:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:00.733 07:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:00.733 07:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 132846' 00:16:00.733 killing process with pid 132846 00:16:00.733 07:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 132846 00:16:00.733 07:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 132846 00:16:00.733 [2024-07-12 07:26:34.546531] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:00.733 [2024-07-12 07:26:34.546638] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:01.299 07:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:01.299 00:16:01.299 real 0m10.254s 00:16:01.299 user 0m18.034s 00:16:01.299 sys 0m1.833s 00:16:01.299 07:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:01.299 07:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.299 ************************************ 00:16:01.299 END TEST raid_state_function_test_sb 00:16:01.299 ************************************ 00:16:01.299 07:26:35 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:16:01.299 07:26:35 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:16:01.299 07:26:35 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:01.299 07:26:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:01.299 ************************************ 00:16:01.299 START TEST raid_superblock_test 00:16:01.299 ************************************ 00:16:01.299 07:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 2 00:16:01.299 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:16:01.299 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:16:01.299 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=133209 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 133209 /var/tmp/spdk-raid.sock 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 133209 ']' 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:01.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:01.300 07:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.300 [2024-07-12 07:26:35.089633] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
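The waitforlisten step traced above is essentially: launch bdev_svc on a private RPC socket, then poll until the socket answers. A stripped-down sketch of that handshake, assuming the same repo layout and socket path as in the log; the rpc_get_methods probe is a stand-in for whatever readiness check the shared helper actually performs:

/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
echo "bdev_svc listening; pid $raid_pid"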
00:16:01.300 [2024-07-12 07:26:35.090038] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133209 ] 00:16:01.557 [2024-07-12 07:26:35.234521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.557 [2024-07-12 07:26:35.321996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.557 [2024-07-12 07:26:35.402283] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.123 07:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:02.123 07:26:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:16:02.123 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:02.123 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:02.123 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:02.123 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:02.123 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:02.123 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:02.123 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:02.123 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:02.123 07:26:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:02.381 malloc1 00:16:02.640 07:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:02.640 [2024-07-12 07:26:36.487683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:02.640 [2024-07-12 07:26:36.488031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.640 [2024-07-12 07:26:36.488125] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:16:02.640 [2024-07-12 07:26:36.488252] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.640 [2024-07-12 07:26:36.491354] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.640 [2024-07-12 07:26:36.491529] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:02.640 pt1 00:16:02.640 07:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:02.640 07:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:02.640 07:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:02.640 07:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:02.640 07:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:02.640 07:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:16:02.640 07:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:02.640 07:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:02.640 07:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:02.899 malloc2 00:16:03.157 07:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:03.157 [2024-07-12 07:26:36.967966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:03.157 [2024-07-12 07:26:36.968293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.157 [2024-07-12 07:26:36.968383] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:03.157 [2024-07-12 07:26:36.968537] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.157 [2024-07-12 07:26:36.971398] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.157 [2024-07-12 07:26:36.971556] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:03.157 pt2 00:16:03.157 07:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:03.157 07:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:03.157 07:26:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:16:03.416 [2024-07-12 07:26:37.228157] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:03.416 [2024-07-12 07:26:37.230951] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:03.416 [2024-07-12 07:26:37.231338] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:16:03.416 [2024-07-12 07:26:37.231452] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:03.416 [2024-07-12 07:26:37.231682] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:16:03.416 [2024-07-12 07:26:37.232167] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:16:03.416 [2024-07-12 07:26:37.232282] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:16:03.416 [2024-07-12 07:26:37.232625] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.416 07:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:03.416 07:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:03.416 07:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:03.416 07:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:03.416 07:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:03.416 07:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:16:03.416 07:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.416 07:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:03.416 07:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.416 07:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.416 07:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.416 07:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.673 07:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:03.673 "name": "raid_bdev1", 00:16:03.673 "uuid": "7948e0a9-e132-405b-9ff9-dc020ac20b2e", 00:16:03.673 "strip_size_kb": 64, 00:16:03.673 "state": "online", 00:16:03.673 "raid_level": "concat", 00:16:03.673 "superblock": true, 00:16:03.673 "num_base_bdevs": 2, 00:16:03.673 "num_base_bdevs_discovered": 2, 00:16:03.673 "num_base_bdevs_operational": 2, 00:16:03.673 "base_bdevs_list": [ 00:16:03.673 { 00:16:03.673 "name": "pt1", 00:16:03.673 "uuid": "9be993f7-9532-534b-8afd-4ec9d974fcf2", 00:16:03.673 "is_configured": true, 00:16:03.673 "data_offset": 2048, 00:16:03.673 "data_size": 63488 00:16:03.673 }, 00:16:03.673 { 00:16:03.673 "name": "pt2", 00:16:03.673 "uuid": "0712ba67-7de7-5f71-9e8d-6532b680dd38", 00:16:03.673 "is_configured": true, 00:16:03.673 "data_offset": 2048, 00:16:03.673 "data_size": 63488 00:16:03.673 } 00:16:03.673 ] 00:16:03.673 }' 00:16:03.673 07:26:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:03.673 07:26:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.238 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:16:04.238 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:04.238 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:04.238 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:04.238 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:04.238 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:04.238 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:04.238 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:04.495 [2024-07-12 07:26:38.285031] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.495 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:04.495 "name": "raid_bdev1", 00:16:04.495 "aliases": [ 00:16:04.495 "7948e0a9-e132-405b-9ff9-dc020ac20b2e" 00:16:04.495 ], 00:16:04.495 "product_name": "Raid Volume", 00:16:04.495 "block_size": 512, 00:16:04.495 "num_blocks": 126976, 00:16:04.495 "uuid": "7948e0a9-e132-405b-9ff9-dc020ac20b2e", 00:16:04.495 "assigned_rate_limits": { 00:16:04.495 "rw_ios_per_sec": 0, 00:16:04.495 "rw_mbytes_per_sec": 0, 00:16:04.495 "r_mbytes_per_sec": 0, 00:16:04.495 "w_mbytes_per_sec": 0 00:16:04.495 }, 
00:16:04.495 "claimed": false, 00:16:04.495 "zoned": false, 00:16:04.495 "supported_io_types": { 00:16:04.495 "read": true, 00:16:04.495 "write": true, 00:16:04.495 "unmap": true, 00:16:04.495 "write_zeroes": true, 00:16:04.495 "flush": true, 00:16:04.495 "reset": true, 00:16:04.495 "compare": false, 00:16:04.495 "compare_and_write": false, 00:16:04.495 "abort": false, 00:16:04.495 "nvme_admin": false, 00:16:04.495 "nvme_io": false 00:16:04.495 }, 00:16:04.495 "memory_domains": [ 00:16:04.495 { 00:16:04.496 "dma_device_id": "system", 00:16:04.496 "dma_device_type": 1 00:16:04.496 }, 00:16:04.496 { 00:16:04.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.496 "dma_device_type": 2 00:16:04.496 }, 00:16:04.496 { 00:16:04.496 "dma_device_id": "system", 00:16:04.496 "dma_device_type": 1 00:16:04.496 }, 00:16:04.496 { 00:16:04.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.496 "dma_device_type": 2 00:16:04.496 } 00:16:04.496 ], 00:16:04.496 "driver_specific": { 00:16:04.496 "raid": { 00:16:04.496 "uuid": "7948e0a9-e132-405b-9ff9-dc020ac20b2e", 00:16:04.496 "strip_size_kb": 64, 00:16:04.496 "state": "online", 00:16:04.496 "raid_level": "concat", 00:16:04.496 "superblock": true, 00:16:04.496 "num_base_bdevs": 2, 00:16:04.496 "num_base_bdevs_discovered": 2, 00:16:04.496 "num_base_bdevs_operational": 2, 00:16:04.496 "base_bdevs_list": [ 00:16:04.496 { 00:16:04.496 "name": "pt1", 00:16:04.496 "uuid": "9be993f7-9532-534b-8afd-4ec9d974fcf2", 00:16:04.496 "is_configured": true, 00:16:04.496 "data_offset": 2048, 00:16:04.496 "data_size": 63488 00:16:04.496 }, 00:16:04.496 { 00:16:04.496 "name": "pt2", 00:16:04.496 "uuid": "0712ba67-7de7-5f71-9e8d-6532b680dd38", 00:16:04.496 "is_configured": true, 00:16:04.496 "data_offset": 2048, 00:16:04.496 "data_size": 63488 00:16:04.496 } 00:16:04.496 ] 00:16:04.496 } 00:16:04.496 } 00:16:04.496 }' 00:16:04.496 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:04.496 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:04.496 pt2' 00:16:04.496 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:04.496 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:04.496 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:04.753 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:04.753 "name": "pt1", 00:16:04.753 "aliases": [ 00:16:04.753 "9be993f7-9532-534b-8afd-4ec9d974fcf2" 00:16:04.753 ], 00:16:04.753 "product_name": "passthru", 00:16:04.753 "block_size": 512, 00:16:04.753 "num_blocks": 65536, 00:16:04.753 "uuid": "9be993f7-9532-534b-8afd-4ec9d974fcf2", 00:16:04.753 "assigned_rate_limits": { 00:16:04.753 "rw_ios_per_sec": 0, 00:16:04.753 "rw_mbytes_per_sec": 0, 00:16:04.753 "r_mbytes_per_sec": 0, 00:16:04.753 "w_mbytes_per_sec": 0 00:16:04.753 }, 00:16:04.753 "claimed": true, 00:16:04.753 "claim_type": "exclusive_write", 00:16:04.753 "zoned": false, 00:16:04.753 "supported_io_types": { 00:16:04.753 "read": true, 00:16:04.753 "write": true, 00:16:04.753 "unmap": true, 00:16:04.753 "write_zeroes": true, 00:16:04.753 "flush": true, 00:16:04.753 "reset": true, 00:16:04.753 "compare": false, 00:16:04.753 "compare_and_write": false, 00:16:04.753 "abort": true, 00:16:04.753 
"nvme_admin": false, 00:16:04.753 "nvme_io": false 00:16:04.753 }, 00:16:04.753 "memory_domains": [ 00:16:04.753 { 00:16:04.753 "dma_device_id": "system", 00:16:04.753 "dma_device_type": 1 00:16:04.753 }, 00:16:04.753 { 00:16:04.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.753 "dma_device_type": 2 00:16:04.753 } 00:16:04.753 ], 00:16:04.753 "driver_specific": { 00:16:04.753 "passthru": { 00:16:04.753 "name": "pt1", 00:16:04.753 "base_bdev_name": "malloc1" 00:16:04.753 } 00:16:04.753 } 00:16:04.753 }' 00:16:04.753 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:05.010 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:05.010 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:05.010 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:05.010 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:05.010 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:05.010 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:05.010 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:05.010 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:05.010 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:05.268 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:05.268 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:05.268 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:05.268 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:05.268 07:26:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:05.526 07:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:05.526 "name": "pt2", 00:16:05.526 "aliases": [ 00:16:05.526 "0712ba67-7de7-5f71-9e8d-6532b680dd38" 00:16:05.526 ], 00:16:05.526 "product_name": "passthru", 00:16:05.526 "block_size": 512, 00:16:05.526 "num_blocks": 65536, 00:16:05.526 "uuid": "0712ba67-7de7-5f71-9e8d-6532b680dd38", 00:16:05.526 "assigned_rate_limits": { 00:16:05.526 "rw_ios_per_sec": 0, 00:16:05.526 "rw_mbytes_per_sec": 0, 00:16:05.526 "r_mbytes_per_sec": 0, 00:16:05.526 "w_mbytes_per_sec": 0 00:16:05.526 }, 00:16:05.526 "claimed": true, 00:16:05.526 "claim_type": "exclusive_write", 00:16:05.526 "zoned": false, 00:16:05.526 "supported_io_types": { 00:16:05.526 "read": true, 00:16:05.526 "write": true, 00:16:05.526 "unmap": true, 00:16:05.526 "write_zeroes": true, 00:16:05.526 "flush": true, 00:16:05.526 "reset": true, 00:16:05.526 "compare": false, 00:16:05.526 "compare_and_write": false, 00:16:05.526 "abort": true, 00:16:05.526 "nvme_admin": false, 00:16:05.526 "nvme_io": false 00:16:05.526 }, 00:16:05.526 "memory_domains": [ 00:16:05.526 { 00:16:05.526 "dma_device_id": "system", 00:16:05.526 "dma_device_type": 1 00:16:05.526 }, 00:16:05.526 { 00:16:05.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.526 "dma_device_type": 2 00:16:05.526 } 00:16:05.526 ], 00:16:05.526 "driver_specific": { 00:16:05.526 "passthru": { 00:16:05.526 "name": "pt2", 00:16:05.526 
"base_bdev_name": "malloc2" 00:16:05.526 } 00:16:05.526 } 00:16:05.526 }' 00:16:05.526 07:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:05.526 07:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:05.526 07:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:05.526 07:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:05.526 07:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:05.526 07:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:05.526 07:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:05.526 07:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:05.783 07:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:05.783 07:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:05.783 07:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:05.783 07:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:05.783 07:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:05.783 07:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:06.041 [2024-07-12 07:26:39.697248] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.041 07:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=7948e0a9-e132-405b-9ff9-dc020ac20b2e 00:16:06.041 07:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 7948e0a9-e132-405b-9ff9-dc020ac20b2e ']' 00:16:06.041 07:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:06.314 [2024-07-12 07:26:39.953056] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.314 [2024-07-12 07:26:39.953284] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.314 [2024-07-12 07:26:39.953592] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.314 [2024-07-12 07:26:39.953747] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.314 [2024-07-12 07:26:39.953827] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:16:06.314 07:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.314 07:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:16:06.586 07:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:16:06.586 07:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:16:06.586 07:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:06.586 07:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:16:06.845 07:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:06.845 07:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:07.104 07:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:07.104 07:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:07.363 07:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:16:07.363 07:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:16:07.363 07:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:16:07.363 07:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:16:07.363 07:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:07.363 07:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:07.363 07:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:07.363 07:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:07.363 07:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:07.363 07:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:07.363 07:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:07.363 07:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:07.363 07:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:16:07.622 [2024-07-12 07:26:41.249243] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:07.622 [2024-07-12 07:26:41.251903] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:07.622 [2024-07-12 07:26:41.252093] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:07.622 [2024-07-12 07:26:41.252270] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:07.622 [2024-07-12 07:26:41.252346] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:07.622 [2024-07-12 07:26:41.252591] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:16:07.622 request: 00:16:07.622 { 00:16:07.622 "name": "raid_bdev1", 00:16:07.622 "raid_level": "concat", 
00:16:07.622 "base_bdevs": [ 00:16:07.622 "malloc1", 00:16:07.622 "malloc2" 00:16:07.622 ], 00:16:07.622 "superblock": false, 00:16:07.622 "strip_size_kb": 64, 00:16:07.622 "method": "bdev_raid_create", 00:16:07.622 "req_id": 1 00:16:07.622 } 00:16:07.622 Got JSON-RPC error response 00:16:07.622 response: 00:16:07.622 { 00:16:07.622 "code": -17, 00:16:07.622 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:07.622 } 00:16:07.622 07:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:16:07.622 07:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:07.622 07:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:07.622 07:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:07.622 07:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.622 07:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:07.622 07:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:07.622 07:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:07.622 07:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:07.882 [2024-07-12 07:26:41.661331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:07.882 [2024-07-12 07:26:41.661669] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.882 [2024-07-12 07:26:41.661751] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:16:07.882 [2024-07-12 07:26:41.661864] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.882 [2024-07-12 07:26:41.664689] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.882 [2024-07-12 07:26:41.664878] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:07.882 [2024-07-12 07:26:41.665064] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:07.882 [2024-07-12 07:26:41.665211] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:07.882 pt1 00:16:07.882 07:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:16:07.882 07:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:07.882 07:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:07.882 07:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:07.882 07:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:07.882 07:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:07.882 07:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:07.882 07:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:07.882 07:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:16:07.882 07:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:07.882 07:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.882 07:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.141 07:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:08.141 "name": "raid_bdev1", 00:16:08.141 "uuid": "7948e0a9-e132-405b-9ff9-dc020ac20b2e", 00:16:08.141 "strip_size_kb": 64, 00:16:08.141 "state": "configuring", 00:16:08.141 "raid_level": "concat", 00:16:08.141 "superblock": true, 00:16:08.141 "num_base_bdevs": 2, 00:16:08.141 "num_base_bdevs_discovered": 1, 00:16:08.141 "num_base_bdevs_operational": 2, 00:16:08.141 "base_bdevs_list": [ 00:16:08.141 { 00:16:08.141 "name": "pt1", 00:16:08.141 "uuid": "9be993f7-9532-534b-8afd-4ec9d974fcf2", 00:16:08.141 "is_configured": true, 00:16:08.141 "data_offset": 2048, 00:16:08.141 "data_size": 63488 00:16:08.141 }, 00:16:08.141 { 00:16:08.141 "name": null, 00:16:08.141 "uuid": "0712ba67-7de7-5f71-9e8d-6532b680dd38", 00:16:08.141 "is_configured": false, 00:16:08.141 "data_offset": 2048, 00:16:08.141 "data_size": 63488 00:16:08.141 } 00:16:08.141 ] 00:16:08.141 }' 00:16:08.141 07:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:08.141 07:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.709 07:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:16:08.709 07:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:08.709 07:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:08.709 07:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:08.968 [2024-07-12 07:26:42.677748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:08.968 [2024-07-12 07:26:42.678105] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.968 [2024-07-12 07:26:42.678180] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:16:08.968 [2024-07-12 07:26:42.678300] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.968 [2024-07-12 07:26:42.678837] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.968 [2024-07-12 07:26:42.679011] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:08.968 [2024-07-12 07:26:42.679210] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:08.968 [2024-07-12 07:26:42.679325] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:08.968 [2024-07-12 07:26:42.679510] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:16:08.968 [2024-07-12 07:26:42.679603] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:08.968 [2024-07-12 07:26:42.679781] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:16:08.968 [2024-07-12 07:26:42.680245] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:16:08.968 [2024-07-12 07:26:42.680354] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:16:08.968 [2024-07-12 07:26:42.680536] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.968 pt2 00:16:08.968 07:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:08.968 07:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:08.968 07:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:08.968 07:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:08.968 07:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:08.968 07:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:08.968 07:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:08.968 07:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:08.968 07:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:08.968 07:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:08.968 07:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:08.968 07:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:08.968 07:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.968 07:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.227 07:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:09.227 "name": "raid_bdev1", 00:16:09.227 "uuid": "7948e0a9-e132-405b-9ff9-dc020ac20b2e", 00:16:09.227 "strip_size_kb": 64, 00:16:09.227 "state": "online", 00:16:09.227 "raid_level": "concat", 00:16:09.227 "superblock": true, 00:16:09.227 "num_base_bdevs": 2, 00:16:09.227 "num_base_bdevs_discovered": 2, 00:16:09.227 "num_base_bdevs_operational": 2, 00:16:09.227 "base_bdevs_list": [ 00:16:09.227 { 00:16:09.227 "name": "pt1", 00:16:09.227 "uuid": "9be993f7-9532-534b-8afd-4ec9d974fcf2", 00:16:09.227 "is_configured": true, 00:16:09.227 "data_offset": 2048, 00:16:09.227 "data_size": 63488 00:16:09.227 }, 00:16:09.227 { 00:16:09.227 "name": "pt2", 00:16:09.227 "uuid": "0712ba67-7de7-5f71-9e8d-6532b680dd38", 00:16:09.227 "is_configured": true, 00:16:09.227 "data_offset": 2048, 00:16:09.227 "data_size": 63488 00:16:09.227 } 00:16:09.227 ] 00:16:09.227 }' 00:16:09.227 07:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:09.227 07:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.792 07:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:09.792 07:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:09.792 07:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:09.792 07:26:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:09.792 07:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:09.792 07:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:09.792 07:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:09.792 07:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:10.050 [2024-07-12 07:26:43.786178] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:10.050 07:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:10.050 "name": "raid_bdev1", 00:16:10.050 "aliases": [ 00:16:10.050 "7948e0a9-e132-405b-9ff9-dc020ac20b2e" 00:16:10.050 ], 00:16:10.050 "product_name": "Raid Volume", 00:16:10.050 "block_size": 512, 00:16:10.050 "num_blocks": 126976, 00:16:10.050 "uuid": "7948e0a9-e132-405b-9ff9-dc020ac20b2e", 00:16:10.050 "assigned_rate_limits": { 00:16:10.050 "rw_ios_per_sec": 0, 00:16:10.050 "rw_mbytes_per_sec": 0, 00:16:10.050 "r_mbytes_per_sec": 0, 00:16:10.050 "w_mbytes_per_sec": 0 00:16:10.050 }, 00:16:10.050 "claimed": false, 00:16:10.050 "zoned": false, 00:16:10.050 "supported_io_types": { 00:16:10.050 "read": true, 00:16:10.050 "write": true, 00:16:10.050 "unmap": true, 00:16:10.050 "write_zeroes": true, 00:16:10.050 "flush": true, 00:16:10.050 "reset": true, 00:16:10.050 "compare": false, 00:16:10.050 "compare_and_write": false, 00:16:10.050 "abort": false, 00:16:10.050 "nvme_admin": false, 00:16:10.050 "nvme_io": false 00:16:10.050 }, 00:16:10.050 "memory_domains": [ 00:16:10.050 { 00:16:10.050 "dma_device_id": "system", 00:16:10.050 "dma_device_type": 1 00:16:10.050 }, 00:16:10.050 { 00:16:10.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.050 "dma_device_type": 2 00:16:10.050 }, 00:16:10.050 { 00:16:10.050 "dma_device_id": "system", 00:16:10.050 "dma_device_type": 1 00:16:10.050 }, 00:16:10.050 { 00:16:10.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.050 "dma_device_type": 2 00:16:10.050 } 00:16:10.050 ], 00:16:10.050 "driver_specific": { 00:16:10.050 "raid": { 00:16:10.050 "uuid": "7948e0a9-e132-405b-9ff9-dc020ac20b2e", 00:16:10.050 "strip_size_kb": 64, 00:16:10.050 "state": "online", 00:16:10.050 "raid_level": "concat", 00:16:10.050 "superblock": true, 00:16:10.050 "num_base_bdevs": 2, 00:16:10.050 "num_base_bdevs_discovered": 2, 00:16:10.050 "num_base_bdevs_operational": 2, 00:16:10.050 "base_bdevs_list": [ 00:16:10.050 { 00:16:10.050 "name": "pt1", 00:16:10.050 "uuid": "9be993f7-9532-534b-8afd-4ec9d974fcf2", 00:16:10.050 "is_configured": true, 00:16:10.050 "data_offset": 2048, 00:16:10.050 "data_size": 63488 00:16:10.050 }, 00:16:10.050 { 00:16:10.050 "name": "pt2", 00:16:10.050 "uuid": "0712ba67-7de7-5f71-9e8d-6532b680dd38", 00:16:10.050 "is_configured": true, 00:16:10.050 "data_offset": 2048, 00:16:10.050 "data_size": 63488 00:16:10.050 } 00:16:10.050 ] 00:16:10.050 } 00:16:10.050 } 00:16:10.050 }' 00:16:10.050 07:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:10.050 07:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:10.050 pt2' 00:16:10.050 07:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:10.050 07:26:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:10.050 07:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:10.307 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:10.307 "name": "pt1", 00:16:10.307 "aliases": [ 00:16:10.307 "9be993f7-9532-534b-8afd-4ec9d974fcf2" 00:16:10.307 ], 00:16:10.307 "product_name": "passthru", 00:16:10.307 "block_size": 512, 00:16:10.307 "num_blocks": 65536, 00:16:10.307 "uuid": "9be993f7-9532-534b-8afd-4ec9d974fcf2", 00:16:10.307 "assigned_rate_limits": { 00:16:10.307 "rw_ios_per_sec": 0, 00:16:10.307 "rw_mbytes_per_sec": 0, 00:16:10.307 "r_mbytes_per_sec": 0, 00:16:10.307 "w_mbytes_per_sec": 0 00:16:10.307 }, 00:16:10.307 "claimed": true, 00:16:10.307 "claim_type": "exclusive_write", 00:16:10.307 "zoned": false, 00:16:10.307 "supported_io_types": { 00:16:10.307 "read": true, 00:16:10.307 "write": true, 00:16:10.307 "unmap": true, 00:16:10.307 "write_zeroes": true, 00:16:10.307 "flush": true, 00:16:10.307 "reset": true, 00:16:10.307 "compare": false, 00:16:10.307 "compare_and_write": false, 00:16:10.307 "abort": true, 00:16:10.307 "nvme_admin": false, 00:16:10.307 "nvme_io": false 00:16:10.307 }, 00:16:10.307 "memory_domains": [ 00:16:10.307 { 00:16:10.307 "dma_device_id": "system", 00:16:10.307 "dma_device_type": 1 00:16:10.307 }, 00:16:10.307 { 00:16:10.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.307 "dma_device_type": 2 00:16:10.307 } 00:16:10.307 ], 00:16:10.307 "driver_specific": { 00:16:10.307 "passthru": { 00:16:10.307 "name": "pt1", 00:16:10.307 "base_bdev_name": "malloc1" 00:16:10.307 } 00:16:10.307 } 00:16:10.307 }' 00:16:10.307 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:10.307 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:10.307 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:10.307 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:10.307 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:10.564 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:10.564 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:10.564 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:10.564 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:10.564 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:10.564 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:10.564 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:10.564 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:10.564 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:10.564 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:10.821 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:10.821 "name": "pt2", 00:16:10.821 "aliases": [ 00:16:10.821 "0712ba67-7de7-5f71-9e8d-6532b680dd38" 
00:16:10.821 ], 00:16:10.821 "product_name": "passthru", 00:16:10.821 "block_size": 512, 00:16:10.821 "num_blocks": 65536, 00:16:10.821 "uuid": "0712ba67-7de7-5f71-9e8d-6532b680dd38", 00:16:10.821 "assigned_rate_limits": { 00:16:10.821 "rw_ios_per_sec": 0, 00:16:10.821 "rw_mbytes_per_sec": 0, 00:16:10.821 "r_mbytes_per_sec": 0, 00:16:10.821 "w_mbytes_per_sec": 0 00:16:10.821 }, 00:16:10.821 "claimed": true, 00:16:10.821 "claim_type": "exclusive_write", 00:16:10.821 "zoned": false, 00:16:10.821 "supported_io_types": { 00:16:10.821 "read": true, 00:16:10.821 "write": true, 00:16:10.821 "unmap": true, 00:16:10.821 "write_zeroes": true, 00:16:10.821 "flush": true, 00:16:10.821 "reset": true, 00:16:10.821 "compare": false, 00:16:10.821 "compare_and_write": false, 00:16:10.821 "abort": true, 00:16:10.821 "nvme_admin": false, 00:16:10.821 "nvme_io": false 00:16:10.821 }, 00:16:10.821 "memory_domains": [ 00:16:10.821 { 00:16:10.821 "dma_device_id": "system", 00:16:10.821 "dma_device_type": 1 00:16:10.821 }, 00:16:10.821 { 00:16:10.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.821 "dma_device_type": 2 00:16:10.821 } 00:16:10.821 ], 00:16:10.821 "driver_specific": { 00:16:10.821 "passthru": { 00:16:10.821 "name": "pt2", 00:16:10.821 "base_bdev_name": "malloc2" 00:16:10.821 } 00:16:10.821 } 00:16:10.821 }' 00:16:10.821 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:11.079 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:11.079 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:11.079 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:11.079 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:11.079 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:11.079 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:11.079 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:11.079 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:11.079 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:11.337 07:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:11.337 07:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:11.337 07:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:11.337 07:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:11.595 [2024-07-12 07:26:45.258473] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.595 07:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 7948e0a9-e132-405b-9ff9-dc020ac20b2e '!=' 7948e0a9-e132-405b-9ff9-dc020ac20b2e ']' 00:16:11.595 07:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:16:11.595 07:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:11.595 07:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:11.595 07:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 133209 00:16:11.595 07:26:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@946 -- # '[' -z 133209 ']' 00:16:11.595 07:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 133209 00:16:11.595 07:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:16:11.596 07:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:11.596 07:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 133209 00:16:11.596 07:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:11.596 07:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:11.596 07:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 133209' 00:16:11.596 killing process with pid 133209 00:16:11.596 07:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 133209 00:16:11.596 [2024-07-12 07:26:45.314939] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:11.596 07:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 133209 00:16:11.596 [2024-07-12 07:26:45.315244] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.596 [2024-07-12 07:26:45.315459] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:11.596 [2024-07-12 07:26:45.315548] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:16:11.596 [2024-07-12 07:26:45.358003] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:12.162 07:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:16:12.162 00:16:12.162 real 0m10.731s 00:16:12.162 user 0m18.992s 00:16:12.162 sys 0m1.871s 00:16:12.162 07:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:12.162 ************************************ 00:16:12.162 END TEST raid_superblock_test 00:16:12.162 07:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.162 ************************************ 00:16:12.162 07:26:45 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:16:12.162 07:26:45 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:16:12.162 07:26:45 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:12.162 07:26:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:12.162 ************************************ 00:16:12.162 START TEST raid_read_error_test 00:16:12.162 ************************************ 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 2 read 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:16:12.162 
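Worth a note on the NOT wrapper that drove the duplicate-create check earlier (the bdev_raid_create over malloc1/malloc2 that had to fail with -17 "File exists"): the helper from autotest_common.sh runs the command, records its exit status, and reports success only if the command failed, while still letting signal deaths (status above 128) abort the test. A stripped-down sketch of that idea — not the real helper, which additionally resolves the command through valid_exec_arg as traced above:

# Succeed iff the wrapped command fails; using the $RPC shorthand from the earlier sketch.
NOT() {
    local es=0
    "$@" || es=$?
    # Status above 128 means the command died on a signal; that is a real
    # failure, not the expected error, so propagate it.
    (( es > 128 )) && return "$es"
    # Success for the wrapper means the wrapped command returned nonzero.
    (( es != 0 ))
}
# This passes precisely because the second create is rejected with "File exists".
NOT $RPC bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1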
07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.f5OXd6K2EJ 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=133574 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 133574 /var/tmp/spdk-raid.sock 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 133574 ']' 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:12.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:12.162 07:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.162 [2024-07-12 07:26:45.927617] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
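The stack being assembled for this error test differs from the superblock test by one layer: each raid leg is malloc -> error bdev -> passthru, and the error bdev (registered under an EE_ prefix) is the injection point once bdevperf is driving I/O. A sketch of one leg plus the injection call, using the $RPC shorthand from the first sketch and the same commands this run issues below (the second leg is identical with BaseBdev2):

# One raid leg: malloc base, error wrapper, passthru on top.
$RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc
$RPC bdev_error_create BaseBdev1_malloc            # registers EE_BaseBdev1_malloc
$RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
# ... bdev_raid_create over BaseBdev1/BaseBdev2, then, mid-workload:
$RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure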
00:16:12.162 [2024-07-12 07:26:45.928238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133574 ] 00:16:12.420 [2024-07-12 07:26:46.084313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.420 [2024-07-12 07:26:46.172207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.420 [2024-07-12 07:26:46.252217] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.355 07:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:13.355 07:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:16:13.355 07:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:13.355 07:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:13.355 BaseBdev1_malloc 00:16:13.355 07:26:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:13.613 true 00:16:13.613 07:26:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:13.871 [2024-07-12 07:26:47.587245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:13.871 [2024-07-12 07:26:47.587682] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.871 [2024-07-12 07:26:47.587850] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:16:13.871 [2024-07-12 07:26:47.587984] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.871 [2024-07-12 07:26:47.591236] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.871 [2024-07-12 07:26:47.591441] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:13.871 BaseBdev1 00:16:13.871 07:26:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:13.871 07:26:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:14.129 BaseBdev2_malloc 00:16:14.129 07:26:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:14.387 true 00:16:14.387 07:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:14.645 [2024-07-12 07:26:48.308083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:14.645 [2024-07-12 07:26:48.308455] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.645 [2024-07-12 07:26:48.308544] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:14.645 [2024-07-12 07:26:48.308683] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.645 [2024-07-12 07:26:48.311605] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.645 [2024-07-12 07:26:48.311805] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:14.645 BaseBdev2 00:16:14.645 07:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:14.645 [2024-07-12 07:26:48.516331] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.645 [2024-07-12 07:26:48.519221] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:14.645 [2024-07-12 07:26:48.519658] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:14.645 [2024-07-12 07:26:48.519772] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:14.645 [2024-07-12 07:26:48.519978] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:16:14.645 [2024-07-12 07:26:48.520451] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:14.645 [2024-07-12 07:26:48.520564] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280 00:16:14.645 [2024-07-12 07:26:48.520893] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.903 07:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:14.903 07:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:14.903 07:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:14.903 07:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:14.903 07:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:14.903 07:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:14.903 07:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:14.903 07:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:14.903 07:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:14.903 07:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:14.903 07:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.903 07:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.159 07:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:15.159 "name": "raid_bdev1", 00:16:15.159 "uuid": "6f2d7368-252f-4047-8905-8dfd874280d7", 00:16:15.159 "strip_size_kb": 64, 00:16:15.159 "state": "online", 00:16:15.159 "raid_level": "concat", 00:16:15.159 "superblock": true, 00:16:15.159 "num_base_bdevs": 2, 00:16:15.159 "num_base_bdevs_discovered": 2, 00:16:15.159 "num_base_bdevs_operational": 2, 00:16:15.159 "base_bdevs_list": [ 00:16:15.159 { 00:16:15.159 "name": "BaseBdev1", 00:16:15.159 "uuid": 
"e223c495-aaa4-5768-878a-87ac9dc3de66", 00:16:15.159 "is_configured": true, 00:16:15.159 "data_offset": 2048, 00:16:15.159 "data_size": 63488 00:16:15.159 }, 00:16:15.159 { 00:16:15.159 "name": "BaseBdev2", 00:16:15.159 "uuid": "4aabda79-b1d3-578f-bc7b-6fe67e2e6ef5", 00:16:15.159 "is_configured": true, 00:16:15.159 "data_offset": 2048, 00:16:15.159 "data_size": 63488 00:16:15.159 } 00:16:15.159 ] 00:16:15.159 }' 00:16:15.159 07:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:15.159 07:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.723 07:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:15.723 07:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:15.723 [2024-07-12 07:26:49.473555] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:16:16.656 07:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:16.915 07:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:16.915 07:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:16:16.915 07:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:16:16.915 07:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:16.915 07:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:16.915 07:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:16.915 07:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:16.915 07:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:16.915 07:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:16.915 07:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:16.915 07:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:16.915 07:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:16.915 07:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:16.915 07:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.915 07:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.173 07:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:17.173 "name": "raid_bdev1", 00:16:17.173 "uuid": "6f2d7368-252f-4047-8905-8dfd874280d7", 00:16:17.173 "strip_size_kb": 64, 00:16:17.173 "state": "online", 00:16:17.173 "raid_level": "concat", 00:16:17.173 "superblock": true, 00:16:17.173 "num_base_bdevs": 2, 00:16:17.173 "num_base_bdevs_discovered": 2, 00:16:17.173 "num_base_bdevs_operational": 2, 00:16:17.173 "base_bdevs_list": [ 00:16:17.173 { 00:16:17.173 "name": "BaseBdev1", 00:16:17.173 "uuid": 
"e223c495-aaa4-5768-878a-87ac9dc3de66", 00:16:17.173 "is_configured": true, 00:16:17.173 "data_offset": 2048, 00:16:17.173 "data_size": 63488 00:16:17.173 }, 00:16:17.173 { 00:16:17.173 "name": "BaseBdev2", 00:16:17.173 "uuid": "4aabda79-b1d3-578f-bc7b-6fe67e2e6ef5", 00:16:17.173 "is_configured": true, 00:16:17.173 "data_offset": 2048, 00:16:17.173 "data_size": 63488 00:16:17.173 } 00:16:17.173 ] 00:16:17.173 }' 00:16:17.173 07:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:17.173 07:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.739 07:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:17.997 [2024-07-12 07:26:51.675100] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:17.997 [2024-07-12 07:26:51.675412] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:17.997 [2024-07-12 07:26:51.678227] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:17.997 [2024-07-12 07:26:51.678412] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.997 [2024-07-12 07:26:51.678487] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:17.997 [2024-07-12 07:26:51.678570] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline 00:16:17.997 0 00:16:17.997 07:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 133574 00:16:17.997 07:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 133574 ']' 00:16:17.997 07:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 133574 00:16:17.997 07:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:16:17.997 07:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:17.997 07:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 133574 00:16:17.997 07:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:17.997 07:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:17.997 07:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 133574' 00:16:17.997 killing process with pid 133574 00:16:17.997 07:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 133574 00:16:17.997 07:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 133574 00:16:17.997 [2024-07-12 07:26:51.731596] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:17.997 [2024-07-12 07:26:51.760996] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:18.563 07:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.f5OXd6K2EJ 00:16:18.563 07:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:18.563 07:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:18.563 07:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:16:18.563 07:26:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:16:18.563 07:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:18.563 07:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:18.563 07:26:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:16:18.563 00:16:18.563 real 0m6.368s 00:16:18.563 user 0m9.664s 00:16:18.563 sys 0m1.158s 00:16:18.563 07:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:18.563 07:26:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.563 ************************************ 00:16:18.563 END TEST raid_read_error_test 00:16:18.563 ************************************ 00:16:18.563 07:26:52 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:16:18.563 07:26:52 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:16:18.563 07:26:52 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:18.563 07:26:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:18.563 ************************************ 00:16:18.563 START TEST raid_write_error_test 00:16:18.563 ************************************ 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 2 write 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:16:18.563 07:26:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.gJa8MHmT03 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=133759 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 133759 /var/tmp/spdk-raid.sock 00:16:18.563 07:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:18.564 07:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 133759 ']' 00:16:18.564 07:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:18.564 07:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:18.564 07:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:18.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:18.564 07:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:18.564 07:26:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.564 [2024-07-12 07:26:52.336134] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:18.564 [2024-07-12 07:26:52.337134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133759 ] 00:16:18.822 [2024-07-12 07:26:52.483215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.822 [2024-07-12 07:26:52.578318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.822 [2024-07-12 07:26:52.659936] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:19.388 07:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:19.388 07:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:16:19.388 07:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:19.388 07:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:19.644 BaseBdev1_malloc 00:16:19.902 07:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:19.902 true 00:16:19.902 07:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:20.159 [2024-07-12 07:26:54.002675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:20.159 [2024-07-12 07:26:54.003130] vbdev_passthru.c: 636:vbdev_passthru_register: 
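The bdevperf launch at @807 starts the app in wait-for-RPC mode (-z) so the raid can be assembled before any I/O runs; a sketch of that startup, assuming the repo layout shown above (the redirect into the mktemp log is an assumption here — the harness captures output somewhere that @843 can grep later):

    bdevperf_log=$(mktemp -p /raidtest)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" &
    raid_pid=$!
    waitforlisten $raid_pid /var/tmp/spdk-raid.sock   # poll until the UNIX socket answers
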
*NOTICE*: base bdev opened 00:16:20.159 [2024-07-12 07:26:54.003240] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:16:20.159 [2024-07-12 07:26:54.003392] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.159 [2024-07-12 07:26:54.006553] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.159 [2024-07-12 07:26:54.006780] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:20.159 BaseBdev1 00:16:20.159 07:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:20.159 07:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:20.417 BaseBdev2_malloc 00:16:20.417 07:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:20.675 true 00:16:20.675 07:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:20.933 [2024-07-12 07:26:54.731511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:20.933 [2024-07-12 07:26:54.731941] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.933 [2024-07-12 07:26:54.732038] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:20.933 [2024-07-12 07:26:54.732315] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.933 [2024-07-12 07:26:54.735288] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.933 [2024-07-12 07:26:54.735518] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:20.933 BaseBdev2 00:16:20.933 07:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:21.192 [2024-07-12 07:26:54.948101] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:21.192 [2024-07-12 07:26:54.951084] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:21.192 [2024-07-12 07:26:54.951638] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:21.192 [2024-07-12 07:26:54.951764] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:21.192 [2024-07-12 07:26:54.951986] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:16:21.192 [2024-07-12 07:26:54.952563] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:21.192 [2024-07-12 07:26:54.952674] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280 00:16:21.192 [2024-07-12 07:26:54.953009] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.192 07:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:21.192 07:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- 
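Each base bdev in these error tests is a three-layer stack: a malloc bdev, an error-injection bdev on top of it (exposed as EE_<name>), and a passthru bdev that gives the stack its final name. The RPC sequence replayed above (@812-@819), condensed into a sketch:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for bdev in BaseBdev1 BaseBdev2; do
        $rpc bdev_malloc_create 32 512 -b ${bdev}_malloc           # 32 MiB, 512 B blocks
        $rpc bdev_error_create ${bdev}_malloc                      # exposes EE_${bdev}_malloc
        $rpc bdev_passthru_create -b EE_${bdev}_malloc -p ${bdev}
    done
    # 64 KiB strip, concat level, with an on-disk superblock (-s)
    $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
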
# local raid_bdev_name=raid_bdev1 00:16:21.192 07:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:21.192 07:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:21.192 07:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:21.192 07:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:21.192 07:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:21.192 07:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:21.192 07:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:21.192 07:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:21.192 07:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.192 07:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.451 07:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:21.452 "name": "raid_bdev1", 00:16:21.452 "uuid": "3faae1e8-eff5-400f-aa2d-4f3bd80c8555", 00:16:21.452 "strip_size_kb": 64, 00:16:21.452 "state": "online", 00:16:21.452 "raid_level": "concat", 00:16:21.452 "superblock": true, 00:16:21.452 "num_base_bdevs": 2, 00:16:21.452 "num_base_bdevs_discovered": 2, 00:16:21.452 "num_base_bdevs_operational": 2, 00:16:21.452 "base_bdevs_list": [ 00:16:21.452 { 00:16:21.452 "name": "BaseBdev1", 00:16:21.452 "uuid": "0932cdcd-e6f2-58b5-8c62-89f6881e8b25", 00:16:21.452 "is_configured": true, 00:16:21.452 "data_offset": 2048, 00:16:21.452 "data_size": 63488 00:16:21.452 }, 00:16:21.452 { 00:16:21.452 "name": "BaseBdev2", 00:16:21.452 "uuid": "3e27cdbc-4cde-5ba9-ad39-a8520449d89c", 00:16:21.452 "is_configured": true, 00:16:21.452 "data_offset": 2048, 00:16:21.452 "data_size": 63488 00:16:21.452 } 00:16:21.452 ] 00:16:21.452 }' 00:16:21.452 07:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:21.452 07:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.016 07:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:22.016 07:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:22.273 [2024-07-12 07:26:55.957688] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:16:23.205 07:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:23.462 07:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:23.462 07:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:16:23.462 07:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:16:23.462 07:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:23.462 07:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # 
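The write-error pass itself (@823-@827 above) starts I/O first and only then arms the error bdev, so writes begin failing mid-run; roughly:

    # kick off the bdevperf job over RPC and let it ramp up...
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/spdk-raid.sock perform_tests &
    sleep 1
    # ...then make every subsequent write to BaseBdev1's error bdev fail
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_error_inject_error EE_BaseBdev1_malloc write failure
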
local raid_bdev_name=raid_bdev1 00:16:23.462 07:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:23.462 07:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:23.462 07:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:23.462 07:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:23.462 07:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:23.462 07:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:23.462 07:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:23.462 07:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:23.462 07:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.462 07:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.718 07:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:23.718 "name": "raid_bdev1", 00:16:23.718 "uuid": "3faae1e8-eff5-400f-aa2d-4f3bd80c8555", 00:16:23.718 "strip_size_kb": 64, 00:16:23.718 "state": "online", 00:16:23.718 "raid_level": "concat", 00:16:23.718 "superblock": true, 00:16:23.718 "num_base_bdevs": 2, 00:16:23.718 "num_base_bdevs_discovered": 2, 00:16:23.718 "num_base_bdevs_operational": 2, 00:16:23.718 "base_bdevs_list": [ 00:16:23.718 { 00:16:23.718 "name": "BaseBdev1", 00:16:23.718 "uuid": "0932cdcd-e6f2-58b5-8c62-89f6881e8b25", 00:16:23.718 "is_configured": true, 00:16:23.718 "data_offset": 2048, 00:16:23.718 "data_size": 63488 00:16:23.718 }, 00:16:23.718 { 00:16:23.718 "name": "BaseBdev2", 00:16:23.718 "uuid": "3e27cdbc-4cde-5ba9-ad39-a8520449d89c", 00:16:23.718 "is_configured": true, 00:16:23.718 "data_offset": 2048, 00:16:23.718 "data_size": 63488 00:16:23.718 } 00:16:23.718 ] 00:16:23.718 }' 00:16:23.718 07:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:23.718 07:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.281 07:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:24.540 [2024-07-12 07:26:58.288130] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:24.540 [2024-07-12 07:26:58.288431] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:24.540 [2024-07-12 07:26:58.291124] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.540 [2024-07-12 07:26:58.291307] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.540 [2024-07-12 07:26:58.291376] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:24.540 [2024-07-12 07:26:58.291453] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline 00:16:24.540 0 00:16:24.540 07:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 133759 00:16:24.540 07:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- 
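verify_raid_bdev_state (@116-@126, invoked twice above as "raid_bdev1 online concat 64 2") fetches the raid's JSON once and asserts on its fields; a sketch of the checks implied by those arguments (the helper's exact assertions sit outside this excerpt):

    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .state         <<< "$info") == online ]]
    [[ $(jq -r .raid_level    <<< "$info") == concat ]]
    [[ $(jq -r .strip_size_kb <<< "$info") == 64 ]]
    [[ $(jq -r .num_base_bdevs_discovered <<< "$info") == 2 ]]
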
# '[' -z 133759 ']' 00:16:24.540 07:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 133759 00:16:24.540 07:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:16:24.540 07:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:24.540 07:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 133759 00:16:24.540 07:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:24.540 07:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:24.540 07:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 133759' 00:16:24.540 killing process with pid 133759 00:16:24.540 07:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 133759 00:16:24.540 [2024-07-12 07:26:58.359004] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:24.540 07:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 133759 00:16:24.540 [2024-07-12 07:26:58.387991] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:25.106 07:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:25.106 07:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.gJa8MHmT03 00:16:25.106 07:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:25.106 07:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:16:25.106 07:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:16:25.106 07:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:25.106 07:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:25.106 07:26:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:16:25.106 00:16:25.106 real 0m6.548s 00:16:25.106 user 0m10.089s 00:16:25.106 sys 0m1.110s 00:16:25.106 ************************************ 00:16:25.106 END TEST raid_write_error_test 00:16:25.106 ************************************ 00:16:25.106 07:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:25.106 07:26:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.106 07:26:58 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:16:25.106 07:26:58 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:16:25.106 07:26:58 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:16:25.106 07:26:58 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:25.106 07:26:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:25.106 ************************************ 00:16:25.106 START TEST raid_state_function_test 00:16:25.106 ************************************ 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 false 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:25.106 
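killprocess (autotest_common.sh@946-@970, traced above for pid 133759) verifies the pid is alive and looks like an SPDK reactor before terminating it; approximately the following sketch, with the sudo special-casing visible at @956 elided:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                               # still running?
        [[ $(uname) == Linux ]] && ps --no-headers -o comm= "$pid"   # e.g. reactor_0
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                                  # reap, propagate exit status
    }
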
07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=133942 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 133942' 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:25.106 Process raid pid: 133942 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 133942 /var/tmp/spdk-raid.sock 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 133942 ']' 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:25.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
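The state-function test drives a bare bdev_svc app rather than bdevperf, since it only exercises RPCs; its startup (@243-@246 above), sketched:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    echo "Process raid pid: $raid_pid"
    waitforlisten $raid_pid /var/tmp/spdk-raid.sock
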
00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:25.106 07:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.106 [2024-07-12 07:26:58.958558] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:25.106 [2024-07-12 07:26:58.959048] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.365 [2024-07-12 07:26:59.102335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.365 [2024-07-12 07:26:59.196523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.623 [2024-07-12 07:26:59.277936] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.190 07:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:26.190 07:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:16:26.190 07:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:26.190 [2024-07-12 07:27:00.055260] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:26.190 [2024-07-12 07:27:00.055716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:26.190 [2024-07-12 07:27:00.055847] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:26.190 [2024-07-12 07:27:00.055928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:26.449 07:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:26.449 07:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:26.449 07:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:26.449 07:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:26.449 07:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:26.449 07:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:26.449 07:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:26.449 07:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:26.449 07:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:26.449 07:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:26.449 07:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.449 07:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.449 07:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:26.449 "name": "Existed_Raid", 00:16:26.449 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:26.449 "strip_size_kb": 0, 00:16:26.449 "state": "configuring", 00:16:26.449 "raid_level": "raid1", 00:16:26.449 "superblock": false, 00:16:26.449 "num_base_bdevs": 2, 00:16:26.449 "num_base_bdevs_discovered": 0, 00:16:26.449 "num_base_bdevs_operational": 2, 00:16:26.449 "base_bdevs_list": [ 00:16:26.449 { 00:16:26.449 "name": "BaseBdev1", 00:16:26.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.449 "is_configured": false, 00:16:26.449 "data_offset": 0, 00:16:26.449 "data_size": 0 00:16:26.449 }, 00:16:26.449 { 00:16:26.449 "name": "BaseBdev2", 00:16:26.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.449 "is_configured": false, 00:16:26.449 "data_offset": 0, 00:16:26.449 "data_size": 0 00:16:26.449 } 00:16:26.449 ] 00:16:26.449 }' 00:16:26.449 07:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:26.449 07:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.017 07:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:27.276 [2024-07-12 07:27:01.143344] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:27.276 [2024-07-12 07:27:01.143633] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:16:27.534 07:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:27.534 [2024-07-12 07:27:01.331368] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:27.534 [2024-07-12 07:27:01.331730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:27.534 [2024-07-12 07:27:01.331824] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:27.534 [2024-07-12 07:27:01.331945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:27.534 07:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:27.793 [2024-07-12 07:27:01.599692] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:27.793 BaseBdev1 00:16:27.793 07:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:27.793 07:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:16:27.793 07:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:27.793 07:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:27.793 07:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:27.793 07:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:27.793 07:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:28.052 07:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:28.311 [ 00:16:28.311 { 00:16:28.311 "name": "BaseBdev1", 00:16:28.311 "aliases": [ 00:16:28.311 "d3262da0-82b8-47b2-8148-f1ecc4256d8a" 00:16:28.311 ], 00:16:28.311 "product_name": "Malloc disk", 00:16:28.311 "block_size": 512, 00:16:28.311 "num_blocks": 65536, 00:16:28.311 "uuid": "d3262da0-82b8-47b2-8148-f1ecc4256d8a", 00:16:28.311 "assigned_rate_limits": { 00:16:28.311 "rw_ios_per_sec": 0, 00:16:28.311 "rw_mbytes_per_sec": 0, 00:16:28.311 "r_mbytes_per_sec": 0, 00:16:28.311 "w_mbytes_per_sec": 0 00:16:28.311 }, 00:16:28.311 "claimed": true, 00:16:28.311 "claim_type": "exclusive_write", 00:16:28.311 "zoned": false, 00:16:28.311 "supported_io_types": { 00:16:28.311 "read": true, 00:16:28.311 "write": true, 00:16:28.311 "unmap": true, 00:16:28.311 "write_zeroes": true, 00:16:28.311 "flush": true, 00:16:28.311 "reset": true, 00:16:28.311 "compare": false, 00:16:28.311 "compare_and_write": false, 00:16:28.311 "abort": true, 00:16:28.311 "nvme_admin": false, 00:16:28.311 "nvme_io": false 00:16:28.311 }, 00:16:28.312 "memory_domains": [ 00:16:28.312 { 00:16:28.312 "dma_device_id": "system", 00:16:28.312 "dma_device_type": 1 00:16:28.312 }, 00:16:28.312 { 00:16:28.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.312 "dma_device_type": 2 00:16:28.312 } 00:16:28.312 ], 00:16:28.312 "driver_specific": {} 00:16:28.312 } 00:16:28.312 ] 00:16:28.312 07:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:28.312 07:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:28.312 07:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:28.312 07:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:28.312 07:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:28.312 07:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:28.312 07:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:28.312 07:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:28.312 07:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:28.312 07:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:28.312 07:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:28.312 07:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.312 07:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.569 07:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:28.569 "name": "Existed_Raid", 00:16:28.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.569 "strip_size_kb": 0, 00:16:28.569 "state": "configuring", 00:16:28.569 "raid_level": "raid1", 00:16:28.569 "superblock": false, 00:16:28.569 "num_base_bdevs": 2, 00:16:28.569 "num_base_bdevs_discovered": 1, 00:16:28.569 "num_base_bdevs_operational": 2, 00:16:28.569 "base_bdevs_list": [ 
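Note the "claimed": true / "claim_type": "exclusive_write" fields in the BaseBdev1 descriptor above: the raid takes an exclusive write claim on each base bdev as soon as it adopts one, even while still configuring. One way to assert that, as a sketch:

    claim=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_get_bdevs -b BaseBdev1 | jq -r '.[0].claim_type')
    [[ $claim == exclusive_write ]]
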
00:16:28.569 { 00:16:28.569 "name": "BaseBdev1", 00:16:28.569 "uuid": "d3262da0-82b8-47b2-8148-f1ecc4256d8a", 00:16:28.569 "is_configured": true, 00:16:28.569 "data_offset": 0, 00:16:28.569 "data_size": 65536 00:16:28.569 }, 00:16:28.569 { 00:16:28.569 "name": "BaseBdev2", 00:16:28.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.569 "is_configured": false, 00:16:28.569 "data_offset": 0, 00:16:28.569 "data_size": 0 00:16:28.569 } 00:16:28.569 ] 00:16:28.569 }' 00:16:28.569 07:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:28.569 07:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.136 07:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:29.395 [2024-07-12 07:27:03.108047] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:29.395 [2024-07-12 07:27:03.108358] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:16:29.395 07:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:29.654 [2024-07-12 07:27:03.300157] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.654 [2024-07-12 07:27:03.302837] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.654 [2024-07-12 07:27:03.303033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.654 07:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:29.654 07:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:29.654 07:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:29.654 07:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:29.654 07:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:29.654 07:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:29.654 07:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:29.654 07:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:29.654 07:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:29.654 07:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:29.654 07:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:29.654 07:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:29.654 07:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.654 07:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.654 07:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:29.654 
"name": "Existed_Raid", 00:16:29.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.654 "strip_size_kb": 0, 00:16:29.654 "state": "configuring", 00:16:29.654 "raid_level": "raid1", 00:16:29.654 "superblock": false, 00:16:29.654 "num_base_bdevs": 2, 00:16:29.654 "num_base_bdevs_discovered": 1, 00:16:29.654 "num_base_bdevs_operational": 2, 00:16:29.654 "base_bdevs_list": [ 00:16:29.654 { 00:16:29.654 "name": "BaseBdev1", 00:16:29.654 "uuid": "d3262da0-82b8-47b2-8148-f1ecc4256d8a", 00:16:29.654 "is_configured": true, 00:16:29.654 "data_offset": 0, 00:16:29.654 "data_size": 65536 00:16:29.654 }, 00:16:29.654 { 00:16:29.654 "name": "BaseBdev2", 00:16:29.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.654 "is_configured": false, 00:16:29.654 "data_offset": 0, 00:16:29.654 "data_size": 0 00:16:29.654 } 00:16:29.654 ] 00:16:29.654 }' 00:16:29.654 07:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:29.654 07:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.587 07:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:30.587 [2024-07-12 07:27:04.343629] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:30.587 [2024-07-12 07:27:04.344006] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:16:30.587 [2024-07-12 07:27:04.344083] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:30.587 [2024-07-12 07:27:04.344529] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:16:30.587 [2024-07-12 07:27:04.345478] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:16:30.587 [2024-07-12 07:27:04.345676] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:16:30.587 [2024-07-12 07:27:04.346251] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.587 BaseBdev2 00:16:30.587 07:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:30.587 07:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:16:30.587 07:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:30.587 07:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:30.587 07:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:30.587 07:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:30.587 07:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:30.845 07:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:31.103 [ 00:16:31.103 { 00:16:31.103 "name": "BaseBdev2", 00:16:31.103 "aliases": [ 00:16:31.103 "72e3a165-68c2-4a88-aa8b-d17cde8fb00e" 00:16:31.103 ], 00:16:31.103 "product_name": "Malloc disk", 00:16:31.103 "block_size": 512, 00:16:31.103 "num_blocks": 65536, 00:16:31.103 "uuid": 
"72e3a165-68c2-4a88-aa8b-d17cde8fb00e", 00:16:31.103 "assigned_rate_limits": { 00:16:31.103 "rw_ios_per_sec": 0, 00:16:31.103 "rw_mbytes_per_sec": 0, 00:16:31.103 "r_mbytes_per_sec": 0, 00:16:31.103 "w_mbytes_per_sec": 0 00:16:31.103 }, 00:16:31.103 "claimed": true, 00:16:31.103 "claim_type": "exclusive_write", 00:16:31.103 "zoned": false, 00:16:31.103 "supported_io_types": { 00:16:31.103 "read": true, 00:16:31.103 "write": true, 00:16:31.103 "unmap": true, 00:16:31.103 "write_zeroes": true, 00:16:31.103 "flush": true, 00:16:31.103 "reset": true, 00:16:31.103 "compare": false, 00:16:31.103 "compare_and_write": false, 00:16:31.104 "abort": true, 00:16:31.104 "nvme_admin": false, 00:16:31.104 "nvme_io": false 00:16:31.104 }, 00:16:31.104 "memory_domains": [ 00:16:31.104 { 00:16:31.104 "dma_device_id": "system", 00:16:31.104 "dma_device_type": 1 00:16:31.104 }, 00:16:31.104 { 00:16:31.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.104 "dma_device_type": 2 00:16:31.104 } 00:16:31.104 ], 00:16:31.104 "driver_specific": {} 00:16:31.104 } 00:16:31.104 ] 00:16:31.104 07:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:31.104 07:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:31.104 07:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:31.104 07:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:31.104 07:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:31.104 07:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:31.104 07:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:31.104 07:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:31.104 07:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:31.104 07:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:31.104 07:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:31.104 07:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:31.104 07:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:31.104 07:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.104 07:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.362 07:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:31.362 "name": "Existed_Raid", 00:16:31.362 "uuid": "790ddd72-771d-4ec1-ad5c-21f8c6e8cbe6", 00:16:31.362 "strip_size_kb": 0, 00:16:31.362 "state": "online", 00:16:31.362 "raid_level": "raid1", 00:16:31.362 "superblock": false, 00:16:31.362 "num_base_bdevs": 2, 00:16:31.362 "num_base_bdevs_discovered": 2, 00:16:31.362 "num_base_bdevs_operational": 2, 00:16:31.362 "base_bdevs_list": [ 00:16:31.362 { 00:16:31.362 "name": "BaseBdev1", 00:16:31.362 "uuid": "d3262da0-82b8-47b2-8148-f1ecc4256d8a", 00:16:31.362 "is_configured": true, 00:16:31.362 "data_offset": 0, 00:16:31.362 "data_size": 65536 
00:16:31.362 }, 00:16:31.362 { 00:16:31.362 "name": "BaseBdev2", 00:16:31.362 "uuid": "72e3a165-68c2-4a88-aa8b-d17cde8fb00e", 00:16:31.362 "is_configured": true, 00:16:31.362 "data_offset": 0, 00:16:31.362 "data_size": 65536 00:16:31.362 } 00:16:31.362 ] 00:16:31.362 }' 00:16:31.362 07:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:31.362 07:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.929 07:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:31.929 07:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:31.929 07:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:31.929 07:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:31.929 07:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:31.929 07:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:31.929 07:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:31.929 07:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:31.929 [2024-07-12 07:27:05.796220] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.188 07:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:32.188 "name": "Existed_Raid", 00:16:32.188 "aliases": [ 00:16:32.188 "790ddd72-771d-4ec1-ad5c-21f8c6e8cbe6" 00:16:32.188 ], 00:16:32.188 "product_name": "Raid Volume", 00:16:32.188 "block_size": 512, 00:16:32.188 "num_blocks": 65536, 00:16:32.188 "uuid": "790ddd72-771d-4ec1-ad5c-21f8c6e8cbe6", 00:16:32.188 "assigned_rate_limits": { 00:16:32.188 "rw_ios_per_sec": 0, 00:16:32.188 "rw_mbytes_per_sec": 0, 00:16:32.188 "r_mbytes_per_sec": 0, 00:16:32.188 "w_mbytes_per_sec": 0 00:16:32.188 }, 00:16:32.188 "claimed": false, 00:16:32.188 "zoned": false, 00:16:32.188 "supported_io_types": { 00:16:32.188 "read": true, 00:16:32.188 "write": true, 00:16:32.188 "unmap": false, 00:16:32.188 "write_zeroes": true, 00:16:32.188 "flush": false, 00:16:32.188 "reset": true, 00:16:32.188 "compare": false, 00:16:32.188 "compare_and_write": false, 00:16:32.188 "abort": false, 00:16:32.188 "nvme_admin": false, 00:16:32.188 "nvme_io": false 00:16:32.188 }, 00:16:32.188 "memory_domains": [ 00:16:32.188 { 00:16:32.188 "dma_device_id": "system", 00:16:32.188 "dma_device_type": 1 00:16:32.188 }, 00:16:32.188 { 00:16:32.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.188 "dma_device_type": 2 00:16:32.188 }, 00:16:32.188 { 00:16:32.188 "dma_device_id": "system", 00:16:32.188 "dma_device_type": 1 00:16:32.188 }, 00:16:32.188 { 00:16:32.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.188 "dma_device_type": 2 00:16:32.188 } 00:16:32.188 ], 00:16:32.188 "driver_specific": { 00:16:32.188 "raid": { 00:16:32.188 "uuid": "790ddd72-771d-4ec1-ad5c-21f8c6e8cbe6", 00:16:32.188 "strip_size_kb": 0, 00:16:32.188 "state": "online", 00:16:32.188 "raid_level": "raid1", 00:16:32.188 "superblock": false, 00:16:32.188 "num_base_bdevs": 2, 00:16:32.188 "num_base_bdevs_discovered": 2, 00:16:32.188 "num_base_bdevs_operational": 2, 00:16:32.188 "base_bdevs_list": [ 00:16:32.188 { 00:16:32.188 
"name": "BaseBdev1", 00:16:32.188 "uuid": "d3262da0-82b8-47b2-8148-f1ecc4256d8a", 00:16:32.188 "is_configured": true, 00:16:32.188 "data_offset": 0, 00:16:32.188 "data_size": 65536 00:16:32.188 }, 00:16:32.188 { 00:16:32.188 "name": "BaseBdev2", 00:16:32.188 "uuid": "72e3a165-68c2-4a88-aa8b-d17cde8fb00e", 00:16:32.188 "is_configured": true, 00:16:32.188 "data_offset": 0, 00:16:32.188 "data_size": 65536 00:16:32.188 } 00:16:32.188 ] 00:16:32.188 } 00:16:32.188 } 00:16:32.188 }' 00:16:32.188 07:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:32.188 07:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:32.188 BaseBdev2' 00:16:32.188 07:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:32.188 07:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:32.188 07:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:32.188 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:32.188 "name": "BaseBdev1", 00:16:32.189 "aliases": [ 00:16:32.189 "d3262da0-82b8-47b2-8148-f1ecc4256d8a" 00:16:32.189 ], 00:16:32.189 "product_name": "Malloc disk", 00:16:32.189 "block_size": 512, 00:16:32.189 "num_blocks": 65536, 00:16:32.189 "uuid": "d3262da0-82b8-47b2-8148-f1ecc4256d8a", 00:16:32.189 "assigned_rate_limits": { 00:16:32.189 "rw_ios_per_sec": 0, 00:16:32.189 "rw_mbytes_per_sec": 0, 00:16:32.189 "r_mbytes_per_sec": 0, 00:16:32.189 "w_mbytes_per_sec": 0 00:16:32.189 }, 00:16:32.189 "claimed": true, 00:16:32.189 "claim_type": "exclusive_write", 00:16:32.189 "zoned": false, 00:16:32.189 "supported_io_types": { 00:16:32.189 "read": true, 00:16:32.189 "write": true, 00:16:32.189 "unmap": true, 00:16:32.189 "write_zeroes": true, 00:16:32.189 "flush": true, 00:16:32.189 "reset": true, 00:16:32.189 "compare": false, 00:16:32.189 "compare_and_write": false, 00:16:32.189 "abort": true, 00:16:32.189 "nvme_admin": false, 00:16:32.189 "nvme_io": false 00:16:32.189 }, 00:16:32.189 "memory_domains": [ 00:16:32.189 { 00:16:32.189 "dma_device_id": "system", 00:16:32.189 "dma_device_type": 1 00:16:32.189 }, 00:16:32.189 { 00:16:32.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.189 "dma_device_type": 2 00:16:32.189 } 00:16:32.189 ], 00:16:32.189 "driver_specific": {} 00:16:32.189 }' 00:16:32.189 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.449 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.449 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:32.449 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.449 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.449 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:32.449 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.449 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.449 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:32.449 
07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.851 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.851 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:32.851 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:32.851 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:32.851 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:33.110 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:33.110 "name": "BaseBdev2", 00:16:33.110 "aliases": [ 00:16:33.110 "72e3a165-68c2-4a88-aa8b-d17cde8fb00e" 00:16:33.110 ], 00:16:33.110 "product_name": "Malloc disk", 00:16:33.110 "block_size": 512, 00:16:33.110 "num_blocks": 65536, 00:16:33.110 "uuid": "72e3a165-68c2-4a88-aa8b-d17cde8fb00e", 00:16:33.110 "assigned_rate_limits": { 00:16:33.110 "rw_ios_per_sec": 0, 00:16:33.110 "rw_mbytes_per_sec": 0, 00:16:33.110 "r_mbytes_per_sec": 0, 00:16:33.110 "w_mbytes_per_sec": 0 00:16:33.110 }, 00:16:33.110 "claimed": true, 00:16:33.110 "claim_type": "exclusive_write", 00:16:33.110 "zoned": false, 00:16:33.110 "supported_io_types": { 00:16:33.110 "read": true, 00:16:33.110 "write": true, 00:16:33.110 "unmap": true, 00:16:33.110 "write_zeroes": true, 00:16:33.110 "flush": true, 00:16:33.110 "reset": true, 00:16:33.110 "compare": false, 00:16:33.110 "compare_and_write": false, 00:16:33.110 "abort": true, 00:16:33.110 "nvme_admin": false, 00:16:33.110 "nvme_io": false 00:16:33.110 }, 00:16:33.110 "memory_domains": [ 00:16:33.110 { 00:16:33.110 "dma_device_id": "system", 00:16:33.110 "dma_device_type": 1 00:16:33.110 }, 00:16:33.110 { 00:16:33.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.110 "dma_device_type": 2 00:16:33.110 } 00:16:33.110 ], 00:16:33.110 "driver_specific": {} 00:16:33.110 }' 00:16:33.110 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:33.110 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:33.110 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:33.110 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:33.110 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:33.111 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:33.111 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:33.111 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:33.111 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:33.111 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:33.111 07:27:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:33.369 07:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:33.369 07:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:33.369 
[2024-07-12 07:27:07.184407] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:33.369 07:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:33.369 07:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:33.369 07:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:33.369 07:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:33.369 07:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:33.369 07:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:33.369 07:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:33.369 07:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:33.369 07:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:33.369 07:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:33.369 07:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:33.369 07:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.369 07:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:33.369 07:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.369 07:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.369 07:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.369 07:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.628 07:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:33.628 "name": "Existed_Raid", 00:16:33.628 "uuid": "790ddd72-771d-4ec1-ad5c-21f8c6e8cbe6", 00:16:33.628 "strip_size_kb": 0, 00:16:33.628 "state": "online", 00:16:33.628 "raid_level": "raid1", 00:16:33.628 "superblock": false, 00:16:33.628 "num_base_bdevs": 2, 00:16:33.628 "num_base_bdevs_discovered": 1, 00:16:33.628 "num_base_bdevs_operational": 1, 00:16:33.628 "base_bdevs_list": [ 00:16:33.628 { 00:16:33.628 "name": null, 00:16:33.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.628 "is_configured": false, 00:16:33.628 "data_offset": 0, 00:16:33.628 "data_size": 65536 00:16:33.628 }, 00:16:33.628 { 00:16:33.628 "name": "BaseBdev2", 00:16:33.628 "uuid": "72e3a165-68c2-4a88-aa8b-d17cde8fb00e", 00:16:33.628 "is_configured": true, 00:16:33.628 "data_offset": 0, 00:16:33.628 "data_size": 65536 00:16:33.628 } 00:16:33.628 ] 00:16:33.628 }' 00:16:33.628 07:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:33.628 07:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.564 07:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:34.564 07:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:34.564 07:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # 
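This is the redundancy check doing real work: deleting BaseBdev1 out from under the array (@274) triggers _raid_bdev_remove_base_bdev, and because has_redundancy raid1 returns 0, @281 still expects the raid "online" with a single operational base bdev. Sketched:

    $rpc bdev_malloc_delete BaseBdev1
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r .state <<< "$info") == online ]]
    [[ $(jq -r .num_base_bdevs_operational <<< "$info") == 1 ]]   # degraded but serving I/O
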
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.564 07:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:34.823 07:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:34.823 07:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:34.823 07:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:34.823 [2024-07-12 07:27:08.698341] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:34.823 [2024-07-12 07:27:08.698744] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:35.081 [2024-07-12 07:27:08.721155] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.081 [2024-07-12 07:27:08.721502] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:35.081 [2024-07-12 07:27:08.721595] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:16:35.081 07:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:35.081 07:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:35.081 07:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.082 07:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:35.082 07:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:35.082 07:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:35.082 07:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:35.082 07:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 133942 00:16:35.082 07:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 133942 ']' 00:16:35.082 07:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 133942 00:16:35.082 07:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:16:35.082 07:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:35.082 07:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 133942 00:16:35.082 killing process with pid 133942 00:16:35.082 07:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:35.082 07:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:35.082 07:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 133942' 00:16:35.082 07:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 133942 00:16:35.082 07:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 133942 00:16:35.082 [2024-07-12 07:27:08.960966] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:35.082 [2024-07-12 
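Deleting the second base bdev (@291 above) removes the last mirror, so the raid drops from online to offline and cleans itself up; @293's jq filter then yields nothing, which the test reads as "no raid bdev left":

    $rpc bdev_malloc_delete BaseBdev2
    raid_bdev=$($rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
    [[ -z $raid_bdev ]]   # select(.) drops null, so empty output means the raid is gone
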
07:27:08.961079] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:35.650 ************************************ 00:16:35.650 END TEST raid_state_function_test 00:16:35.650 ************************************ 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:35.650 00:16:35.650 real 0m10.472s 00:16:35.650 user 0m18.680s 00:16:35.650 sys 0m1.695s 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.650 07:27:09 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:16:35.650 07:27:09 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:16:35.650 07:27:09 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:35.650 07:27:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:35.650 ************************************ 00:16:35.650 START TEST raid_state_function_test_sb 00:16:35.650 ************************************ 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:35.650 07:27:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=134307 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 134307' 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:35.650 Process raid pid: 134307 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 134307 /var/tmp/spdk-raid.sock 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 134307 ']' 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:35.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:35.650 07:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.650 [2024-07-12 07:27:09.520421] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
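For reference, the raid_state_function_test_sb run traced below can be replayed by hand with the same RPCs the log records; a minimal sketch, assuming the bdev_svc app above is already listening on /var/tmp/spdk-raid.sock (the rpc/sock shell variables are shorthand introduced here, not part of the test script):

    # Replay of the RPC sequence the trace shows for the superblock variant.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Creating the raid before its base bdevs exist leaves it in state "configuring".
    $rpc -s $sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'

    # Adding a base bdev gets it claimed; num_base_bdevs_discovered rises to 1.
    $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'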
00:16:35.650 [2024-07-12 07:27:09.520964] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.910 [2024-07-12 07:27:09.676280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.910 [2024-07-12 07:27:09.769738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.169 [2024-07-12 07:27:09.849554] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.737 07:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:36.737 07:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:16:36.737 07:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:36.995 [2024-07-12 07:27:10.695263] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:36.995 [2024-07-12 07:27:10.695609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:36.995 [2024-07-12 07:27:10.695740] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.995 [2024-07-12 07:27:10.695877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.995 07:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:36.995 07:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:36.995 07:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:36.995 07:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:36.995 07:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:36.995 07:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:36.995 07:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:36.995 07:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:36.995 07:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:36.995 07:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:36.995 07:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.995 07:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.252 07:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:37.252 "name": "Existed_Raid", 00:16:37.252 "uuid": "6298a9c6-1e8b-40f6-a6e2-af7c27607a81", 00:16:37.252 "strip_size_kb": 0, 00:16:37.252 "state": "configuring", 00:16:37.252 "raid_level": "raid1", 00:16:37.252 "superblock": true, 00:16:37.252 "num_base_bdevs": 2, 00:16:37.252 "num_base_bdevs_discovered": 0, 00:16:37.252 "num_base_bdevs_operational": 2, 
00:16:37.252 "base_bdevs_list": [ 00:16:37.252 { 00:16:37.252 "name": "BaseBdev1", 00:16:37.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.252 "is_configured": false, 00:16:37.252 "data_offset": 0, 00:16:37.252 "data_size": 0 00:16:37.252 }, 00:16:37.252 { 00:16:37.252 "name": "BaseBdev2", 00:16:37.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.252 "is_configured": false, 00:16:37.252 "data_offset": 0, 00:16:37.252 "data_size": 0 00:16:37.252 } 00:16:37.252 ] 00:16:37.252 }' 00:16:37.253 07:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:37.253 07:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.818 07:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:38.076 [2024-07-12 07:27:11.747245] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:38.076 [2024-07-12 07:27:11.747480] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:16:38.076 07:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:38.076 [2024-07-12 07:27:11.947299] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:38.076 [2024-07-12 07:27:11.947663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:38.076 [2024-07-12 07:27:11.947785] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:38.076 [2024-07-12 07:27:11.947944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:38.334 07:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:38.334 [2024-07-12 07:27:12.163644] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:38.334 BaseBdev1 00:16:38.334 07:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:38.334 07:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:16:38.334 07:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:38.334 07:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:38.334 07:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:38.334 07:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:38.334 07:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:38.593 07:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:38.852 [ 00:16:38.852 { 00:16:38.852 "name": "BaseBdev1", 00:16:38.852 "aliases": [ 00:16:38.852 "28512de4-8185-41a8-a537-9b15c10a1669" 00:16:38.852 ], 00:16:38.852 "product_name": 
"Malloc disk", 00:16:38.852 "block_size": 512, 00:16:38.852 "num_blocks": 65536, 00:16:38.852 "uuid": "28512de4-8185-41a8-a537-9b15c10a1669", 00:16:38.852 "assigned_rate_limits": { 00:16:38.852 "rw_ios_per_sec": 0, 00:16:38.852 "rw_mbytes_per_sec": 0, 00:16:38.852 "r_mbytes_per_sec": 0, 00:16:38.852 "w_mbytes_per_sec": 0 00:16:38.852 }, 00:16:38.852 "claimed": true, 00:16:38.852 "claim_type": "exclusive_write", 00:16:38.852 "zoned": false, 00:16:38.852 "supported_io_types": { 00:16:38.852 "read": true, 00:16:38.852 "write": true, 00:16:38.852 "unmap": true, 00:16:38.852 "write_zeroes": true, 00:16:38.852 "flush": true, 00:16:38.852 "reset": true, 00:16:38.852 "compare": false, 00:16:38.852 "compare_and_write": false, 00:16:38.852 "abort": true, 00:16:38.852 "nvme_admin": false, 00:16:38.852 "nvme_io": false 00:16:38.852 }, 00:16:38.852 "memory_domains": [ 00:16:38.852 { 00:16:38.852 "dma_device_id": "system", 00:16:38.852 "dma_device_type": 1 00:16:38.852 }, 00:16:38.852 { 00:16:38.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.852 "dma_device_type": 2 00:16:38.852 } 00:16:38.852 ], 00:16:38.852 "driver_specific": {} 00:16:38.852 } 00:16:38.852 ] 00:16:38.852 07:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:38.852 07:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:38.852 07:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:38.852 07:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:38.852 07:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:38.852 07:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:38.852 07:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:38.852 07:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:38.852 07:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:38.852 07:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:38.852 07:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:38.852 07:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.852 07:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.111 07:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.111 "name": "Existed_Raid", 00:16:39.111 "uuid": "15505926-8904-4b47-bd6f-a9a4f2f114ca", 00:16:39.111 "strip_size_kb": 0, 00:16:39.111 "state": "configuring", 00:16:39.111 "raid_level": "raid1", 00:16:39.111 "superblock": true, 00:16:39.111 "num_base_bdevs": 2, 00:16:39.111 "num_base_bdevs_discovered": 1, 00:16:39.111 "num_base_bdevs_operational": 2, 00:16:39.111 "base_bdevs_list": [ 00:16:39.111 { 00:16:39.111 "name": "BaseBdev1", 00:16:39.111 "uuid": "28512de4-8185-41a8-a537-9b15c10a1669", 00:16:39.111 "is_configured": true, 00:16:39.111 "data_offset": 2048, 00:16:39.111 "data_size": 63488 00:16:39.111 }, 00:16:39.111 { 00:16:39.111 "name": 
"BaseBdev2", 00:16:39.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.111 "is_configured": false, 00:16:39.111 "data_offset": 0, 00:16:39.111 "data_size": 0 00:16:39.111 } 00:16:39.111 ] 00:16:39.111 }' 00:16:39.111 07:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.111 07:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.676 07:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:39.933 [2024-07-12 07:27:13.584023] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:39.933 [2024-07-12 07:27:13.584336] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:16:39.933 07:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:40.192 [2024-07-12 07:27:13.836168] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:40.192 [2024-07-12 07:27:13.838981] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:40.192 [2024-07-12 07:27:13.839219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:40.192 07:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:40.192 07:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:40.192 07:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:40.192 07:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:40.192 07:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:40.192 07:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:40.192 07:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:40.192 07:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:40.192 07:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:40.192 07:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:40.192 07:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:40.192 07:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:40.192 07:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.192 07:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.451 07:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:40.451 "name": "Existed_Raid", 00:16:40.451 "uuid": "b8ec208b-e64b-4181-9d71-73c73c027ad0", 00:16:40.451 "strip_size_kb": 0, 00:16:40.451 "state": "configuring", 00:16:40.451 "raid_level": "raid1", 
00:16:40.451 "superblock": true, 00:16:40.451 "num_base_bdevs": 2, 00:16:40.451 "num_base_bdevs_discovered": 1, 00:16:40.451 "num_base_bdevs_operational": 2, 00:16:40.451 "base_bdevs_list": [ 00:16:40.451 { 00:16:40.451 "name": "BaseBdev1", 00:16:40.451 "uuid": "28512de4-8185-41a8-a537-9b15c10a1669", 00:16:40.451 "is_configured": true, 00:16:40.451 "data_offset": 2048, 00:16:40.451 "data_size": 63488 00:16:40.451 }, 00:16:40.451 { 00:16:40.451 "name": "BaseBdev2", 00:16:40.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.451 "is_configured": false, 00:16:40.451 "data_offset": 0, 00:16:40.451 "data_size": 0 00:16:40.451 } 00:16:40.451 ] 00:16:40.451 }' 00:16:40.451 07:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:40.451 07:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.018 07:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:41.276 [2024-07-12 07:27:14.991177] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:41.276 [2024-07-12 07:27:14.991926] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:16:41.276 [2024-07-12 07:27:14.992092] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:41.276 [2024-07-12 07:27:14.992523] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:16:41.276 BaseBdev2 00:16:41.276 [2024-07-12 07:27:14.993340] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:16:41.276 [2024-07-12 07:27:14.993365] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:16:41.276 [2024-07-12 07:27:14.993663] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.276 07:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:41.276 07:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:16:41.276 07:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:41.276 07:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:41.276 07:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:41.276 07:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:41.276 07:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:41.535 07:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:41.794 [ 00:16:41.794 { 00:16:41.794 "name": "BaseBdev2", 00:16:41.794 "aliases": [ 00:16:41.794 "3ff7dc96-22ad-40f8-a73c-2006901c0bf3" 00:16:41.794 ], 00:16:41.794 "product_name": "Malloc disk", 00:16:41.794 "block_size": 512, 00:16:41.794 "num_blocks": 65536, 00:16:41.794 "uuid": "3ff7dc96-22ad-40f8-a73c-2006901c0bf3", 00:16:41.794 "assigned_rate_limits": { 00:16:41.794 "rw_ios_per_sec": 0, 00:16:41.794 "rw_mbytes_per_sec": 0, 00:16:41.794 
"r_mbytes_per_sec": 0, 00:16:41.794 "w_mbytes_per_sec": 0 00:16:41.794 }, 00:16:41.794 "claimed": true, 00:16:41.794 "claim_type": "exclusive_write", 00:16:41.794 "zoned": false, 00:16:41.794 "supported_io_types": { 00:16:41.794 "read": true, 00:16:41.794 "write": true, 00:16:41.794 "unmap": true, 00:16:41.794 "write_zeroes": true, 00:16:41.794 "flush": true, 00:16:41.794 "reset": true, 00:16:41.794 "compare": false, 00:16:41.794 "compare_and_write": false, 00:16:41.794 "abort": true, 00:16:41.794 "nvme_admin": false, 00:16:41.794 "nvme_io": false 00:16:41.794 }, 00:16:41.794 "memory_domains": [ 00:16:41.794 { 00:16:41.794 "dma_device_id": "system", 00:16:41.794 "dma_device_type": 1 00:16:41.794 }, 00:16:41.794 { 00:16:41.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.794 "dma_device_type": 2 00:16:41.794 } 00:16:41.794 ], 00:16:41.794 "driver_specific": {} 00:16:41.794 } 00:16:41.794 ] 00:16:41.794 07:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:41.794 07:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:41.794 07:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:41.794 07:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:41.794 07:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:41.794 07:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:41.794 07:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:41.794 07:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:41.794 07:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:41.794 07:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.794 07:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.794 07:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.794 07:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.794 07:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.794 07:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.052 07:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:42.052 "name": "Existed_Raid", 00:16:42.052 "uuid": "b8ec208b-e64b-4181-9d71-73c73c027ad0", 00:16:42.052 "strip_size_kb": 0, 00:16:42.052 "state": "online", 00:16:42.052 "raid_level": "raid1", 00:16:42.052 "superblock": true, 00:16:42.052 "num_base_bdevs": 2, 00:16:42.052 "num_base_bdevs_discovered": 2, 00:16:42.052 "num_base_bdevs_operational": 2, 00:16:42.052 "base_bdevs_list": [ 00:16:42.052 { 00:16:42.052 "name": "BaseBdev1", 00:16:42.052 "uuid": "28512de4-8185-41a8-a537-9b15c10a1669", 00:16:42.052 "is_configured": true, 00:16:42.052 "data_offset": 2048, 00:16:42.052 "data_size": 63488 00:16:42.052 }, 00:16:42.052 { 00:16:42.052 "name": "BaseBdev2", 00:16:42.052 "uuid": 
"3ff7dc96-22ad-40f8-a73c-2006901c0bf3", 00:16:42.052 "is_configured": true, 00:16:42.052 "data_offset": 2048, 00:16:42.052 "data_size": 63488 00:16:42.052 } 00:16:42.052 ] 00:16:42.052 }' 00:16:42.052 07:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:42.052 07:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.620 07:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:42.621 07:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:42.621 07:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:42.621 07:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:42.621 07:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:42.621 07:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:42.621 07:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:42.621 07:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:42.880 [2024-07-12 07:27:16.603998] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:42.880 07:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:42.880 "name": "Existed_Raid", 00:16:42.880 "aliases": [ 00:16:42.880 "b8ec208b-e64b-4181-9d71-73c73c027ad0" 00:16:42.880 ], 00:16:42.880 "product_name": "Raid Volume", 00:16:42.880 "block_size": 512, 00:16:42.880 "num_blocks": 63488, 00:16:42.880 "uuid": "b8ec208b-e64b-4181-9d71-73c73c027ad0", 00:16:42.880 "assigned_rate_limits": { 00:16:42.880 "rw_ios_per_sec": 0, 00:16:42.880 "rw_mbytes_per_sec": 0, 00:16:42.880 "r_mbytes_per_sec": 0, 00:16:42.880 "w_mbytes_per_sec": 0 00:16:42.880 }, 00:16:42.880 "claimed": false, 00:16:42.880 "zoned": false, 00:16:42.880 "supported_io_types": { 00:16:42.880 "read": true, 00:16:42.880 "write": true, 00:16:42.880 "unmap": false, 00:16:42.880 "write_zeroes": true, 00:16:42.880 "flush": false, 00:16:42.880 "reset": true, 00:16:42.880 "compare": false, 00:16:42.880 "compare_and_write": false, 00:16:42.880 "abort": false, 00:16:42.880 "nvme_admin": false, 00:16:42.880 "nvme_io": false 00:16:42.880 }, 00:16:42.880 "memory_domains": [ 00:16:42.880 { 00:16:42.880 "dma_device_id": "system", 00:16:42.880 "dma_device_type": 1 00:16:42.880 }, 00:16:42.880 { 00:16:42.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.880 "dma_device_type": 2 00:16:42.880 }, 00:16:42.880 { 00:16:42.880 "dma_device_id": "system", 00:16:42.880 "dma_device_type": 1 00:16:42.880 }, 00:16:42.880 { 00:16:42.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.880 "dma_device_type": 2 00:16:42.880 } 00:16:42.880 ], 00:16:42.880 "driver_specific": { 00:16:42.880 "raid": { 00:16:42.880 "uuid": "b8ec208b-e64b-4181-9d71-73c73c027ad0", 00:16:42.880 "strip_size_kb": 0, 00:16:42.880 "state": "online", 00:16:42.880 "raid_level": "raid1", 00:16:42.880 "superblock": true, 00:16:42.880 "num_base_bdevs": 2, 00:16:42.880 "num_base_bdevs_discovered": 2, 00:16:42.880 "num_base_bdevs_operational": 2, 00:16:42.880 "base_bdevs_list": [ 00:16:42.880 { 00:16:42.880 "name": "BaseBdev1", 00:16:42.880 "uuid": 
"28512de4-8185-41a8-a537-9b15c10a1669", 00:16:42.880 "is_configured": true, 00:16:42.880 "data_offset": 2048, 00:16:42.880 "data_size": 63488 00:16:42.880 }, 00:16:42.880 { 00:16:42.880 "name": "BaseBdev2", 00:16:42.880 "uuid": "3ff7dc96-22ad-40f8-a73c-2006901c0bf3", 00:16:42.880 "is_configured": true, 00:16:42.880 "data_offset": 2048, 00:16:42.880 "data_size": 63488 00:16:42.880 } 00:16:42.880 ] 00:16:42.880 } 00:16:42.880 } 00:16:42.880 }' 00:16:42.880 07:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:42.880 07:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:42.880 BaseBdev2' 00:16:42.880 07:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:42.880 07:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:42.880 07:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:43.139 07:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:43.139 "name": "BaseBdev1", 00:16:43.139 "aliases": [ 00:16:43.139 "28512de4-8185-41a8-a537-9b15c10a1669" 00:16:43.139 ], 00:16:43.139 "product_name": "Malloc disk", 00:16:43.139 "block_size": 512, 00:16:43.139 "num_blocks": 65536, 00:16:43.139 "uuid": "28512de4-8185-41a8-a537-9b15c10a1669", 00:16:43.139 "assigned_rate_limits": { 00:16:43.139 "rw_ios_per_sec": 0, 00:16:43.139 "rw_mbytes_per_sec": 0, 00:16:43.139 "r_mbytes_per_sec": 0, 00:16:43.139 "w_mbytes_per_sec": 0 00:16:43.139 }, 00:16:43.139 "claimed": true, 00:16:43.139 "claim_type": "exclusive_write", 00:16:43.139 "zoned": false, 00:16:43.139 "supported_io_types": { 00:16:43.139 "read": true, 00:16:43.139 "write": true, 00:16:43.139 "unmap": true, 00:16:43.139 "write_zeroes": true, 00:16:43.139 "flush": true, 00:16:43.139 "reset": true, 00:16:43.139 "compare": false, 00:16:43.139 "compare_and_write": false, 00:16:43.139 "abort": true, 00:16:43.139 "nvme_admin": false, 00:16:43.139 "nvme_io": false 00:16:43.139 }, 00:16:43.139 "memory_domains": [ 00:16:43.139 { 00:16:43.139 "dma_device_id": "system", 00:16:43.139 "dma_device_type": 1 00:16:43.139 }, 00:16:43.139 { 00:16:43.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.139 "dma_device_type": 2 00:16:43.139 } 00:16:43.139 ], 00:16:43.139 "driver_specific": {} 00:16:43.139 }' 00:16:43.139 07:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.139 07:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.398 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:43.398 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.398 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.398 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:43.398 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:43.398 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:43.398 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:16:43.398 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:43.398 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:43.659 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:43.659 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:43.659 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:43.659 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:43.918 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:43.918 "name": "BaseBdev2", 00:16:43.918 "aliases": [ 00:16:43.918 "3ff7dc96-22ad-40f8-a73c-2006901c0bf3" 00:16:43.918 ], 00:16:43.918 "product_name": "Malloc disk", 00:16:43.918 "block_size": 512, 00:16:43.918 "num_blocks": 65536, 00:16:43.918 "uuid": "3ff7dc96-22ad-40f8-a73c-2006901c0bf3", 00:16:43.918 "assigned_rate_limits": { 00:16:43.918 "rw_ios_per_sec": 0, 00:16:43.918 "rw_mbytes_per_sec": 0, 00:16:43.918 "r_mbytes_per_sec": 0, 00:16:43.918 "w_mbytes_per_sec": 0 00:16:43.918 }, 00:16:43.918 "claimed": true, 00:16:43.918 "claim_type": "exclusive_write", 00:16:43.918 "zoned": false, 00:16:43.918 "supported_io_types": { 00:16:43.918 "read": true, 00:16:43.918 "write": true, 00:16:43.918 "unmap": true, 00:16:43.918 "write_zeroes": true, 00:16:43.918 "flush": true, 00:16:43.918 "reset": true, 00:16:43.918 "compare": false, 00:16:43.918 "compare_and_write": false, 00:16:43.918 "abort": true, 00:16:43.918 "nvme_admin": false, 00:16:43.918 "nvme_io": false 00:16:43.918 }, 00:16:43.918 "memory_domains": [ 00:16:43.918 { 00:16:43.918 "dma_device_id": "system", 00:16:43.918 "dma_device_type": 1 00:16:43.918 }, 00:16:43.918 { 00:16:43.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.918 "dma_device_type": 2 00:16:43.918 } 00:16:43.918 ], 00:16:43.918 "driver_specific": {} 00:16:43.918 }' 00:16:43.918 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.918 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:43.918 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:43.918 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.918 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:43.918 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:43.918 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:44.177 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:44.177 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:44.177 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:44.177 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:44.177 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:44.177 07:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:44.435 [2024-07-12 07:27:18.196040] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:44.435 07:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:44.435 07:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:44.435 07:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:44.435 07:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:16:44.435 07:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:44.435 07:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:44.435 07:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:44.435 07:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:44.435 07:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:44.435 07:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:44.435 07:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:44.435 07:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:44.435 07:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:44.435 07:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:44.435 07:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:44.435 07:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.435 07:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.694 07:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:44.694 "name": "Existed_Raid", 00:16:44.694 "uuid": "b8ec208b-e64b-4181-9d71-73c73c027ad0", 00:16:44.694 "strip_size_kb": 0, 00:16:44.694 "state": "online", 00:16:44.694 "raid_level": "raid1", 00:16:44.694 "superblock": true, 00:16:44.694 "num_base_bdevs": 2, 00:16:44.694 "num_base_bdevs_discovered": 1, 00:16:44.694 "num_base_bdevs_operational": 1, 00:16:44.694 "base_bdevs_list": [ 00:16:44.694 { 00:16:44.694 "name": null, 00:16:44.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.694 "is_configured": false, 00:16:44.694 "data_offset": 2048, 00:16:44.694 "data_size": 63488 00:16:44.694 }, 00:16:44.694 { 00:16:44.694 "name": "BaseBdev2", 00:16:44.694 "uuid": "3ff7dc96-22ad-40f8-a73c-2006901c0bf3", 00:16:44.694 "is_configured": true, 00:16:44.694 "data_offset": 2048, 00:16:44.694 "data_size": 63488 00:16:44.694 } 00:16:44.694 ] 00:16:44.694 }' 00:16:44.694 07:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:44.694 07:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.261 07:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:45.261 07:27:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:45.261 07:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:45.261 07:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.520 07:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:45.520 07:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:45.520 07:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:45.779 [2024-07-12 07:27:19.566160] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:45.779 [2024-07-12 07:27:19.566589] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.779 [2024-07-12 07:27:19.588143] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.779 [2024-07-12 07:27:19.588426] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:45.779 [2024-07-12 07:27:19.588501] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:16:45.779 07:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:45.779 07:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:45.779 07:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.779 07:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:46.037 07:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:46.037 07:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:46.037 07:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:46.037 07:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 134307 00:16:46.037 07:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 134307 ']' 00:16:46.037 07:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 134307 00:16:46.037 07:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:16:46.037 07:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:46.037 07:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 134307 00:16:46.037 07:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:46.037 07:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:46.037 07:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 134307' 00:16:46.037 killing process with pid 134307 00:16:46.037 07:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 134307 00:16:46.037 07:27:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 134307 00:16:46.037 [2024-07-12 07:27:19.876549] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:46.037 [2024-07-12 07:27:19.876878] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:46.605 07:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:46.605 00:16:46.605 real 0m10.853s 00:16:46.605 user 0m19.090s 00:16:46.605 sys 0m2.020s 00:16:46.605 07:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:46.605 07:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.605 ************************************ 00:16:46.605 END TEST raid_state_function_test_sb 00:16:46.605 ************************************ 00:16:46.605 07:27:20 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:16:46.605 07:27:20 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:16:46.605 07:27:20 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:46.605 07:27:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:46.605 ************************************ 00:16:46.605 START TEST raid_superblock_test 00:16:46.605 ************************************ 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=134678 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 134678 /var/tmp/spdk-raid.sock 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@827 -- # '[' -z 134678 ']' 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:46.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:46.605 07:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.605 [2024-07-12 07:27:20.442358] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:46.605 [2024-07-12 07:27:20.442944] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134678 ] 00:16:46.869 [2024-07-12 07:27:20.596903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.869 [2024-07-12 07:27:20.681352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.134 [2024-07-12 07:27:20.761492] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.700 07:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:47.700 07:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:16:47.700 07:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:47.700 07:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:47.700 07:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:47.700 07:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:47.700 07:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:47.700 07:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:47.700 07:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:47.700 07:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:47.700 07:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:47.700 malloc1 00:16:47.700 07:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:47.959 [2024-07-12 07:27:21.790954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:47.959 [2024-07-12 07:27:21.792028] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.959 [2024-07-12 07:27:21.792203] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:16:47.959 [2024-07-12 07:27:21.792362] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:16:47.959 [2024-07-12 07:27:21.798894] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.959 [2024-07-12 07:27:21.799276] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:47.959 pt1 00:16:47.959 07:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:47.959 07:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:47.959 07:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:47.959 07:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:47.959 07:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:47.959 07:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:47.959 07:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:47.959 07:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:47.959 07:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:48.218 malloc2 00:16:48.218 07:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:48.477 [2024-07-12 07:27:22.271760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:48.477 [2024-07-12 07:27:22.272111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.477 [2024-07-12 07:27:22.272210] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:48.477 [2024-07-12 07:27:22.272358] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.477 [2024-07-12 07:27:22.275191] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.477 [2024-07-12 07:27:22.275358] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:48.477 pt2 00:16:48.477 07:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:48.477 07:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:48.477 07:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:48.735 [2024-07-12 07:27:22.472001] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:48.735 [2024-07-12 07:27:22.475535] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:48.735 [2024-07-12 07:27:22.476134] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:16:48.735 [2024-07-12 07:27:22.476299] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:48.735 [2024-07-12 07:27:22.476579] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:16:48.735 [2024-07-12 07:27:22.477358] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:16:48.735 [2024-07-12 07:27:22.477519] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:16:48.735 [2024-07-12 07:27:22.477894] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.735 07:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:48.735 07:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:48.735 07:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:48.735 07:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:48.735 07:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:48.735 07:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:48.735 07:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:48.735 07:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:48.736 07:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:48.736 07:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:48.736 07:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.736 07:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.995 07:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:48.995 "name": "raid_bdev1", 00:16:48.995 "uuid": "ddcc8e38-ce0b-457e-80b1-39c807a5efde", 00:16:48.995 "strip_size_kb": 0, 00:16:48.995 "state": "online", 00:16:48.995 "raid_level": "raid1", 00:16:48.995 "superblock": true, 00:16:48.995 "num_base_bdevs": 2, 00:16:48.995 "num_base_bdevs_discovered": 2, 00:16:48.995 "num_base_bdevs_operational": 2, 00:16:48.995 "base_bdevs_list": [ 00:16:48.995 { 00:16:48.995 "name": "pt1", 00:16:48.995 "uuid": "c5338dbf-7c09-581d-9419-76b69d06aaea", 00:16:48.995 "is_configured": true, 00:16:48.995 "data_offset": 2048, 00:16:48.995 "data_size": 63488 00:16:48.995 }, 00:16:48.995 { 00:16:48.995 "name": "pt2", 00:16:48.995 "uuid": "87f11548-480a-552a-87e6-5960a77f9e81", 00:16:48.995 "is_configured": true, 00:16:48.995 "data_offset": 2048, 00:16:48.995 "data_size": 63488 00:16:48.995 } 00:16:48.995 ] 00:16:48.995 }' 00:16:48.995 07:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:48.995 07:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.561 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:16:49.561 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:49.561 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:49.561 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:49.561 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:49.561 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:49.561 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:49.561 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:49.819 [2024-07-12 07:27:23.568487] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.819 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:49.819 "name": "raid_bdev1", 00:16:49.819 "aliases": [ 00:16:49.819 "ddcc8e38-ce0b-457e-80b1-39c807a5efde" 00:16:49.819 ], 00:16:49.819 "product_name": "Raid Volume", 00:16:49.819 "block_size": 512, 00:16:49.819 "num_blocks": 63488, 00:16:49.819 "uuid": "ddcc8e38-ce0b-457e-80b1-39c807a5efde", 00:16:49.819 "assigned_rate_limits": { 00:16:49.819 "rw_ios_per_sec": 0, 00:16:49.819 "rw_mbytes_per_sec": 0, 00:16:49.819 "r_mbytes_per_sec": 0, 00:16:49.819 "w_mbytes_per_sec": 0 00:16:49.819 }, 00:16:49.819 "claimed": false, 00:16:49.819 "zoned": false, 00:16:49.819 "supported_io_types": { 00:16:49.819 "read": true, 00:16:49.819 "write": true, 00:16:49.819 "unmap": false, 00:16:49.819 "write_zeroes": true, 00:16:49.819 "flush": false, 00:16:49.819 "reset": true, 00:16:49.819 "compare": false, 00:16:49.819 "compare_and_write": false, 00:16:49.819 "abort": false, 00:16:49.819 "nvme_admin": false, 00:16:49.819 "nvme_io": false 00:16:49.819 }, 00:16:49.819 "memory_domains": [ 00:16:49.819 { 00:16:49.819 "dma_device_id": "system", 00:16:49.819 "dma_device_type": 1 00:16:49.819 }, 00:16:49.819 { 00:16:49.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.819 "dma_device_type": 2 00:16:49.819 }, 00:16:49.819 { 00:16:49.819 "dma_device_id": "system", 00:16:49.819 "dma_device_type": 1 00:16:49.819 }, 00:16:49.819 { 00:16:49.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.819 "dma_device_type": 2 00:16:49.819 } 00:16:49.819 ], 00:16:49.819 "driver_specific": { 00:16:49.819 "raid": { 00:16:49.819 "uuid": "ddcc8e38-ce0b-457e-80b1-39c807a5efde", 00:16:49.819 "strip_size_kb": 0, 00:16:49.819 "state": "online", 00:16:49.819 "raid_level": "raid1", 00:16:49.819 "superblock": true, 00:16:49.819 "num_base_bdevs": 2, 00:16:49.819 "num_base_bdevs_discovered": 2, 00:16:49.819 "num_base_bdevs_operational": 2, 00:16:49.819 "base_bdevs_list": [ 00:16:49.819 { 00:16:49.819 "name": "pt1", 00:16:49.819 "uuid": "c5338dbf-7c09-581d-9419-76b69d06aaea", 00:16:49.819 "is_configured": true, 00:16:49.819 "data_offset": 2048, 00:16:49.819 "data_size": 63488 00:16:49.819 }, 00:16:49.819 { 00:16:49.819 "name": "pt2", 00:16:49.819 "uuid": "87f11548-480a-552a-87e6-5960a77f9e81", 00:16:49.819 "is_configured": true, 00:16:49.819 "data_offset": 2048, 00:16:49.819 "data_size": 63488 00:16:49.819 } 00:16:49.819 ] 00:16:49.819 } 00:16:49.819 } 00:16:49.819 }' 00:16:49.819 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:49.819 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:49.819 pt2' 00:16:49.819 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:49.819 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:49.819 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:50.077 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:16:50.077 "name": "pt1", 00:16:50.077 "aliases": [ 00:16:50.077 "c5338dbf-7c09-581d-9419-76b69d06aaea" 00:16:50.077 ], 00:16:50.077 "product_name": "passthru", 00:16:50.077 "block_size": 512, 00:16:50.077 "num_blocks": 65536, 00:16:50.077 "uuid": "c5338dbf-7c09-581d-9419-76b69d06aaea", 00:16:50.077 "assigned_rate_limits": { 00:16:50.077 "rw_ios_per_sec": 0, 00:16:50.077 "rw_mbytes_per_sec": 0, 00:16:50.077 "r_mbytes_per_sec": 0, 00:16:50.077 "w_mbytes_per_sec": 0 00:16:50.077 }, 00:16:50.077 "claimed": true, 00:16:50.077 "claim_type": "exclusive_write", 00:16:50.077 "zoned": false, 00:16:50.077 "supported_io_types": { 00:16:50.077 "read": true, 00:16:50.077 "write": true, 00:16:50.077 "unmap": true, 00:16:50.077 "write_zeroes": true, 00:16:50.077 "flush": true, 00:16:50.077 "reset": true, 00:16:50.077 "compare": false, 00:16:50.077 "compare_and_write": false, 00:16:50.077 "abort": true, 00:16:50.077 "nvme_admin": false, 00:16:50.077 "nvme_io": false 00:16:50.077 }, 00:16:50.077 "memory_domains": [ 00:16:50.077 { 00:16:50.077 "dma_device_id": "system", 00:16:50.077 "dma_device_type": 1 00:16:50.077 }, 00:16:50.077 { 00:16:50.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.077 "dma_device_type": 2 00:16:50.077 } 00:16:50.077 ], 00:16:50.077 "driver_specific": { 00:16:50.077 "passthru": { 00:16:50.077 "name": "pt1", 00:16:50.077 "base_bdev_name": "malloc1" 00:16:50.077 } 00:16:50.077 } 00:16:50.077 }' 00:16:50.077 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:50.077 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:50.077 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:50.077 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:50.077 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:50.334 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:50.334 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:50.334 07:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:50.334 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:50.334 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:50.334 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:50.334 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:50.334 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:50.334 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:50.334 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:50.593 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:50.593 "name": "pt2", 00:16:50.593 "aliases": [ 00:16:50.593 "87f11548-480a-552a-87e6-5960a77f9e81" 00:16:50.593 ], 00:16:50.593 "product_name": "passthru", 00:16:50.593 "block_size": 512, 00:16:50.593 "num_blocks": 65536, 00:16:50.593 "uuid": "87f11548-480a-552a-87e6-5960a77f9e81", 00:16:50.593 "assigned_rate_limits": { 00:16:50.593 "rw_ios_per_sec": 0, 00:16:50.593 "rw_mbytes_per_sec": 0, 00:16:50.593 "r_mbytes_per_sec": 0, 00:16:50.593 
"w_mbytes_per_sec": 0 00:16:50.593 }, 00:16:50.593 "claimed": true, 00:16:50.593 "claim_type": "exclusive_write", 00:16:50.593 "zoned": false, 00:16:50.593 "supported_io_types": { 00:16:50.593 "read": true, 00:16:50.593 "write": true, 00:16:50.593 "unmap": true, 00:16:50.593 "write_zeroes": true, 00:16:50.593 "flush": true, 00:16:50.593 "reset": true, 00:16:50.593 "compare": false, 00:16:50.593 "compare_and_write": false, 00:16:50.593 "abort": true, 00:16:50.593 "nvme_admin": false, 00:16:50.593 "nvme_io": false 00:16:50.593 }, 00:16:50.593 "memory_domains": [ 00:16:50.593 { 00:16:50.593 "dma_device_id": "system", 00:16:50.593 "dma_device_type": 1 00:16:50.593 }, 00:16:50.593 { 00:16:50.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.593 "dma_device_type": 2 00:16:50.593 } 00:16:50.593 ], 00:16:50.593 "driver_specific": { 00:16:50.593 "passthru": { 00:16:50.593 "name": "pt2", 00:16:50.593 "base_bdev_name": "malloc2" 00:16:50.593 } 00:16:50.593 } 00:16:50.593 }' 00:16:50.593 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:50.593 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:50.851 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:50.851 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:50.851 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:50.851 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:50.851 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:50.851 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:50.851 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:50.851 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:50.851 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:51.110 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:51.110 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:51.110 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:51.110 [2024-07-12 07:27:24.936719] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.110 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=ddcc8e38-ce0b-457e-80b1-39c807a5efde 00:16:51.110 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z ddcc8e38-ce0b-457e-80b1-39c807a5efde ']' 00:16:51.110 07:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:51.370 [2024-07-12 07:27:25.140525] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.370 [2024-07-12 07:27:25.140696] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.370 [2024-07-12 07:27:25.140915] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.370 [2024-07-12 07:27:25.141073] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:16:51.370 [2024-07-12 07:27:25.141144] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:16:51.370 07:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.370 07:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:16:51.629 07:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:16:51.629 07:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:16:51.629 07:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:51.629 07:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:51.887 07:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:51.887 07:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:52.146 07:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:52.146 07:27:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:52.404 07:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:16:52.404 07:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:52.404 07:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:16:52.404 07:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:52.404 07:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:52.404 07:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:52.404 07:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:52.404 07:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:52.404 07:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:52.404 07:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:52.404 07:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:52.404 07:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:52.404 07:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:52.663 [2024-07-12 07:27:26.417182] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:52.663 [2024-07-12 07:27:26.419907] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:52.663 [2024-07-12 07:27:26.420101] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:52.663 [2024-07-12 07:27:26.420302] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:52.663 [2024-07-12 07:27:26.420454] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:52.663 [2024-07-12 07:27:26.420492] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:16:52.663 request: 00:16:52.663 { 00:16:52.663 "name": "raid_bdev1", 00:16:52.663 "raid_level": "raid1", 00:16:52.663 "base_bdevs": [ 00:16:52.663 "malloc1", 00:16:52.663 "malloc2" 00:16:52.663 ], 00:16:52.663 "superblock": false, 00:16:52.663 "method": "bdev_raid_create", 00:16:52.663 "req_id": 1 00:16:52.663 } 00:16:52.663 Got JSON-RPC error response 00:16:52.663 response: 00:16:52.663 { 00:16:52.663 "code": -17, 00:16:52.663 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:52.663 } 00:16:52.663 07:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:16:52.663 07:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:52.663 07:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:52.663 07:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:52.663 07:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.663 07:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:52.922 07:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:52.922 07:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:52.922 07:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:53.180 [2024-07-12 07:27:26.953259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:53.180 [2024-07-12 07:27:26.953664] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.180 [2024-07-12 07:27:26.953740] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:16:53.180 [2024-07-12 07:27:26.953853] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.180 [2024-07-12 07:27:26.956684] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.180 [2024-07-12 07:27:26.956864] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:53.180 [2024-07-12 07:27:26.957038] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:53.180 [2024-07-12 07:27:26.957148] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:53.180 pt1 00:16:53.180 07:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid1 0 2 00:16:53.180 07:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:53.180 07:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:53.180 07:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:53.180 07:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:53.180 07:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:53.180 07:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:53.180 07:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:53.180 07:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:53.180 07:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:53.180 07:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.180 07:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.438 07:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:53.438 "name": "raid_bdev1", 00:16:53.438 "uuid": "ddcc8e38-ce0b-457e-80b1-39c807a5efde", 00:16:53.438 "strip_size_kb": 0, 00:16:53.438 "state": "configuring", 00:16:53.438 "raid_level": "raid1", 00:16:53.438 "superblock": true, 00:16:53.438 "num_base_bdevs": 2, 00:16:53.438 "num_base_bdevs_discovered": 1, 00:16:53.438 "num_base_bdevs_operational": 2, 00:16:53.438 "base_bdevs_list": [ 00:16:53.438 { 00:16:53.438 "name": "pt1", 00:16:53.438 "uuid": "c5338dbf-7c09-581d-9419-76b69d06aaea", 00:16:53.438 "is_configured": true, 00:16:53.438 "data_offset": 2048, 00:16:53.438 "data_size": 63488 00:16:53.438 }, 00:16:53.438 { 00:16:53.438 "name": null, 00:16:53.438 "uuid": "87f11548-480a-552a-87e6-5960a77f9e81", 00:16:53.438 "is_configured": false, 00:16:53.438 "data_offset": 2048, 00:16:53.438 "data_size": 63488 00:16:53.438 } 00:16:53.438 ] 00:16:53.438 }' 00:16:53.438 07:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:53.438 07:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.005 07:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:16:54.005 07:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:54.005 07:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:54.005 07:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:54.264 [2024-07-12 07:27:27.905702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:54.264 [2024-07-12 07:27:27.906074] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.264 [2024-07-12 07:27:27.906149] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:16:54.264 [2024-07-12 07:27:27.906262] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.264 [2024-07-12 07:27:27.906785] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.264 [2024-07-12 07:27:27.906932] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:54.264 [2024-07-12 07:27:27.907118] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:54.264 [2024-07-12 07:27:27.907234] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:54.264 [2024-07-12 07:27:27.907415] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:16:54.264 [2024-07-12 07:27:27.907500] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:54.264 [2024-07-12 07:27:27.907624] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:16:54.264 [2024-07-12 07:27:27.908044] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:16:54.264 [2024-07-12 07:27:27.908149] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:16:54.264 [2024-07-12 07:27:27.908353] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.264 pt2 00:16:54.264 07:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:54.264 07:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:54.264 07:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:54.264 07:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:54.264 07:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:54.264 07:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:54.264 07:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:54.264 07:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:54.264 07:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:54.264 07:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:54.264 07:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:54.264 07:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:54.264 07:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.264 07:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.521 07:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:54.522 "name": "raid_bdev1", 00:16:54.522 "uuid": "ddcc8e38-ce0b-457e-80b1-39c807a5efde", 00:16:54.522 "strip_size_kb": 0, 00:16:54.522 "state": "online", 00:16:54.522 "raid_level": "raid1", 00:16:54.522 "superblock": true, 00:16:54.522 "num_base_bdevs": 2, 00:16:54.522 "num_base_bdevs_discovered": 2, 00:16:54.522 "num_base_bdevs_operational": 2, 00:16:54.522 "base_bdevs_list": [ 00:16:54.522 { 00:16:54.522 "name": "pt1", 00:16:54.522 "uuid": "c5338dbf-7c09-581d-9419-76b69d06aaea", 00:16:54.522 "is_configured": true, 00:16:54.522 "data_offset": 2048, 00:16:54.522 "data_size": 63488 00:16:54.522 }, 00:16:54.522 { 
00:16:54.522 "name": "pt2", 00:16:54.522 "uuid": "87f11548-480a-552a-87e6-5960a77f9e81", 00:16:54.522 "is_configured": true, 00:16:54.522 "data_offset": 2048, 00:16:54.522 "data_size": 63488 00:16:54.522 } 00:16:54.522 ] 00:16:54.522 }' 00:16:54.522 07:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:54.522 07:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.102 07:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:55.102 07:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:55.102 07:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:55.102 07:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:55.102 07:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:55.102 07:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:55.102 07:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:55.102 07:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:55.102 [2024-07-12 07:27:28.950070] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:55.102 07:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:55.102 "name": "raid_bdev1", 00:16:55.102 "aliases": [ 00:16:55.102 "ddcc8e38-ce0b-457e-80b1-39c807a5efde" 00:16:55.102 ], 00:16:55.102 "product_name": "Raid Volume", 00:16:55.102 "block_size": 512, 00:16:55.102 "num_blocks": 63488, 00:16:55.102 "uuid": "ddcc8e38-ce0b-457e-80b1-39c807a5efde", 00:16:55.102 "assigned_rate_limits": { 00:16:55.102 "rw_ios_per_sec": 0, 00:16:55.102 "rw_mbytes_per_sec": 0, 00:16:55.102 "r_mbytes_per_sec": 0, 00:16:55.102 "w_mbytes_per_sec": 0 00:16:55.102 }, 00:16:55.102 "claimed": false, 00:16:55.102 "zoned": false, 00:16:55.102 "supported_io_types": { 00:16:55.102 "read": true, 00:16:55.102 "write": true, 00:16:55.102 "unmap": false, 00:16:55.102 "write_zeroes": true, 00:16:55.102 "flush": false, 00:16:55.102 "reset": true, 00:16:55.102 "compare": false, 00:16:55.102 "compare_and_write": false, 00:16:55.102 "abort": false, 00:16:55.102 "nvme_admin": false, 00:16:55.102 "nvme_io": false 00:16:55.102 }, 00:16:55.102 "memory_domains": [ 00:16:55.102 { 00:16:55.102 "dma_device_id": "system", 00:16:55.102 "dma_device_type": 1 00:16:55.102 }, 00:16:55.102 { 00:16:55.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.102 "dma_device_type": 2 00:16:55.102 }, 00:16:55.102 { 00:16:55.102 "dma_device_id": "system", 00:16:55.102 "dma_device_type": 1 00:16:55.102 }, 00:16:55.102 { 00:16:55.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.102 "dma_device_type": 2 00:16:55.102 } 00:16:55.102 ], 00:16:55.102 "driver_specific": { 00:16:55.102 "raid": { 00:16:55.102 "uuid": "ddcc8e38-ce0b-457e-80b1-39c807a5efde", 00:16:55.102 "strip_size_kb": 0, 00:16:55.102 "state": "online", 00:16:55.102 "raid_level": "raid1", 00:16:55.102 "superblock": true, 00:16:55.102 "num_base_bdevs": 2, 00:16:55.102 "num_base_bdevs_discovered": 2, 00:16:55.102 "num_base_bdevs_operational": 2, 00:16:55.102 "base_bdevs_list": [ 00:16:55.102 { 00:16:55.102 "name": "pt1", 00:16:55.102 "uuid": "c5338dbf-7c09-581d-9419-76b69d06aaea", 00:16:55.102 
"is_configured": true, 00:16:55.102 "data_offset": 2048, 00:16:55.102 "data_size": 63488 00:16:55.102 }, 00:16:55.102 { 00:16:55.102 "name": "pt2", 00:16:55.102 "uuid": "87f11548-480a-552a-87e6-5960a77f9e81", 00:16:55.102 "is_configured": true, 00:16:55.102 "data_offset": 2048, 00:16:55.102 "data_size": 63488 00:16:55.102 } 00:16:55.102 ] 00:16:55.102 } 00:16:55.102 } 00:16:55.102 }' 00:16:55.102 07:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:55.401 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:55.401 pt2' 00:16:55.401 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:55.401 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:55.401 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:55.660 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:55.660 "name": "pt1", 00:16:55.660 "aliases": [ 00:16:55.660 "c5338dbf-7c09-581d-9419-76b69d06aaea" 00:16:55.660 ], 00:16:55.660 "product_name": "passthru", 00:16:55.660 "block_size": 512, 00:16:55.660 "num_blocks": 65536, 00:16:55.660 "uuid": "c5338dbf-7c09-581d-9419-76b69d06aaea", 00:16:55.660 "assigned_rate_limits": { 00:16:55.660 "rw_ios_per_sec": 0, 00:16:55.660 "rw_mbytes_per_sec": 0, 00:16:55.660 "r_mbytes_per_sec": 0, 00:16:55.660 "w_mbytes_per_sec": 0 00:16:55.660 }, 00:16:55.660 "claimed": true, 00:16:55.660 "claim_type": "exclusive_write", 00:16:55.660 "zoned": false, 00:16:55.660 "supported_io_types": { 00:16:55.660 "read": true, 00:16:55.660 "write": true, 00:16:55.660 "unmap": true, 00:16:55.660 "write_zeroes": true, 00:16:55.660 "flush": true, 00:16:55.660 "reset": true, 00:16:55.660 "compare": false, 00:16:55.660 "compare_and_write": false, 00:16:55.660 "abort": true, 00:16:55.660 "nvme_admin": false, 00:16:55.660 "nvme_io": false 00:16:55.660 }, 00:16:55.660 "memory_domains": [ 00:16:55.660 { 00:16:55.660 "dma_device_id": "system", 00:16:55.660 "dma_device_type": 1 00:16:55.660 }, 00:16:55.660 { 00:16:55.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.661 "dma_device_type": 2 00:16:55.661 } 00:16:55.661 ], 00:16:55.661 "driver_specific": { 00:16:55.661 "passthru": { 00:16:55.661 "name": "pt1", 00:16:55.661 "base_bdev_name": "malloc1" 00:16:55.661 } 00:16:55.661 } 00:16:55.661 }' 00:16:55.661 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:55.661 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:55.661 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:55.661 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:55.661 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:55.661 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:55.661 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:55.661 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:55.920 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:55.920 07:27:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:55.920 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:55.920 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:55.920 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:55.920 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:55.920 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:56.179 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:56.179 "name": "pt2", 00:16:56.179 "aliases": [ 00:16:56.179 "87f11548-480a-552a-87e6-5960a77f9e81" 00:16:56.179 ], 00:16:56.179 "product_name": "passthru", 00:16:56.179 "block_size": 512, 00:16:56.179 "num_blocks": 65536, 00:16:56.179 "uuid": "87f11548-480a-552a-87e6-5960a77f9e81", 00:16:56.179 "assigned_rate_limits": { 00:16:56.179 "rw_ios_per_sec": 0, 00:16:56.179 "rw_mbytes_per_sec": 0, 00:16:56.179 "r_mbytes_per_sec": 0, 00:16:56.179 "w_mbytes_per_sec": 0 00:16:56.179 }, 00:16:56.179 "claimed": true, 00:16:56.179 "claim_type": "exclusive_write", 00:16:56.179 "zoned": false, 00:16:56.179 "supported_io_types": { 00:16:56.179 "read": true, 00:16:56.179 "write": true, 00:16:56.179 "unmap": true, 00:16:56.179 "write_zeroes": true, 00:16:56.179 "flush": true, 00:16:56.179 "reset": true, 00:16:56.179 "compare": false, 00:16:56.179 "compare_and_write": false, 00:16:56.179 "abort": true, 00:16:56.179 "nvme_admin": false, 00:16:56.179 "nvme_io": false 00:16:56.179 }, 00:16:56.179 "memory_domains": [ 00:16:56.179 { 00:16:56.179 "dma_device_id": "system", 00:16:56.179 "dma_device_type": 1 00:16:56.179 }, 00:16:56.179 { 00:16:56.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.179 "dma_device_type": 2 00:16:56.179 } 00:16:56.179 ], 00:16:56.179 "driver_specific": { 00:16:56.179 "passthru": { 00:16:56.179 "name": "pt2", 00:16:56.179 "base_bdev_name": "malloc2" 00:16:56.179 } 00:16:56.179 } 00:16:56.179 }' 00:16:56.179 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:56.179 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:56.179 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:56.179 07:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:56.179 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:56.179 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:56.179 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:56.437 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:56.437 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:56.437 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:56.437 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:56.437 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:56.437 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:56.437 07:27:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:56.695 [2024-07-12 07:27:30.422363] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.695 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' ddcc8e38-ce0b-457e-80b1-39c807a5efde '!=' ddcc8e38-ce0b-457e-80b1-39c807a5efde ']' 00:16:56.695 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:16:56.695 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:56.695 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:56.695 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:56.952 [2024-07-12 07:27:30.714260] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:56.952 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:56.952 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:56.952 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:56.952 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:56.952 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:56.952 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:56.952 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:56.952 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:56.952 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:56.952 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:56.952 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.952 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.211 07:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.211 "name": "raid_bdev1", 00:16:57.211 "uuid": "ddcc8e38-ce0b-457e-80b1-39c807a5efde", 00:16:57.211 "strip_size_kb": 0, 00:16:57.211 "state": "online", 00:16:57.211 "raid_level": "raid1", 00:16:57.211 "superblock": true, 00:16:57.211 "num_base_bdevs": 2, 00:16:57.211 "num_base_bdevs_discovered": 1, 00:16:57.211 "num_base_bdevs_operational": 1, 00:16:57.211 "base_bdevs_list": [ 00:16:57.211 { 00:16:57.211 "name": null, 00:16:57.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.211 "is_configured": false, 00:16:57.211 "data_offset": 2048, 00:16:57.211 "data_size": 63488 00:16:57.211 }, 00:16:57.211 { 00:16:57.211 "name": "pt2", 00:16:57.211 "uuid": "87f11548-480a-552a-87e6-5960a77f9e81", 00:16:57.211 "is_configured": true, 00:16:57.211 "data_offset": 2048, 00:16:57.211 "data_size": 63488 00:16:57.211 } 00:16:57.211 ] 00:16:57.211 }' 00:16:57.211 07:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.211 07:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.778 07:27:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:58.037 [2024-07-12 07:27:31.770425] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.037 [2024-07-12 07:27:31.770478] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.037 [2024-07-12 07:27:31.770568] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.037 [2024-07-12 07:27:31.770630] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.037 [2024-07-12 07:27:31.770640] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:16:58.037 07:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.037 07:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:16:58.296 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:16:58.296 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:16:58.296 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:16:58.296 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:58.296 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:58.554 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:58.554 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:58.554 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:16:58.554 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:58.554 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:16:58.554 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:58.813 [2024-07-12 07:27:32.450542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:58.813 [2024-07-12 07:27:32.450676] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.813 [2024-07-12 07:27:32.450715] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:58.813 [2024-07-12 07:27:32.450764] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.813 [2024-07-12 07:27:32.453772] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.813 [2024-07-12 07:27:32.453833] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:58.813 [2024-07-12 07:27:32.453943] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:58.813 [2024-07-12 07:27:32.453983] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:58.813 [2024-07-12 07:27:32.454092] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:16:58.813 [2024-07-12 07:27:32.454100] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:58.814 [2024-07-12 07:27:32.454177] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:16:58.814 [2024-07-12 07:27:32.454476] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:16:58.814 [2024-07-12 07:27:32.454496] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:16:58.814 [2024-07-12 07:27:32.454625] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.814 pt2 00:16:58.814 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:58.814 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:58.814 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:58.814 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:58.814 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:58.814 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:58.814 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:58.814 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:58.814 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:58.814 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:58.814 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.814 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.072 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:59.072 "name": "raid_bdev1", 00:16:59.072 "uuid": "ddcc8e38-ce0b-457e-80b1-39c807a5efde", 00:16:59.072 "strip_size_kb": 0, 00:16:59.072 "state": "online", 00:16:59.072 "raid_level": "raid1", 00:16:59.072 "superblock": true, 00:16:59.072 "num_base_bdevs": 2, 00:16:59.072 "num_base_bdevs_discovered": 1, 00:16:59.072 "num_base_bdevs_operational": 1, 00:16:59.072 "base_bdevs_list": [ 00:16:59.072 { 00:16:59.072 "name": null, 00:16:59.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.072 "is_configured": false, 00:16:59.072 "data_offset": 2048, 00:16:59.072 "data_size": 63488 00:16:59.072 }, 00:16:59.072 { 00:16:59.072 "name": "pt2", 00:16:59.072 "uuid": "87f11548-480a-552a-87e6-5960a77f9e81", 00:16:59.072 "is_configured": true, 00:16:59.072 "data_offset": 2048, 00:16:59.072 "data_size": 63488 00:16:59.072 } 00:16:59.072 ] 00:16:59.072 }' 00:16:59.072 07:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:59.072 07:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.638 07:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:59.638 [2024-07-12 07:27:33.490866] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:59.638 [2024-07-12 07:27:33.490917] 
bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.638 [2024-07-12 07:27:33.491003] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.638 [2024-07-12 07:27:33.491059] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:59.638 [2024-07-12 07:27:33.491069] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:16:59.638 07:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:16:59.638 07:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.205 07:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:17:00.205 07:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:17:00.205 07:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:17:00.205 07:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:00.205 [2024-07-12 07:27:33.974929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:00.205 [2024-07-12 07:27:33.975078] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.205 [2024-07-12 07:27:33.975131] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:17:00.205 [2024-07-12 07:27:33.975157] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.205 [2024-07-12 07:27:33.978000] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.205 [2024-07-12 07:27:33.978055] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:00.205 [2024-07-12 07:27:33.978156] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:00.205 [2024-07-12 07:27:33.978192] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:00.205 [2024-07-12 07:27:33.978378] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:00.205 [2024-07-12 07:27:33.978389] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:00.205 [2024-07-12 07:27:33.978416] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:17:00.205 [2024-07-12 07:27:33.978485] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:00.205 [2024-07-12 07:27:33.978574] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:17:00.205 [2024-07-12 07:27:33.978583] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:00.205 [2024-07-12 07:27:33.978659] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:17:00.205 [2024-07-12 07:27:33.978992] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:17:00.205 [2024-07-12 07:27:33.979012] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:17:00.205 [2024-07-12 
07:27:33.979158] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.205 pt1 00:17:00.205 07:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:17:00.205 07:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:00.205 07:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:00.205 07:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:00.205 07:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:00.205 07:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:00.205 07:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:00.205 07:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:00.205 07:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:00.205 07:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:00.205 07:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:00.205 07:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.205 07:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.464 07:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:00.464 "name": "raid_bdev1", 00:17:00.464 "uuid": "ddcc8e38-ce0b-457e-80b1-39c807a5efde", 00:17:00.464 "strip_size_kb": 0, 00:17:00.464 "state": "online", 00:17:00.464 "raid_level": "raid1", 00:17:00.464 "superblock": true, 00:17:00.464 "num_base_bdevs": 2, 00:17:00.464 "num_base_bdevs_discovered": 1, 00:17:00.464 "num_base_bdevs_operational": 1, 00:17:00.464 "base_bdevs_list": [ 00:17:00.464 { 00:17:00.464 "name": null, 00:17:00.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.464 "is_configured": false, 00:17:00.464 "data_offset": 2048, 00:17:00.464 "data_size": 63488 00:17:00.464 }, 00:17:00.464 { 00:17:00.464 "name": "pt2", 00:17:00.464 "uuid": "87f11548-480a-552a-87e6-5960a77f9e81", 00:17:00.464 "is_configured": true, 00:17:00.464 "data_offset": 2048, 00:17:00.464 "data_size": 63488 00:17:00.464 } 00:17:00.464 ] 00:17:00.464 }' 00:17:00.464 07:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:00.464 07:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.032 07:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:01.032 07:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:01.291 07:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:17:01.291 07:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:01.291 07:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:17:01.550 [2024-07-12 07:27:35.255585] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:01.550 07:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' ddcc8e38-ce0b-457e-80b1-39c807a5efde '!=' ddcc8e38-ce0b-457e-80b1-39c807a5efde ']'
00:17:01.550 07:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 134678
00:17:01.550 07:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 134678 ']'
00:17:01.550 07:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 134678
00:17:01.550 07:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname
00:17:01.550 07:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:17:01.550 07:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 134678
00:17:01.550 07:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:17:01.550 07:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:17:01.550 07:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 134678'
killing process with pid 134678
00:17:01.550 07:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 134678
00:17:01.550 [2024-07-12 07:27:35.309692] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:01.550 [2024-07-12 07:27:35.309791] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:01.550 [2024-07-12 07:27:35.309848] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:01.550 [2024-07-12 07:27:35.309858] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline
00:17:01.550 07:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 134678
00:17:01.550 [2024-07-12 07:27:35.351340] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:02.119 07:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0
00:17:02.119
00:17:02.119 real 0m15.384s
00:17:02.119 user 0m27.786s
00:17:02.119 sys 0m2.691s
00:17:02.119 07:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable
00:17:02.119 07:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.119 ************************************
00:17:02.119 END TEST raid_superblock_test
00:17:02.119 ************************************
00:17:02.119 07:27:35 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read
00:17:02.119 07:27:35 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']'
00:17:02.119 07:27:35 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable
00:17:02.119 07:27:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:17:02.119 ************************************
00:17:02.119 START TEST raid_read_error_test
00:17:02.119 ************************************
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 2 read
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 ))
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ ))
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ ))
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']'
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.73RbsoWNlh
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=135204
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 135204 /var/tmp/spdk-raid.sock
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 135204 ']'
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable
00:17:02.119 07:27:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:02.119 [2024-07-12 07:27:35.918245] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:17:02.119 [2024-07-12 07:27:35.918519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135204 ]
00:17:02.378 [2024-07-12 07:27:36.079449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:02.378 [2024-07-12 07:27:36.173442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:02.378 [2024-07-12 07:27:36.259176] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:02.944 07:27:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:17:02.944 07:27:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0
00:17:02.944 07:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}"
00:17:02.944 07:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:17:03.203 BaseBdev1_malloc
00:17:03.203 07:27:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
00:17:03.462 true
00:17:03.462 07:27:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:17:03.721 [2024-07-12 07:27:37.479263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:17:03.721 [2024-07-12 07:27:37.479396] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:03.721 [2024-07-12 07:27:37.479456] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80
00:17:03.721 [2024-07-12 07:27:37.479511] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:03.721 [2024-07-12 07:27:37.482631] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:03.721 [2024-07-12 07:27:37.482689] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:17:03.721 BaseBdev1
00:17:03.721 07:27:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}"
00:17:03.721 07:27:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:17:03.980 BaseBdev2_malloc
00:17:03.980 07:27:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc
00:17:04.239 true
00:17:04.239 07:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:17:04.497 [2024-07-12 07:27:38.223174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:17:04.497 [2024-07-12 07:27:38.223296] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:04.497 [2024-07-12 07:27:38.223347] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80
00:17:04.497 [2024-07-12 07:27:38.223397] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:04.497 [2024-07-12 07:27:38.226444] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:04.497 [2024-07-12 07:27:38.226513] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:17:04.497 BaseBdev2
00:17:04.497 07:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
00:17:04.756 [2024-07-12 07:27:38.419516] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:04.756 [2024-07-12 07:27:38.422114] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:04.756 [2024-07-12 07:27:38.422411] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280
00:17:04.756 [2024-07-12 07:27:38.422426] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:17:04.756 [2024-07-12 07:27:38.422607] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120
00:17:04.756 [2024-07-12 07:27:38.423092] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280
00:17:04.756 [2024-07-12 07:27:38.423111] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280
00:17:04.756 [2024-07-12 07:27:38.423359] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:04.756 07:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:04.756 07:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:17:04.756 07:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:17:04.756 07:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:17:04.756 07:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:17:04.756 07:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:17:04.756 07:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:17:04.756 07:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:17:04.756 07:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:17:04.756 07:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:17:04.756 07:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:04.756 07:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:05.014 07:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:17:05.014 "name": "raid_bdev1",
00:17:05.014 "uuid": "f2c0da3c-d803-426d-a025-a19218a99a56",
00:17:05.014 "strip_size_kb": 0,
00:17:05.014 "state": "online",
00:17:05.014 "raid_level": "raid1",
00:17:05.014 "superblock": true,
00:17:05.014 "num_base_bdevs": 2,
00:17:05.014 "num_base_bdevs_discovered": 2,
00:17:05.014 "num_base_bdevs_operational": 2,
00:17:05.014 "base_bdevs_list": [
00:17:05.014 {
00:17:05.014 "name": "BaseBdev1",
00:17:05.014 "uuid": "6b39ffc1-d462-5aa9-ab30-777d10e45ffe",
00:17:05.014 "is_configured": true,
00:17:05.014 "data_offset": 2048,
00:17:05.014 "data_size": 63488
00:17:05.014 },
00:17:05.014 {
00:17:05.014 "name": "BaseBdev2",
00:17:05.014 "uuid": "5322ee2a-1699-5d08-9087-0b40b16c7241",
00:17:05.014 "is_configured": true,
00:17:05.014 "data_offset": 2048,
00:17:05.014 "data_size": 63488
00:17:05.014 }
00:17:05.014 ]
00:17:05.014 }'
00:17:05.014 07:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:17:05.014 07:27:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:05.589 07:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1
00:17:05.589 07:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:17:05.589 [2024-07-12 07:27:39.348167] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0
00:17:06.520 07:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:17:06.778 07:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs
00:17:06.778 07:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]]
00:17:06.778 07:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]]
00:17:06.778 07:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2
00:17:06.778 07:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:06.778 07:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:17:06.778 07:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:17:06.778 07:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:17:06.778 07:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:17:06.778 07:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:17:06.778 07:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:17:06.778 07:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:17:06.778 07:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:17:06.778 07:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:17:06.778 07:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:06.778 07:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:07.036 07:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:17:07.036 "name": "raid_bdev1",
00:17:07.036 "uuid": "f2c0da3c-d803-426d-a025-a19218a99a56",
00:17:07.036 "strip_size_kb": 0,
00:17:07.036 "state": "online",
00:17:07.036 "raid_level": "raid1",
00:17:07.036 "superblock": true,
00:17:07.036 "num_base_bdevs": 2,
00:17:07.036 "num_base_bdevs_discovered": 2,
00:17:07.036 "num_base_bdevs_operational": 2,
00:17:07.036 "base_bdevs_list": [
00:17:07.036 {
00:17:07.036 "name": "BaseBdev1",
00:17:07.036 "uuid": "6b39ffc1-d462-5aa9-ab30-777d10e45ffe",
00:17:07.036 "is_configured": true,
00:17:07.036 "data_offset": 2048,
00:17:07.036 "data_size": 63488
00:17:07.036 },
00:17:07.036 {
00:17:07.036 "name": "BaseBdev2",
00:17:07.036 "uuid": "5322ee2a-1699-5d08-9087-0b40b16c7241",
00:17:07.036 "is_configured": true,
00:17:07.036 "data_offset": 2048,
00:17:07.036 "data_size": 63488
00:17:07.036 }
00:17:07.036 ]
00:17:07.036 }'
00:17:07.036 07:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:17:07.036 07:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:07.695 07:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:17:07.954 [2024-07-12 07:27:41.654727] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:07.954 [2024-07-12 07:27:41.654786] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:07.954 [2024-07-12 07:27:41.657289] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:07.954 [2024-07-12 07:27:41.657345] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:07.954 [2024-07-12 07:27:41.657427] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:07.954 [2024-07-12 07:27:41.657437] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline
00:17:07.954 0
00:17:07.954 07:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 135204
00:17:07.954 07:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 135204 ']'
00:17:07.954 07:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 135204
00:17:07.954 07:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname
00:17:07.954 07:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:17:07.954 07:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 135204
00:17:07.954 07:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:17:07.954 07:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
killing process with pid 135204
07:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 135204'
07:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 135204
00:17:07.954 [2024-07-12 07:27:41.706079] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:07.954 07:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 135204
00:17:07.954 [2024-07-12 07:27:41.734979] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:08.521 07:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.73RbsoWNlh
00:17:08.521 07:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}'
00:17:08.521 07:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1
************************************
00:17:08.521 END TEST raid_read_error_test
00:17:08.521 ************************************
07:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00
00:17:08.521 07:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1
00:17:08.521 07:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in
00:17:08.521 07:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0
00:17:08.522 07:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]]
00:17:08.522
00:17:08.522 real 0m6.327s
00:17:08.522 user 0m9.620s
00:17:08.522 sys 0m1.166s
00:17:08.522 07:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable
00:17:08.522 07:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:08.522 07:27:42 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write
00:17:08.522 07:27:42 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']'
00:17:08.522 07:27:42 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable
00:17:08.522 07:27:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:17:08.522 ************************************
00:17:08.522 START TEST raid_write_error_test
00:17:08.522 ************************************
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 2 write
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 ))
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ ))
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ ))
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']'
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.f6eorWmHE8
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=135384
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 135384 /var/tmp/spdk-raid.sock
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 135384 ']'
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable
00:17:08.522 07:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:08.522 [2024-07-12 07:27:42.328009] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:17:08.522 [2024-07-12 07:27:42.328262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135384 ]
00:17:08.781 [2024-07-12 07:27:42.486125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:08.781 [2024-07-12 07:27:42.574860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:08.781 [2024-07-12 07:27:42.655348] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:09.715 07:27:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:17:09.715 07:27:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0
00:17:09.715 07:27:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}"
00:17:09.715 07:27:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:17:09.974 BaseBdev1_malloc
00:17:09.974 07:27:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
00:17:10.232 true
00:17:10.232 07:27:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:17:10.232 [2024-07-12 07:27:43.980600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:17:10.232 [2024-07-12 07:27:43.980941] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:10.232 [2024-07-12 07:27:43.981032] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80
00:17:10.232 [2024-07-12 07:27:43.981176] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:10.232 [2024-07-12 07:27:43.984268] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:10.232 [2024-07-12 07:27:43.984447] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:17:10.232 BaseBdev1
00:17:10.232 07:27:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}"
00:17:10.232 07:27:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:17:10.489 BaseBdev2_malloc
00:17:10.489 07:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc
00:17:10.747 true
00:17:10.747 07:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:17:11.004 [2024-07-12 07:27:44.648685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:17:11.005 [2024-07-12 07:27:44.648938] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:11.005 [2024-07-12 07:27:44.649026] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80
00:17:11.005 [2024-07-12 07:27:44.649156] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:11.005 [2024-07-12 07:27:44.652179] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:11.005 [2024-07-12 07:27:44.652340] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:17:11.005 BaseBdev2
00:17:11.005 07:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
00:17:11.263 [2024-07-12 07:27:44.928940] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:11.263 [2024-07-12 07:27:44.931741] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:11.263 [2024-07-12 07:27:44.932140] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280
00:17:11.263 [2024-07-12 07:27:44.932252] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:17:11.263 [2024-07-12 07:27:44.932470] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120
00:17:11.263 [2024-07-12 07:27:44.933018] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280
00:17:11.263 [2024-07-12 07:27:44.933127] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280
00:17:11.263 [2024-07-12 07:27:44.933438] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:11.263 07:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:11.263 07:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:17:11.263 07:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:17:11.263 07:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:17:11.263 07:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:17:11.263 07:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:17:11.263 07:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:17:11.263 07:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:17:11.263 07:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:17:11.263 07:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:17:11.263 07:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:11.263 07:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:11.521 07:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:17:11.521 "name": "raid_bdev1",
00:17:11.522 "uuid": "5437405f-71c5-488b-b855-932ab9cea2a8",
00:17:11.522 "strip_size_kb": 0,
00:17:11.522 "state": "online",
00:17:11.522 "raid_level": "raid1",
00:17:11.522 "superblock": true,
00:17:11.522 "num_base_bdevs": 2,
00:17:11.522 "num_base_bdevs_discovered": 2,
00:17:11.522 "num_base_bdevs_operational": 2,
00:17:11.522 "base_bdevs_list": [
00:17:11.522 {
00:17:11.522 "name": "BaseBdev1",
00:17:11.522 "uuid": "00ca6745-50d6-5130-822a-56438a5f042e",
00:17:11.522 "is_configured": true,
00:17:11.522 "data_offset": 2048,
00:17:11.522 "data_size": 63488
00:17:11.522 },
00:17:11.522 {
00:17:11.522 "name": "BaseBdev2",
00:17:11.522 "uuid": "4a99c19f-1b66-52ea-834e-542c7014818c",
00:17:11.522 "is_configured": true,
00:17:11.522 "data_offset": 2048,
00:17:11.522 "data_size": 63488
00:17:11.522 }
00:17:11.522 ]
00:17:11.522 }'
00:17:11.522 07:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:17:11.522 07:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:12.088 07:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1
00:17:12.088 07:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:17:12.088 [2024-07-12 07:27:45.854164] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0
00:17:13.025 07:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:17:13.283 [2024-07-12 07:27:47.044559] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1'
00:17:13.283 [2024-07-12 07:27:47.044897] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:17:13.283 [2024-07-12 07:27:47.045193] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000022c0
00:17:13.283 07:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs
00:17:13.283 07:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]]
00:17:13.283 07:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]]
00:17:13.283 07:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1
00:17:13.283 07:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:13.283 07:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:17:13.283 07:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:17:13.283 07:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:17:13.283 07:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:17:13.283 07:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:17:13.283 07:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:17:13.283 07:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:17:13.283 07:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:17:13.283 07:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:17:13.283 07:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:13.283 07:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:13.541 07:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:17:13.541 "name": "raid_bdev1",
00:17:13.541 "uuid": "5437405f-71c5-488b-b855-932ab9cea2a8",
00:17:13.541 "strip_size_kb": 0,
00:17:13.541 "state": "online",
00:17:13.541 "raid_level": "raid1",
00:17:13.541 "superblock": true,
00:17:13.541 "num_base_bdevs": 2,
00:17:13.541 "num_base_bdevs_discovered": 1,
00:17:13.541 "num_base_bdevs_operational": 1,
00:17:13.541 "base_bdevs_list": [
00:17:13.541 {
00:17:13.541 "name": null,
00:17:13.541 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:13.541 "is_configured": false,
00:17:13.541 "data_offset": 2048,
00:17:13.541 "data_size": 63488
00:17:13.541 },
00:17:13.541 {
00:17:13.541 "name": "BaseBdev2",
00:17:13.541 "uuid": "4a99c19f-1b66-52ea-834e-542c7014818c",
00:17:13.541 "is_configured": true,
00:17:13.541 "data_offset": 2048,
00:17:13.541 "data_size": 63488
00:17:13.541 }
00:17:13.541 ]
00:17:13.541 }'
00:17:13.541 07:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:17:13.541 07:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:14.107 07:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:17:14.365 [2024-07-12 07:27:48.194419] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:14.365 [2024-07-12 07:27:48.194545] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:14.365 [2024-07-12 07:27:48.197165] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:14.365 [2024-07-12 07:27:48.197455] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:14.365 [2024-07-12 07:27:48.197546] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:14.365 [2024-07-12 07:27:48.197797] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline
00:17:14.365 0
00:17:14.365 07:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 135384
00:17:14.365 07:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 135384 ']'
00:17:14.365 07:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 135384
00:17:14.365 07:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname
00:17:14.365 07:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:17:14.365 07:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 135384
00:17:14.623 07:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:17:14.623 07:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:17:14.623 07:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 135384'
killing process with pid 135384
00:17:14.623 07:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 135384
00:17:14.623 [2024-07-12 07:27:48.252008] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:14.623 07:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 135384
00:17:14.623 [2024-07-12 07:27:48.280666] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:14.881 07:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.f6eorWmHE8
00:17:14.881 07:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}'
00:17:14.881 07:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1
00:17:14.881 07:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00
00:17:14.881 07:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1
00:17:14.881 07:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in
00:17:14.881 07:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0
00:17:14.881 07:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]]
00:17:14.881
00:17:14.881 real 0m6.478s
user 0m9.813s
sys 0m1.223s
07:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable
07:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:17:14.881 ************************************
00:17:14.881 END TEST raid_write_error_test
00:17:14.881 ************************************
00:17:14.881 07:27:48 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4}
00:17:14.881 07:27:48 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1
00:17:14.881 07:27:48 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false
00:17:14.881 07:27:48 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']'
00:17:14.881 07:27:48 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable
00:17:14.881 07:27:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:17:15.138 ************************************
00:17:15.138 START TEST raid_state_function_test
00:17:15.138 ************************************
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 3 false
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 ))
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']'
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64'
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']'
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg=
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=135574
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 135574'
Process raid pid: 135574
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 135574 /var/tmp/spdk-raid.sock
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 135574 ']'
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable
00:17:15.138 07:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:15.138 [2024-07-12 07:27:48.833719] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:17:15.138 [2024-07-12 07:27:48.833941] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:15.138 [2024-07-12 07:27:48.978924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:15.396 [2024-07-12 07:27:49.064101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:15.396 [2024-07-12 07:27:49.143713] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:16.330 07:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:17:16.330 07:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0
00:17:16.330 07:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:17:16.330 [2024-07-12 07:27:50.017059] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:16.330 [2024-07-12 07:27:50.017196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:16.330 [2024-07-12 07:27:50.017210] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:16.330 [2024-07-12 07:27:50.017232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:16.330 [2024-07-12 07:27:50.017240] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:16.330 [2024-07-12 07:27:50.017300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:16.330 07:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:17:16.330 07:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:17:16.330 07:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:17:16.330 07:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:17:16.330 07:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:17:16.330 07:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:17:16.330 07:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:17:16.330 07:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:17:16.330 07:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:17:16.330 07:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:17:16.330 07:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:16.330 07:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:16.588 07:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:17:16.588 "name": "Existed_Raid",
00:17:16.588 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:16.588 "strip_size_kb": 64,
00:17:16.588 "state": "configuring",
00:17:16.588 "raid_level": "raid0",
00:17:16.588 "superblock": false,
00:17:16.588 "num_base_bdevs": 3,
00:17:16.588 "num_base_bdevs_discovered": 0,
00:17:16.588 "num_base_bdevs_operational": 3,
00:17:16.588 "base_bdevs_list": [
00:17:16.588 {
00:17:16.588 "name": "BaseBdev1",
00:17:16.588 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:16.588 "is_configured": false,
00:17:16.588 "data_offset": 0,
00:17:16.588 "data_size": 0
00:17:16.588 },
00:17:16.588 {
00:17:16.588 "name": "BaseBdev2",
00:17:16.588 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:16.588 "is_configured": false,
00:17:16.588 "data_offset": 0,
00:17:16.588 "data_size": 0
00:17:16.588 },
00:17:16.588 {
00:17:16.588 "name": "BaseBdev3",
00:17:16.588 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:16.588 "is_configured": false,
00:17:16.588 "data_offset": 0,
00:17:16.588 "data_size": 0
00:17:16.588 }
00:17:16.588 ]
00:17:16.588 }'
00:17:16.588 07:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:17:16.588 07:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:17.154 07:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:17:17.414 [2024-07-12 07:27:51.089065] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:17.414 [2024-07-12 07:27:51.089126] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:17:17.414 07:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:17:17.684 [2024-07-12 07:27:51.329078] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:17.684 [2024-07-12 07:27:51.329171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:17.684 [2024-07-12 07:27:51.329183] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:17.684 [2024-07-12 07:27:51.329225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:17.684 [2024-07-12 07:27:51.329232] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:17.684 [2024-07-12 07:27:51.329274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:17.684 07:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:17:17.942 [2024-07-12 07:27:51.573472] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:17.942 BaseBdev1
00:17:17.942 07:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1
00:17:17.942 07:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1
00:17:17.942 07:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout=
00:17:17.942 07:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i
00:17:17.942 07:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]]
00:17:17.942 07:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000
00:17:17.942 07:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:18.201 07:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:17:18.201 [
00:17:18.201 {
00:17:18.201 "name": "BaseBdev1",
00:17:18.201 "aliases": [
00:17:18.201 "321c5d04-849e-457e-884d-e1720ab55282"
00:17:18.201 ],
00:17:18.201 "product_name": "Malloc disk",
00:17:18.201 "block_size": 512,
00:17:18.201 "num_blocks": 65536,
00:17:18.201 "uuid": "321c5d04-849e-457e-884d-e1720ab55282",
00:17:18.201 "assigned_rate_limits": {
00:17:18.201 "rw_ios_per_sec": 0,
00:17:18.201 "rw_mbytes_per_sec": 0,
00:17:18.201 "r_mbytes_per_sec": 0,
00:17:18.201 "w_mbytes_per_sec": 0
00:17:18.201 },
00:17:18.201 "claimed": true,
00:17:18.201 "claim_type": "exclusive_write",
00:17:18.201 "zoned": false,
00:17:18.201 "supported_io_types": {
00:17:18.201 "read": true,
00:17:18.201 "write": true,
00:17:18.201 "unmap": true,
00:17:18.201 "write_zeroes": true,
00:17:18.201 "flush": true,
00:17:18.201 "reset": true,
00:17:18.201 "compare": false,
00:17:18.201 "compare_and_write": false,
00:17:18.201 "abort": true,
00:17:18.201 "nvme_admin": false,
00:17:18.201 "nvme_io": false
00:17:18.201 },
00:17:18.201 "memory_domains": [
00:17:18.201 {
00:17:18.201 "dma_device_id": "system",
00:17:18.201 "dma_device_type": 1
00:17:18.201 },
00:17:18.201 {
00:17:18.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:18.201 "dma_device_type": 2
00:17:18.201 }
00:17:18.201 ],
00:17:18.201 "driver_specific": {}
00:17:18.201 }
00:17:18.201 ]
00:17:18.460 07:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0
00:17:18.460 07:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:17:18.460 07:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:17:18.460 07:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:17:18.460 07:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:17:18.460 07:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:17:18.460 07:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:17:18.460 07:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:17:18.460 07:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:17:18.460 07:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:17:18.460 07:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:17:18.460 07:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:18.460 07:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:18.718 07:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:17:18.718 "name": "Existed_Raid",
00:17:18.718 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:18.718 "strip_size_kb": 64,
00:17:18.718 "state": "configuring",
00:17:18.718 "raid_level": "raid0",
00:17:18.718 "superblock": false,
00:17:18.718 "num_base_bdevs": 3,
00:17:18.718 "num_base_bdevs_discovered": 1,
00:17:18.718 "num_base_bdevs_operational": 3,
00:17:18.718 "base_bdevs_list": [
00:17:18.718 {
00:17:18.718 "name": "BaseBdev1",
00:17:18.718 "uuid": "321c5d04-849e-457e-884d-e1720ab55282",
00:17:18.718 "is_configured": true,
00:17:18.718 "data_offset": 0,
00:17:18.718 "data_size": 65536
00:17:18.718 },
00:17:18.718 {
00:17:18.718 "name": "BaseBdev2",
00:17:18.718 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:18.718 "is_configured": false,
00:17:18.718 "data_offset": 0,
00:17:18.718 "data_size": 0
00:17:18.718 },
00:17:18.718 {
00:17:18.718 "name": "BaseBdev3",
00:17:18.718 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:18.718 "is_configured": false,
00:17:18.718 "data_offset": 0,
00:17:18.718 "data_size": 0
00:17:18.718 }
00:17:18.718 ]
00:17:18.718 }'
00:17:18.718 07:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:17:18.718 07:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:19.286 07:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:17:19.544 [2024-07-12 07:27:53.189889] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:19.544 [2024-07-12 07:27:53.189988] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:17:19.544 07:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:17:19.802 [2024-07-12 07:27:53.473983] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:19.802 [2024-07-12 07:27:53.476546] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:19.802 [2024-07-12 07:27:53.476633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:19.802 [2024-07-12 07:27:53.476645] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:19.802 [2024-07-12 07:27:53.476670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:19.802 07:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 ))
00:17:19.802 07:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:17:19.802 07:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:17:19.802 07:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:17:19.802 07:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:17:19.802 07:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:17:19.802 07:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:17:19.802 07:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:17:19.802 07:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:17:19.802 07:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:17:19.802 07:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:17:19.802 07:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:17:19.802 07:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:19.802 07:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:20.061 07:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:17:20.061 "name": "Existed_Raid",
00:17:20.061 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:20.061 "strip_size_kb": 64,
00:17:20.061 "state": "configuring",
00:17:20.061 "raid_level": "raid0",
00:17:20.061 "superblock": false,
00:17:20.061 "num_base_bdevs": 3,
00:17:20.061 "num_base_bdevs_discovered": 1,
00:17:20.061 "num_base_bdevs_operational": 3,
00:17:20.061 "base_bdevs_list": [
00:17:20.061 {
00:17:20.061 "name": "BaseBdev1",
00:17:20.061 "uuid": "321c5d04-849e-457e-884d-e1720ab55282",
00:17:20.061 "is_configured": true,
00:17:20.061 "data_offset": 0,
00:17:20.061 "data_size": 65536
00:17:20.061 },
00:17:20.061 {
00:17:20.061 "name": "BaseBdev2",
00:17:20.061 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:20.061 "is_configured": false,
00:17:20.061 "data_offset": 0,
00:17:20.061 "data_size": 0
00:17:20.061 },
00:17:20.061 {
00:17:20.061 "name": "BaseBdev3",
00:17:20.061 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:20.061 "is_configured": false,
00:17:20.061 "data_offset": 0,
00:17:20.061 "data_size": 0
00:17:20.061 }
00:17:20.061 ]
00:17:20.061 }'
00:17:20.061 07:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:17:20.061 07:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:20.628 07:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:17:20.628 [2024-07-12 07:27:54.430687] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:20.628 BaseBdev2
00:17:20.628 07:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2
00:17:20.628 07:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2
00:17:20.628 07:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout=
00:17:20.628 07:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i
00:17:20.628 07:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]]
00:17:20.628 07:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000
00:17:20.628 07:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:20.887 07:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:17:21.146 [
00:17:21.146 {
00:17:21.146 "name": "BaseBdev2",
00:17:21.146 "aliases": [
00:17:21.146 "30b4f779-9b7c-4e83-ac1e-8332c74a376e"
00:17:21.146 ],
00:17:21.146 "product_name": "Malloc disk",
00:17:21.146 "block_size": 512,
00:17:21.146 "num_blocks": 65536,
00:17:21.146 "uuid": "30b4f779-9b7c-4e83-ac1e-8332c74a376e",
00:17:21.146 "assigned_rate_limits": {
00:17:21.146 "rw_ios_per_sec": 0,
00:17:21.146 "rw_mbytes_per_sec": 0,
00:17:21.146 "r_mbytes_per_sec": 0,
00:17:21.146 "w_mbytes_per_sec": 0
00:17:21.146 },
00:17:21.146 "claimed": true,
00:17:21.146 "claim_type": "exclusive_write",
00:17:21.146 "zoned": false,
00:17:21.146 "supported_io_types": {
00:17:21.146 "read": true,
00:17:21.146 "write": true,
00:17:21.146 "unmap": true,
00:17:21.146 "write_zeroes": true,
00:17:21.146 "flush": true,
00:17:21.146 "reset": true,
00:17:21.146 "compare": false,
00:17:21.146 "compare_and_write": false,
00:17:21.146 "abort": true,
00:17:21.146 "nvme_admin": false,
00:17:21.146 "nvme_io": false
00:17:21.146 },
00:17:21.146 "memory_domains": [
00:17:21.146 {
00:17:21.146 "dma_device_id": "system",
00:17:21.146 "dma_device_type": 1
00:17:21.146 },
00:17:21.146 {
00:17:21.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:21.146 "dma_device_type": 2
00:17:21.146 }
00:17:21.146 ],
00:17:21.146 "driver_specific": {}
00:17:21.146 }
00:17:21.146 ]
00:17:21.146 07:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0
00:17:21.146 07:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:17:21.146 07:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:17:21.146 07:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:17:21.146 07:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:17:21.146 07:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:17:21.146 07:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:17:21.146 07:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:17:21.146 07:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:17:21.146 07:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:17:21.146 07:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:17:21.146 07:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:17:21.146 07:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:17:21.146 07:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:21.146 07:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:21.404 07:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:17:21.404 "name": "Existed_Raid",
00:17:21.404 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:21.404 "strip_size_kb": 64,
00:17:21.404 "state": "configuring",
00:17:21.404 "raid_level": "raid0",
00:17:21.404 "superblock": false,
00:17:21.404 "num_base_bdevs": 3,
00:17:21.404 "num_base_bdevs_discovered": 2,
00:17:21.404 "num_base_bdevs_operational": 3,
00:17:21.404 "base_bdevs_list": [
00:17:21.404 {
00:17:21.404 "name": "BaseBdev1",
00:17:21.404 "uuid": "321c5d04-849e-457e-884d-e1720ab55282",
00:17:21.404 "is_configured": true,
00:17:21.404 "data_offset": 0,
00:17:21.404 "data_size": 65536
00:17:21.404 },
00:17:21.404 {
00:17:21.404 "name": "BaseBdev2",
00:17:21.404 "uuid": "30b4f779-9b7c-4e83-ac1e-8332c74a376e",
00:17:21.404 "is_configured": true,
00:17:21.404 "data_offset": 0,
00:17:21.404 "data_size": 65536
00:17:21.404 },
00:17:21.404 {
00:17:21.404 "name": "BaseBdev3",
00:17:21.404 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:21.404 "is_configured": false,
00:17:21.404 "data_offset": 0,
00:17:21.404 "data_size": 0
00:17:21.404 }
00:17:21.404 ]
00:17:21.405 }'
00:17:21.405 07:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:17:21.405 07:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:17:21.971 07:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:17:22.232 [2024-07-12 07:27:56.048628] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:17:22.232 [2024-07-12 07:27:56.048695] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080
00:17:22.232 [2024-07-12 07:27:56.048704] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:17:22.232 [2024-07-12 07:27:56.048863] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050
00:17:22.232 [2024-07-12 07:27:56.049241] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080
00:17:22.232 [2024-07-12 07:27:56.049261] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080
00:17:22.232 [2024-07-12 07:27:56.049545] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:22.232 BaseBdev3
00:17:22.232 07:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3
00:17:22.232 07:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3
00:17:22.232 07:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout=
00:17:22.232 07:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i
00:17:22.232 07:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]]
00:17:22.232 07:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000
00:17:22.232 07:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:22.491 07:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:17:22.750 [
00:17:22.750 {
00:17:22.750 "name": "BaseBdev3",
00:17:22.750 "aliases": [
00:17:22.750 "d2bbea86-c1a5-461d-9a0b-7213813a7048"
00:17:22.750 ],
00:17:22.750 "product_name": "Malloc disk",
00:17:22.750 "block_size": 512,
00:17:22.750 "num_blocks": 65536,
00:17:22.750 "uuid": "d2bbea86-c1a5-461d-9a0b-7213813a7048",
00:17:22.750 "assigned_rate_limits": {
00:17:22.750 "rw_ios_per_sec": 0,
00:17:22.750 "rw_mbytes_per_sec": 0,
00:17:22.750 "r_mbytes_per_sec": 0,
00:17:22.750 "w_mbytes_per_sec": 0
00:17:22.750 },
00:17:22.750 "claimed": true,
00:17:22.750 "claim_type": "exclusive_write",
00:17:22.750 "zoned": false,
00:17:22.750 "supported_io_types": {
00:17:22.750 "read": true,
00:17:22.750 "write": true,
00:17:22.750 "unmap": true,
00:17:22.750 "write_zeroes": true,
00:17:22.750 "flush": true,
00:17:22.750 "reset": true,
00:17:22.750 "compare": false,
00:17:22.750 "compare_and_write": false,
00:17:22.750 "abort": true,
00:17:22.750 "nvme_admin": false,
00:17:22.750 "nvme_io": false
00:17:22.750 },
00:17:22.750 "memory_domains": [
00:17:22.750 {
00:17:22.750 "dma_device_id": "system",
00:17:22.750 "dma_device_type": 1
00:17:22.750 },
00:17:22.750 {
00:17:22.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:22.750 "dma_device_type": 2
00:17:22.750 }
00:17:22.750 ],
00:17:22.750 "driver_specific": {}
00:17:22.750 }
00:17:22.750 ]
00:17:22.750 07:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0
00:17:22.750 07:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:17:22.750 07:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:17:22.750 07:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:17:22.750 07:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:17:22.750 07:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:17:22.750 07:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:17:22.750 07:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:17:22.750 07:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:17:22.751 07:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:17:22.751 07:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:17:22.751 07:27:56 bdev_raid.raid_state_function_test --
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:22.751 07:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:22.751 07:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.751 07:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.010 07:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:23.010 "name": "Existed_Raid", 00:17:23.010 "uuid": "c4d8bad3-aa27-4507-9f6e-7876593dab60", 00:17:23.010 "strip_size_kb": 64, 00:17:23.010 "state": "online", 00:17:23.010 "raid_level": "raid0", 00:17:23.010 "superblock": false, 00:17:23.010 "num_base_bdevs": 3, 00:17:23.010 "num_base_bdevs_discovered": 3, 00:17:23.010 "num_base_bdevs_operational": 3, 00:17:23.010 "base_bdevs_list": [ 00:17:23.010 { 00:17:23.010 "name": "BaseBdev1", 00:17:23.010 "uuid": "321c5d04-849e-457e-884d-e1720ab55282", 00:17:23.010 "is_configured": true, 00:17:23.010 "data_offset": 0, 00:17:23.010 "data_size": 65536 00:17:23.010 }, 00:17:23.010 { 00:17:23.010 "name": "BaseBdev2", 00:17:23.010 "uuid": "30b4f779-9b7c-4e83-ac1e-8332c74a376e", 00:17:23.010 "is_configured": true, 00:17:23.010 "data_offset": 0, 00:17:23.010 "data_size": 65536 00:17:23.010 }, 00:17:23.010 { 00:17:23.010 "name": "BaseBdev3", 00:17:23.010 "uuid": "d2bbea86-c1a5-461d-9a0b-7213813a7048", 00:17:23.010 "is_configured": true, 00:17:23.010 "data_offset": 0, 00:17:23.010 "data_size": 65536 00:17:23.010 } 00:17:23.010 ] 00:17:23.010 }' 00:17:23.010 07:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:23.010 07:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.578 07:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:23.578 07:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:23.578 07:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:23.578 07:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:23.578 07:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:23.578 07:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:23.578 07:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:23.578 07:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:23.836 [2024-07-12 07:27:57.629293] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:23.836 07:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:23.836 "name": "Existed_Raid", 00:17:23.836 "aliases": [ 00:17:23.836 "c4d8bad3-aa27-4507-9f6e-7876593dab60" 00:17:23.836 ], 00:17:23.836 "product_name": "Raid Volume", 00:17:23.836 "block_size": 512, 00:17:23.836 "num_blocks": 196608, 00:17:23.836 "uuid": "c4d8bad3-aa27-4507-9f6e-7876593dab60", 00:17:23.836 "assigned_rate_limits": { 00:17:23.836 "rw_ios_per_sec": 0, 00:17:23.836 "rw_mbytes_per_sec": 0, 00:17:23.836 "r_mbytes_per_sec": 0, 00:17:23.836 "w_mbytes_per_sec": 0 
00:17:23.836 }, 00:17:23.836 "claimed": false, 00:17:23.836 "zoned": false, 00:17:23.836 "supported_io_types": { 00:17:23.836 "read": true, 00:17:23.836 "write": true, 00:17:23.836 "unmap": true, 00:17:23.836 "write_zeroes": true, 00:17:23.836 "flush": true, 00:17:23.836 "reset": true, 00:17:23.836 "compare": false, 00:17:23.836 "compare_and_write": false, 00:17:23.836 "abort": false, 00:17:23.836 "nvme_admin": false, 00:17:23.836 "nvme_io": false 00:17:23.836 }, 00:17:23.836 "memory_domains": [ 00:17:23.836 { 00:17:23.836 "dma_device_id": "system", 00:17:23.836 "dma_device_type": 1 00:17:23.836 }, 00:17:23.836 { 00:17:23.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.836 "dma_device_type": 2 00:17:23.836 }, 00:17:23.836 { 00:17:23.836 "dma_device_id": "system", 00:17:23.836 "dma_device_type": 1 00:17:23.836 }, 00:17:23.836 { 00:17:23.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.836 "dma_device_type": 2 00:17:23.836 }, 00:17:23.836 { 00:17:23.836 "dma_device_id": "system", 00:17:23.836 "dma_device_type": 1 00:17:23.836 }, 00:17:23.836 { 00:17:23.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.836 "dma_device_type": 2 00:17:23.836 } 00:17:23.836 ], 00:17:23.836 "driver_specific": { 00:17:23.836 "raid": { 00:17:23.836 "uuid": "c4d8bad3-aa27-4507-9f6e-7876593dab60", 00:17:23.836 "strip_size_kb": 64, 00:17:23.836 "state": "online", 00:17:23.836 "raid_level": "raid0", 00:17:23.836 "superblock": false, 00:17:23.836 "num_base_bdevs": 3, 00:17:23.836 "num_base_bdevs_discovered": 3, 00:17:23.836 "num_base_bdevs_operational": 3, 00:17:23.836 "base_bdevs_list": [ 00:17:23.836 { 00:17:23.836 "name": "BaseBdev1", 00:17:23.836 "uuid": "321c5d04-849e-457e-884d-e1720ab55282", 00:17:23.836 "is_configured": true, 00:17:23.836 "data_offset": 0, 00:17:23.836 "data_size": 65536 00:17:23.836 }, 00:17:23.836 { 00:17:23.836 "name": "BaseBdev2", 00:17:23.836 "uuid": "30b4f779-9b7c-4e83-ac1e-8332c74a376e", 00:17:23.836 "is_configured": true, 00:17:23.836 "data_offset": 0, 00:17:23.836 "data_size": 65536 00:17:23.836 }, 00:17:23.836 { 00:17:23.836 "name": "BaseBdev3", 00:17:23.836 "uuid": "d2bbea86-c1a5-461d-9a0b-7213813a7048", 00:17:23.836 "is_configured": true, 00:17:23.836 "data_offset": 0, 00:17:23.836 "data_size": 65536 00:17:23.836 } 00:17:23.836 ] 00:17:23.836 } 00:17:23.836 } 00:17:23.836 }' 00:17:23.836 07:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:23.836 07:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:23.836 BaseBdev2 00:17:23.837 BaseBdev3' 00:17:23.837 07:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:23.837 07:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:23.837 07:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:24.095 07:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:24.095 "name": "BaseBdev1", 00:17:24.095 "aliases": [ 00:17:24.095 "321c5d04-849e-457e-884d-e1720ab55282" 00:17:24.095 ], 00:17:24.095 "product_name": "Malloc disk", 00:17:24.095 "block_size": 512, 00:17:24.095 "num_blocks": 65536, 00:17:24.095 "uuid": "321c5d04-849e-457e-884d-e1720ab55282", 00:17:24.095 "assigned_rate_limits": { 00:17:24.095 "rw_ios_per_sec": 0, 
00:17:24.095 "rw_mbytes_per_sec": 0, 00:17:24.095 "r_mbytes_per_sec": 0, 00:17:24.095 "w_mbytes_per_sec": 0 00:17:24.095 }, 00:17:24.095 "claimed": true, 00:17:24.095 "claim_type": "exclusive_write", 00:17:24.095 "zoned": false, 00:17:24.095 "supported_io_types": { 00:17:24.095 "read": true, 00:17:24.095 "write": true, 00:17:24.095 "unmap": true, 00:17:24.095 "write_zeroes": true, 00:17:24.095 "flush": true, 00:17:24.095 "reset": true, 00:17:24.095 "compare": false, 00:17:24.095 "compare_and_write": false, 00:17:24.095 "abort": true, 00:17:24.095 "nvme_admin": false, 00:17:24.095 "nvme_io": false 00:17:24.095 }, 00:17:24.095 "memory_domains": [ 00:17:24.095 { 00:17:24.095 "dma_device_id": "system", 00:17:24.095 "dma_device_type": 1 00:17:24.095 }, 00:17:24.095 { 00:17:24.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.095 "dma_device_type": 2 00:17:24.095 } 00:17:24.095 ], 00:17:24.095 "driver_specific": {} 00:17:24.095 }' 00:17:24.095 07:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:24.353 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:24.353 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:24.353 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:24.353 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:24.353 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:24.353 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:24.353 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:24.353 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:24.353 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:24.612 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:24.612 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:24.612 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:24.612 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:24.612 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:24.869 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:24.869 "name": "BaseBdev2", 00:17:24.869 "aliases": [ 00:17:24.869 "30b4f779-9b7c-4e83-ac1e-8332c74a376e" 00:17:24.869 ], 00:17:24.869 "product_name": "Malloc disk", 00:17:24.869 "block_size": 512, 00:17:24.869 "num_blocks": 65536, 00:17:24.869 "uuid": "30b4f779-9b7c-4e83-ac1e-8332c74a376e", 00:17:24.869 "assigned_rate_limits": { 00:17:24.869 "rw_ios_per_sec": 0, 00:17:24.869 "rw_mbytes_per_sec": 0, 00:17:24.869 "r_mbytes_per_sec": 0, 00:17:24.869 "w_mbytes_per_sec": 0 00:17:24.869 }, 00:17:24.869 "claimed": true, 00:17:24.869 "claim_type": "exclusive_write", 00:17:24.869 "zoned": false, 00:17:24.869 "supported_io_types": { 00:17:24.869 "read": true, 00:17:24.869 "write": true, 00:17:24.869 "unmap": true, 00:17:24.869 "write_zeroes": true, 00:17:24.869 "flush": true, 00:17:24.869 "reset": true, 00:17:24.869 "compare": false, 00:17:24.869 
"compare_and_write": false, 00:17:24.869 "abort": true, 00:17:24.869 "nvme_admin": false, 00:17:24.869 "nvme_io": false 00:17:24.869 }, 00:17:24.869 "memory_domains": [ 00:17:24.869 { 00:17:24.869 "dma_device_id": "system", 00:17:24.869 "dma_device_type": 1 00:17:24.869 }, 00:17:24.869 { 00:17:24.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.869 "dma_device_type": 2 00:17:24.869 } 00:17:24.869 ], 00:17:24.869 "driver_specific": {} 00:17:24.869 }' 00:17:24.869 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:24.869 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:24.869 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:24.869 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:24.869 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:25.128 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:25.128 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:25.128 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:25.128 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:25.128 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:25.128 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:25.128 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:25.128 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:25.128 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:25.128 07:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:25.387 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:25.387 "name": "BaseBdev3", 00:17:25.387 "aliases": [ 00:17:25.387 "d2bbea86-c1a5-461d-9a0b-7213813a7048" 00:17:25.387 ], 00:17:25.387 "product_name": "Malloc disk", 00:17:25.387 "block_size": 512, 00:17:25.387 "num_blocks": 65536, 00:17:25.387 "uuid": "d2bbea86-c1a5-461d-9a0b-7213813a7048", 00:17:25.387 "assigned_rate_limits": { 00:17:25.387 "rw_ios_per_sec": 0, 00:17:25.387 "rw_mbytes_per_sec": 0, 00:17:25.387 "r_mbytes_per_sec": 0, 00:17:25.387 "w_mbytes_per_sec": 0 00:17:25.387 }, 00:17:25.387 "claimed": true, 00:17:25.387 "claim_type": "exclusive_write", 00:17:25.387 "zoned": false, 00:17:25.387 "supported_io_types": { 00:17:25.387 "read": true, 00:17:25.387 "write": true, 00:17:25.387 "unmap": true, 00:17:25.387 "write_zeroes": true, 00:17:25.387 "flush": true, 00:17:25.387 "reset": true, 00:17:25.387 "compare": false, 00:17:25.387 "compare_and_write": false, 00:17:25.387 "abort": true, 00:17:25.387 "nvme_admin": false, 00:17:25.387 "nvme_io": false 00:17:25.387 }, 00:17:25.387 "memory_domains": [ 00:17:25.387 { 00:17:25.387 "dma_device_id": "system", 00:17:25.387 "dma_device_type": 1 00:17:25.387 }, 00:17:25.387 { 00:17:25.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.387 "dma_device_type": 2 00:17:25.387 } 00:17:25.387 ], 00:17:25.387 "driver_specific": {} 00:17:25.387 }' 00:17:25.387 07:27:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:25.387 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:25.387 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:25.387 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:25.646 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:25.646 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:25.646 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:25.646 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:25.646 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:25.646 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:25.646 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:25.646 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:25.646 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:26.212 [2024-07-12 07:27:59.826198] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:26.212 [2024-07-12 07:27:59.826250] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:26.212 [2024-07-12 07:27:59.826347] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.212 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:26.212 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:17:26.212 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:26.212 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:26.212 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:26.212 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:17:26.212 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:26.212 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:26.212 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:26.212 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:26.212 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:26.212 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:26.212 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:26.212 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:26.212 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:26.212 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.212 07:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.469 07:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:26.469 "name": "Existed_Raid", 00:17:26.469 "uuid": "c4d8bad3-aa27-4507-9f6e-7876593dab60", 00:17:26.469 "strip_size_kb": 64, 00:17:26.469 "state": "offline", 00:17:26.469 "raid_level": "raid0", 00:17:26.469 "superblock": false, 00:17:26.469 "num_base_bdevs": 3, 00:17:26.469 "num_base_bdevs_discovered": 2, 00:17:26.469 "num_base_bdevs_operational": 2, 00:17:26.469 "base_bdevs_list": [ 00:17:26.469 { 00:17:26.469 "name": null, 00:17:26.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.469 "is_configured": false, 00:17:26.469 "data_offset": 0, 00:17:26.469 "data_size": 65536 00:17:26.469 }, 00:17:26.469 { 00:17:26.469 "name": "BaseBdev2", 00:17:26.469 "uuid": "30b4f779-9b7c-4e83-ac1e-8332c74a376e", 00:17:26.469 "is_configured": true, 00:17:26.469 "data_offset": 0, 00:17:26.469 "data_size": 65536 00:17:26.469 }, 00:17:26.469 { 00:17:26.469 "name": "BaseBdev3", 00:17:26.469 "uuid": "d2bbea86-c1a5-461d-9a0b-7213813a7048", 00:17:26.469 "is_configured": true, 00:17:26.469 "data_offset": 0, 00:17:26.469 "data_size": 65536 00:17:26.469 } 00:17:26.469 ] 00:17:26.469 }' 00:17:26.469 07:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:26.469 07:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.035 07:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:27.035 07:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:27.035 07:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.035 07:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:27.035 07:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:27.035 07:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:27.035 07:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:27.293 [2024-07-12 07:28:01.110873] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:27.293 07:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:27.293 07:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:27.293 07:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.293 07:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:27.552 07:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:27.552 07:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:27.552 07:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:27.815 
[2024-07-12 07:28:01.598981] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:27.815 [2024-07-12 07:28:01.599052] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:17:27.815 07:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:27.815 07:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:27.815 07:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.815 07:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:28.097 07:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:28.097 07:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:28.097 07:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:17:28.097 07:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:28.097 07:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:28.097 07:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:28.356 BaseBdev2 00:17:28.356 07:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:28.356 07:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:17:28.356 07:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:28.356 07:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:28.356 07:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:28.356 07:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:28.356 07:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:28.614 07:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:28.614 [ 00:17:28.614 { 00:17:28.614 "name": "BaseBdev2", 00:17:28.614 "aliases": [ 00:17:28.614 "1f95b94d-d1d7-41b5-b466-a6187e34d869" 00:17:28.614 ], 00:17:28.614 "product_name": "Malloc disk", 00:17:28.614 "block_size": 512, 00:17:28.614 "num_blocks": 65536, 00:17:28.614 "uuid": "1f95b94d-d1d7-41b5-b466-a6187e34d869", 00:17:28.614 "assigned_rate_limits": { 00:17:28.614 "rw_ios_per_sec": 0, 00:17:28.614 "rw_mbytes_per_sec": 0, 00:17:28.614 "r_mbytes_per_sec": 0, 00:17:28.614 "w_mbytes_per_sec": 0 00:17:28.614 }, 00:17:28.614 "claimed": false, 00:17:28.614 "zoned": false, 00:17:28.614 "supported_io_types": { 00:17:28.614 "read": true, 00:17:28.614 "write": true, 00:17:28.614 "unmap": true, 00:17:28.614 "write_zeroes": true, 00:17:28.614 "flush": true, 00:17:28.614 "reset": true, 00:17:28.614 "compare": false, 00:17:28.614 "compare_and_write": false, 00:17:28.614 "abort": true, 00:17:28.614 "nvme_admin": false, 00:17:28.614 "nvme_io": false 00:17:28.614 }, 00:17:28.614 
"memory_domains": [ 00:17:28.614 { 00:17:28.614 "dma_device_id": "system", 00:17:28.614 "dma_device_type": 1 00:17:28.614 }, 00:17:28.614 { 00:17:28.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.614 "dma_device_type": 2 00:17:28.614 } 00:17:28.614 ], 00:17:28.614 "driver_specific": {} 00:17:28.614 } 00:17:28.614 ] 00:17:28.614 07:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:28.614 07:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:28.614 07:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:28.614 07:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:28.873 BaseBdev3 00:17:29.132 07:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:29.132 07:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:17:29.132 07:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:29.132 07:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:29.132 07:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:29.132 07:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:29.132 07:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:29.132 07:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:29.390 [ 00:17:29.390 { 00:17:29.390 "name": "BaseBdev3", 00:17:29.390 "aliases": [ 00:17:29.390 "afdccb7b-14fb-47f0-a1fb-58fa95d554eb" 00:17:29.390 ], 00:17:29.390 "product_name": "Malloc disk", 00:17:29.390 "block_size": 512, 00:17:29.390 "num_blocks": 65536, 00:17:29.390 "uuid": "afdccb7b-14fb-47f0-a1fb-58fa95d554eb", 00:17:29.390 "assigned_rate_limits": { 00:17:29.390 "rw_ios_per_sec": 0, 00:17:29.390 "rw_mbytes_per_sec": 0, 00:17:29.390 "r_mbytes_per_sec": 0, 00:17:29.390 "w_mbytes_per_sec": 0 00:17:29.390 }, 00:17:29.390 "claimed": false, 00:17:29.390 "zoned": false, 00:17:29.390 "supported_io_types": { 00:17:29.390 "read": true, 00:17:29.390 "write": true, 00:17:29.390 "unmap": true, 00:17:29.390 "write_zeroes": true, 00:17:29.390 "flush": true, 00:17:29.390 "reset": true, 00:17:29.390 "compare": false, 00:17:29.390 "compare_and_write": false, 00:17:29.390 "abort": true, 00:17:29.390 "nvme_admin": false, 00:17:29.390 "nvme_io": false 00:17:29.390 }, 00:17:29.390 "memory_domains": [ 00:17:29.391 { 00:17:29.391 "dma_device_id": "system", 00:17:29.391 "dma_device_type": 1 00:17:29.391 }, 00:17:29.391 { 00:17:29.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.391 "dma_device_type": 2 00:17:29.391 } 00:17:29.391 ], 00:17:29.391 "driver_specific": {} 00:17:29.391 } 00:17:29.391 ] 00:17:29.391 07:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:29.391 07:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:29.391 07:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
00:17:29.391 07:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:29.648 [2024-07-12 07:28:03.357722] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:29.648 [2024-07-12 07:28:03.357851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:29.648 [2024-07-12 07:28:03.357887] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:29.648 [2024-07-12 07:28:03.360423] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:29.648 07:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:29.648 07:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:29.648 07:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:29.648 07:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:29.648 07:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:29.648 07:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:29.648 07:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:29.648 07:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:29.648 07:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:29.648 07:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:29.648 07:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.648 07:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.906 07:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:29.906 "name": "Existed_Raid", 00:17:29.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.906 "strip_size_kb": 64, 00:17:29.906 "state": "configuring", 00:17:29.906 "raid_level": "raid0", 00:17:29.906 "superblock": false, 00:17:29.906 "num_base_bdevs": 3, 00:17:29.906 "num_base_bdevs_discovered": 2, 00:17:29.906 "num_base_bdevs_operational": 3, 00:17:29.906 "base_bdevs_list": [ 00:17:29.906 { 00:17:29.906 "name": "BaseBdev1", 00:17:29.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.906 "is_configured": false, 00:17:29.906 "data_offset": 0, 00:17:29.906 "data_size": 0 00:17:29.906 }, 00:17:29.906 { 00:17:29.906 "name": "BaseBdev2", 00:17:29.906 "uuid": "1f95b94d-d1d7-41b5-b466-a6187e34d869", 00:17:29.906 "is_configured": true, 00:17:29.906 "data_offset": 0, 00:17:29.906 "data_size": 65536 00:17:29.906 }, 00:17:29.906 { 00:17:29.906 "name": "BaseBdev3", 00:17:29.906 "uuid": "afdccb7b-14fb-47f0-a1fb-58fa95d554eb", 00:17:29.906 "is_configured": true, 00:17:29.906 "data_offset": 0, 00:17:29.906 "data_size": 65536 00:17:29.906 } 00:17:29.906 ] 00:17:29.906 }' 00:17:29.906 07:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:29.906 07:28:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.474 07:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:30.474 [2024-07-12 07:28:04.345861] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:30.733 07:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:30.733 07:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:30.733 07:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:30.733 07:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:30.733 07:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:30.733 07:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:30.733 07:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:30.733 07:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:30.733 07:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:30.733 07:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:30.733 07:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.733 07:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.733 07:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:30.733 "name": "Existed_Raid", 00:17:30.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.733 "strip_size_kb": 64, 00:17:30.733 "state": "configuring", 00:17:30.733 "raid_level": "raid0", 00:17:30.733 "superblock": false, 00:17:30.733 "num_base_bdevs": 3, 00:17:30.733 "num_base_bdevs_discovered": 1, 00:17:30.733 "num_base_bdevs_operational": 3, 00:17:30.733 "base_bdevs_list": [ 00:17:30.733 { 00:17:30.733 "name": "BaseBdev1", 00:17:30.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.733 "is_configured": false, 00:17:30.733 "data_offset": 0, 00:17:30.733 "data_size": 0 00:17:30.733 }, 00:17:30.733 { 00:17:30.733 "name": null, 00:17:30.733 "uuid": "1f95b94d-d1d7-41b5-b466-a6187e34d869", 00:17:30.733 "is_configured": false, 00:17:30.733 "data_offset": 0, 00:17:30.733 "data_size": 65536 00:17:30.733 }, 00:17:30.733 { 00:17:30.733 "name": "BaseBdev3", 00:17:30.733 "uuid": "afdccb7b-14fb-47f0-a1fb-58fa95d554eb", 00:17:30.733 "is_configured": true, 00:17:30.733 "data_offset": 0, 00:17:30.733 "data_size": 65536 00:17:30.733 } 00:17:30.733 ] 00:17:30.733 }' 00:17:30.733 07:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:30.733 07:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.301 07:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:31.301 07:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:17:31.559 07:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:31.559 07:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:31.818 [2024-07-12 07:28:05.612036] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:31.818 BaseBdev1 00:17:31.818 07:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:31.818 07:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:17:31.818 07:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:31.818 07:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:31.818 07:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:31.818 07:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:31.818 07:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:32.077 07:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:32.335 [ 00:17:32.335 { 00:17:32.335 "name": "BaseBdev1", 00:17:32.335 "aliases": [ 00:17:32.335 "25e9f4da-ff2a-4baa-baef-2a7db2f57509" 00:17:32.335 ], 00:17:32.335 "product_name": "Malloc disk", 00:17:32.335 "block_size": 512, 00:17:32.335 "num_blocks": 65536, 00:17:32.335 "uuid": "25e9f4da-ff2a-4baa-baef-2a7db2f57509", 00:17:32.335 "assigned_rate_limits": { 00:17:32.335 "rw_ios_per_sec": 0, 00:17:32.335 "rw_mbytes_per_sec": 0, 00:17:32.335 "r_mbytes_per_sec": 0, 00:17:32.335 "w_mbytes_per_sec": 0 00:17:32.335 }, 00:17:32.335 "claimed": true, 00:17:32.335 "claim_type": "exclusive_write", 00:17:32.335 "zoned": false, 00:17:32.335 "supported_io_types": { 00:17:32.335 "read": true, 00:17:32.335 "write": true, 00:17:32.335 "unmap": true, 00:17:32.335 "write_zeroes": true, 00:17:32.335 "flush": true, 00:17:32.335 "reset": true, 00:17:32.335 "compare": false, 00:17:32.335 "compare_and_write": false, 00:17:32.335 "abort": true, 00:17:32.335 "nvme_admin": false, 00:17:32.335 "nvme_io": false 00:17:32.335 }, 00:17:32.335 "memory_domains": [ 00:17:32.335 { 00:17:32.335 "dma_device_id": "system", 00:17:32.335 "dma_device_type": 1 00:17:32.335 }, 00:17:32.335 { 00:17:32.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.335 "dma_device_type": 2 00:17:32.335 } 00:17:32.335 ], 00:17:32.335 "driver_specific": {} 00:17:32.335 } 00:17:32.335 ] 00:17:32.335 07:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:32.335 07:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:32.335 07:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:32.335 07:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:32.335 07:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:32.335 07:28:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:32.335 07:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:32.335 07:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:32.335 07:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:32.335 07:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:32.335 07:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:32.335 07:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.335 07:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.594 07:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:32.594 "name": "Existed_Raid", 00:17:32.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.594 "strip_size_kb": 64, 00:17:32.594 "state": "configuring", 00:17:32.594 "raid_level": "raid0", 00:17:32.594 "superblock": false, 00:17:32.594 "num_base_bdevs": 3, 00:17:32.594 "num_base_bdevs_discovered": 2, 00:17:32.594 "num_base_bdevs_operational": 3, 00:17:32.594 "base_bdevs_list": [ 00:17:32.594 { 00:17:32.594 "name": "BaseBdev1", 00:17:32.594 "uuid": "25e9f4da-ff2a-4baa-baef-2a7db2f57509", 00:17:32.594 "is_configured": true, 00:17:32.594 "data_offset": 0, 00:17:32.594 "data_size": 65536 00:17:32.594 }, 00:17:32.594 { 00:17:32.594 "name": null, 00:17:32.594 "uuid": "1f95b94d-d1d7-41b5-b466-a6187e34d869", 00:17:32.594 "is_configured": false, 00:17:32.594 "data_offset": 0, 00:17:32.594 "data_size": 65536 00:17:32.594 }, 00:17:32.594 { 00:17:32.594 "name": "BaseBdev3", 00:17:32.594 "uuid": "afdccb7b-14fb-47f0-a1fb-58fa95d554eb", 00:17:32.594 "is_configured": true, 00:17:32.594 "data_offset": 0, 00:17:32.594 "data_size": 65536 00:17:32.594 } 00:17:32.594 ] 00:17:32.594 }' 00:17:32.594 07:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:32.594 07:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.162 07:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.162 07:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:33.421 07:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:33.421 07:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:33.679 [2024-07-12 07:28:07.386164] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:33.679 07:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:33.679 07:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:33.679 07:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:33.679 07:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid0 00:17:33.679 07:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:33.679 07:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:33.679 07:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:33.679 07:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:33.679 07:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:33.679 07:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:33.679 07:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.679 07:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.938 07:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:33.938 "name": "Existed_Raid", 00:17:33.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.938 "strip_size_kb": 64, 00:17:33.938 "state": "configuring", 00:17:33.938 "raid_level": "raid0", 00:17:33.938 "superblock": false, 00:17:33.938 "num_base_bdevs": 3, 00:17:33.938 "num_base_bdevs_discovered": 1, 00:17:33.938 "num_base_bdevs_operational": 3, 00:17:33.938 "base_bdevs_list": [ 00:17:33.938 { 00:17:33.938 "name": "BaseBdev1", 00:17:33.938 "uuid": "25e9f4da-ff2a-4baa-baef-2a7db2f57509", 00:17:33.938 "is_configured": true, 00:17:33.938 "data_offset": 0, 00:17:33.938 "data_size": 65536 00:17:33.938 }, 00:17:33.938 { 00:17:33.938 "name": null, 00:17:33.938 "uuid": "1f95b94d-d1d7-41b5-b466-a6187e34d869", 00:17:33.938 "is_configured": false, 00:17:33.938 "data_offset": 0, 00:17:33.938 "data_size": 65536 00:17:33.938 }, 00:17:33.938 { 00:17:33.938 "name": null, 00:17:33.938 "uuid": "afdccb7b-14fb-47f0-a1fb-58fa95d554eb", 00:17:33.938 "is_configured": false, 00:17:33.938 "data_offset": 0, 00:17:33.938 "data_size": 65536 00:17:33.938 } 00:17:33.938 ] 00:17:33.938 }' 00:17:33.938 07:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:33.938 07:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.505 07:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:34.505 07:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.764 07:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:34.764 07:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:35.023 [2024-07-12 07:28:08.742397] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:35.023 07:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:35.023 07:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:35.023 07:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:35.023 
07:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:35.023 07:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:35.023 07:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:35.023 07:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:35.023 07:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:35.023 07:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:35.023 07:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:35.023 07:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.023 07:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.281 07:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:35.281 "name": "Existed_Raid", 00:17:35.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.281 "strip_size_kb": 64, 00:17:35.281 "state": "configuring", 00:17:35.281 "raid_level": "raid0", 00:17:35.281 "superblock": false, 00:17:35.281 "num_base_bdevs": 3, 00:17:35.281 "num_base_bdevs_discovered": 2, 00:17:35.281 "num_base_bdevs_operational": 3, 00:17:35.281 "base_bdevs_list": [ 00:17:35.281 { 00:17:35.281 "name": "BaseBdev1", 00:17:35.281 "uuid": "25e9f4da-ff2a-4baa-baef-2a7db2f57509", 00:17:35.281 "is_configured": true, 00:17:35.281 "data_offset": 0, 00:17:35.281 "data_size": 65536 00:17:35.281 }, 00:17:35.281 { 00:17:35.281 "name": null, 00:17:35.281 "uuid": "1f95b94d-d1d7-41b5-b466-a6187e34d869", 00:17:35.281 "is_configured": false, 00:17:35.281 "data_offset": 0, 00:17:35.281 "data_size": 65536 00:17:35.281 }, 00:17:35.281 { 00:17:35.281 "name": "BaseBdev3", 00:17:35.281 "uuid": "afdccb7b-14fb-47f0-a1fb-58fa95d554eb", 00:17:35.281 "is_configured": true, 00:17:35.281 "data_offset": 0, 00:17:35.281 "data_size": 65536 00:17:35.281 } 00:17:35.281 ] 00:17:35.281 }' 00:17:35.281 07:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:35.281 07:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.849 07:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.849 07:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:36.107 07:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:36.107 07:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:36.107 [2024-07-12 07:28:09.978668] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:36.366 07:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:36.366 07:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:36.366 07:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:17:36.366 07:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:36.366 07:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:36.366 07:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:36.366 07:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:36.366 07:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:36.366 07:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:36.366 07:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:36.366 07:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.366 07:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.366 07:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:36.366 "name": "Existed_Raid", 00:17:36.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.366 "strip_size_kb": 64, 00:17:36.366 "state": "configuring", 00:17:36.366 "raid_level": "raid0", 00:17:36.366 "superblock": false, 00:17:36.366 "num_base_bdevs": 3, 00:17:36.366 "num_base_bdevs_discovered": 1, 00:17:36.366 "num_base_bdevs_operational": 3, 00:17:36.366 "base_bdevs_list": [ 00:17:36.366 { 00:17:36.366 "name": null, 00:17:36.366 "uuid": "25e9f4da-ff2a-4baa-baef-2a7db2f57509", 00:17:36.366 "is_configured": false, 00:17:36.366 "data_offset": 0, 00:17:36.366 "data_size": 65536 00:17:36.366 }, 00:17:36.366 { 00:17:36.366 "name": null, 00:17:36.366 "uuid": "1f95b94d-d1d7-41b5-b466-a6187e34d869", 00:17:36.366 "is_configured": false, 00:17:36.366 "data_offset": 0, 00:17:36.366 "data_size": 65536 00:17:36.366 }, 00:17:36.366 { 00:17:36.366 "name": "BaseBdev3", 00:17:36.366 "uuid": "afdccb7b-14fb-47f0-a1fb-58fa95d554eb", 00:17:36.366 "is_configured": true, 00:17:36.366 "data_offset": 0, 00:17:36.366 "data_size": 65536 00:17:36.366 } 00:17:36.366 ] 00:17:36.366 }' 00:17:36.366 07:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:36.366 07:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.301 07:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.301 07:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:37.301 07:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:37.301 07:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:37.560 [2024-07-12 07:28:11.315082] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:37.560 07:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:37.560 07:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:37.560 
07:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:37.560 07:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:37.560 07:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:37.560 07:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:37.560 07:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:37.560 07:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:37.560 07:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:37.560 07:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:37.560 07:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.560 07:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.819 07:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:37.819 "name": "Existed_Raid", 00:17:37.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.819 "strip_size_kb": 64, 00:17:37.819 "state": "configuring", 00:17:37.819 "raid_level": "raid0", 00:17:37.819 "superblock": false, 00:17:37.819 "num_base_bdevs": 3, 00:17:37.819 "num_base_bdevs_discovered": 2, 00:17:37.819 "num_base_bdevs_operational": 3, 00:17:37.819 "base_bdevs_list": [ 00:17:37.819 { 00:17:37.819 "name": null, 00:17:37.819 "uuid": "25e9f4da-ff2a-4baa-baef-2a7db2f57509", 00:17:37.819 "is_configured": false, 00:17:37.819 "data_offset": 0, 00:17:37.819 "data_size": 65536 00:17:37.819 }, 00:17:37.819 { 00:17:37.819 "name": "BaseBdev2", 00:17:37.819 "uuid": "1f95b94d-d1d7-41b5-b466-a6187e34d869", 00:17:37.819 "is_configured": true, 00:17:37.819 "data_offset": 0, 00:17:37.819 "data_size": 65536 00:17:37.819 }, 00:17:37.819 { 00:17:37.819 "name": "BaseBdev3", 00:17:37.819 "uuid": "afdccb7b-14fb-47f0-a1fb-58fa95d554eb", 00:17:37.819 "is_configured": true, 00:17:37.819 "data_offset": 0, 00:17:37.819 "data_size": 65536 00:17:37.819 } 00:17:37.819 ] 00:17:37.819 }' 00:17:37.819 07:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:37.819 07:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.387 07:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.387 07:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:38.650 07:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:38.651 07:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:38.651 07:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.909 07:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 
25e9f4da-ff2a-4baa-baef-2a7db2f57509 00:17:39.167 [2024-07-12 07:28:12.823317] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:39.167 [2024-07-12 07:28:12.823376] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:17:39.167 [2024-07-12 07:28:12.823385] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:39.167 [2024-07-12 07:28:12.823479] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:17:39.167 [2024-07-12 07:28:12.823803] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:17:39.167 [2024-07-12 07:28:12.823832] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:17:39.167 [2024-07-12 07:28:12.824030] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.167 NewBaseBdev 00:17:39.167 07:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:39.167 07:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:17:39.167 07:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:39.167 07:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:39.167 07:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:39.167 07:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:39.167 07:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:39.425 07:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:39.425 [ 00:17:39.425 { 00:17:39.425 "name": "NewBaseBdev", 00:17:39.425 "aliases": [ 00:17:39.425 "25e9f4da-ff2a-4baa-baef-2a7db2f57509" 00:17:39.425 ], 00:17:39.425 "product_name": "Malloc disk", 00:17:39.425 "block_size": 512, 00:17:39.425 "num_blocks": 65536, 00:17:39.425 "uuid": "25e9f4da-ff2a-4baa-baef-2a7db2f57509", 00:17:39.425 "assigned_rate_limits": { 00:17:39.425 "rw_ios_per_sec": 0, 00:17:39.425 "rw_mbytes_per_sec": 0, 00:17:39.425 "r_mbytes_per_sec": 0, 00:17:39.425 "w_mbytes_per_sec": 0 00:17:39.425 }, 00:17:39.425 "claimed": true, 00:17:39.425 "claim_type": "exclusive_write", 00:17:39.425 "zoned": false, 00:17:39.425 "supported_io_types": { 00:17:39.425 "read": true, 00:17:39.425 "write": true, 00:17:39.425 "unmap": true, 00:17:39.425 "write_zeroes": true, 00:17:39.425 "flush": true, 00:17:39.425 "reset": true, 00:17:39.425 "compare": false, 00:17:39.425 "compare_and_write": false, 00:17:39.425 "abort": true, 00:17:39.425 "nvme_admin": false, 00:17:39.425 "nvme_io": false 00:17:39.425 }, 00:17:39.425 "memory_domains": [ 00:17:39.425 { 00:17:39.425 "dma_device_id": "system", 00:17:39.425 "dma_device_type": 1 00:17:39.425 }, 00:17:39.425 { 00:17:39.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.425 "dma_device_type": 2 00:17:39.426 } 00:17:39.426 ], 00:17:39.426 "driver_specific": {} 00:17:39.426 } 00:17:39.426 ] 00:17:39.426 07:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:39.426 07:28:13 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:39.426 07:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:39.426 07:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:39.426 07:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:39.426 07:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:39.426 07:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:39.426 07:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:39.683 07:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:39.683 07:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:39.683 07:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:39.683 07:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.683 07:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.683 07:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:39.683 "name": "Existed_Raid", 00:17:39.683 "uuid": "193f76d8-5946-4256-bc57-804957fde5ab", 00:17:39.683 "strip_size_kb": 64, 00:17:39.683 "state": "online", 00:17:39.683 "raid_level": "raid0", 00:17:39.683 "superblock": false, 00:17:39.683 "num_base_bdevs": 3, 00:17:39.683 "num_base_bdevs_discovered": 3, 00:17:39.683 "num_base_bdevs_operational": 3, 00:17:39.683 "base_bdevs_list": [ 00:17:39.683 { 00:17:39.683 "name": "NewBaseBdev", 00:17:39.683 "uuid": "25e9f4da-ff2a-4baa-baef-2a7db2f57509", 00:17:39.683 "is_configured": true, 00:17:39.683 "data_offset": 0, 00:17:39.683 "data_size": 65536 00:17:39.683 }, 00:17:39.683 { 00:17:39.683 "name": "BaseBdev2", 00:17:39.683 "uuid": "1f95b94d-d1d7-41b5-b466-a6187e34d869", 00:17:39.683 "is_configured": true, 00:17:39.683 "data_offset": 0, 00:17:39.683 "data_size": 65536 00:17:39.683 }, 00:17:39.683 { 00:17:39.683 "name": "BaseBdev3", 00:17:39.683 "uuid": "afdccb7b-14fb-47f0-a1fb-58fa95d554eb", 00:17:39.683 "is_configured": true, 00:17:39.683 "data_offset": 0, 00:17:39.683 "data_size": 65536 00:17:39.683 } 00:17:39.683 ] 00:17:39.683 }' 00:17:39.683 07:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:39.683 07:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.248 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:40.248 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:40.248 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:40.248 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:40.248 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:40.248 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:40.248 07:28:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:40.248 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:40.505 [2024-07-12 07:28:14.291987] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:40.505 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:40.505 "name": "Existed_Raid", 00:17:40.505 "aliases": [ 00:17:40.505 "193f76d8-5946-4256-bc57-804957fde5ab" 00:17:40.505 ], 00:17:40.505 "product_name": "Raid Volume", 00:17:40.505 "block_size": 512, 00:17:40.505 "num_blocks": 196608, 00:17:40.505 "uuid": "193f76d8-5946-4256-bc57-804957fde5ab", 00:17:40.505 "assigned_rate_limits": { 00:17:40.505 "rw_ios_per_sec": 0, 00:17:40.505 "rw_mbytes_per_sec": 0, 00:17:40.505 "r_mbytes_per_sec": 0, 00:17:40.505 "w_mbytes_per_sec": 0 00:17:40.505 }, 00:17:40.505 "claimed": false, 00:17:40.505 "zoned": false, 00:17:40.505 "supported_io_types": { 00:17:40.505 "read": true, 00:17:40.505 "write": true, 00:17:40.505 "unmap": true, 00:17:40.505 "write_zeroes": true, 00:17:40.505 "flush": true, 00:17:40.505 "reset": true, 00:17:40.505 "compare": false, 00:17:40.505 "compare_and_write": false, 00:17:40.505 "abort": false, 00:17:40.505 "nvme_admin": false, 00:17:40.505 "nvme_io": false 00:17:40.505 }, 00:17:40.505 "memory_domains": [ 00:17:40.505 { 00:17:40.506 "dma_device_id": "system", 00:17:40.506 "dma_device_type": 1 00:17:40.506 }, 00:17:40.506 { 00:17:40.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.506 "dma_device_type": 2 00:17:40.506 }, 00:17:40.506 { 00:17:40.506 "dma_device_id": "system", 00:17:40.506 "dma_device_type": 1 00:17:40.506 }, 00:17:40.506 { 00:17:40.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.506 "dma_device_type": 2 00:17:40.506 }, 00:17:40.506 { 00:17:40.506 "dma_device_id": "system", 00:17:40.506 "dma_device_type": 1 00:17:40.506 }, 00:17:40.506 { 00:17:40.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.506 "dma_device_type": 2 00:17:40.506 } 00:17:40.506 ], 00:17:40.506 "driver_specific": { 00:17:40.506 "raid": { 00:17:40.506 "uuid": "193f76d8-5946-4256-bc57-804957fde5ab", 00:17:40.506 "strip_size_kb": 64, 00:17:40.506 "state": "online", 00:17:40.506 "raid_level": "raid0", 00:17:40.506 "superblock": false, 00:17:40.506 "num_base_bdevs": 3, 00:17:40.506 "num_base_bdevs_discovered": 3, 00:17:40.506 "num_base_bdevs_operational": 3, 00:17:40.506 "base_bdevs_list": [ 00:17:40.506 { 00:17:40.506 "name": "NewBaseBdev", 00:17:40.506 "uuid": "25e9f4da-ff2a-4baa-baef-2a7db2f57509", 00:17:40.506 "is_configured": true, 00:17:40.506 "data_offset": 0, 00:17:40.506 "data_size": 65536 00:17:40.506 }, 00:17:40.506 { 00:17:40.506 "name": "BaseBdev2", 00:17:40.506 "uuid": "1f95b94d-d1d7-41b5-b466-a6187e34d869", 00:17:40.506 "is_configured": true, 00:17:40.506 "data_offset": 0, 00:17:40.506 "data_size": 65536 00:17:40.506 }, 00:17:40.506 { 00:17:40.506 "name": "BaseBdev3", 00:17:40.506 "uuid": "afdccb7b-14fb-47f0-a1fb-58fa95d554eb", 00:17:40.506 "is_configured": true, 00:17:40.506 "data_offset": 0, 00:17:40.506 "data_size": 65536 00:17:40.506 } 00:17:40.506 ] 00:17:40.506 } 00:17:40.506 } 00:17:40.506 }' 00:17:40.506 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:40.506 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='NewBaseBdev 00:17:40.506 BaseBdev2 00:17:40.506 BaseBdev3' 00:17:40.506 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:40.506 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:40.506 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:40.764 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:40.764 "name": "NewBaseBdev", 00:17:40.764 "aliases": [ 00:17:40.764 "25e9f4da-ff2a-4baa-baef-2a7db2f57509" 00:17:40.764 ], 00:17:40.764 "product_name": "Malloc disk", 00:17:40.764 "block_size": 512, 00:17:40.764 "num_blocks": 65536, 00:17:40.764 "uuid": "25e9f4da-ff2a-4baa-baef-2a7db2f57509", 00:17:40.764 "assigned_rate_limits": { 00:17:40.764 "rw_ios_per_sec": 0, 00:17:40.764 "rw_mbytes_per_sec": 0, 00:17:40.764 "r_mbytes_per_sec": 0, 00:17:40.764 "w_mbytes_per_sec": 0 00:17:40.764 }, 00:17:40.764 "claimed": true, 00:17:40.764 "claim_type": "exclusive_write", 00:17:40.764 "zoned": false, 00:17:40.764 "supported_io_types": { 00:17:40.764 "read": true, 00:17:40.764 "write": true, 00:17:40.764 "unmap": true, 00:17:40.764 "write_zeroes": true, 00:17:40.764 "flush": true, 00:17:40.764 "reset": true, 00:17:40.764 "compare": false, 00:17:40.764 "compare_and_write": false, 00:17:40.764 "abort": true, 00:17:40.764 "nvme_admin": false, 00:17:40.764 "nvme_io": false 00:17:40.764 }, 00:17:40.764 "memory_domains": [ 00:17:40.764 { 00:17:40.764 "dma_device_id": "system", 00:17:40.764 "dma_device_type": 1 00:17:40.764 }, 00:17:40.764 { 00:17:40.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.764 "dma_device_type": 2 00:17:40.764 } 00:17:40.764 ], 00:17:40.764 "driver_specific": {} 00:17:40.764 }' 00:17:40.764 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:41.023 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:41.023 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:41.023 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:41.023 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:41.023 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:41.023 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:41.023 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:41.023 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:41.023 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:41.281 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:41.281 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:41.281 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:41.281 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:41.281 07:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:41.540 07:28:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:41.540 "name": "BaseBdev2", 00:17:41.540 "aliases": [ 00:17:41.540 "1f95b94d-d1d7-41b5-b466-a6187e34d869" 00:17:41.540 ], 00:17:41.540 "product_name": "Malloc disk", 00:17:41.540 "block_size": 512, 00:17:41.540 "num_blocks": 65536, 00:17:41.540 "uuid": "1f95b94d-d1d7-41b5-b466-a6187e34d869", 00:17:41.540 "assigned_rate_limits": { 00:17:41.540 "rw_ios_per_sec": 0, 00:17:41.540 "rw_mbytes_per_sec": 0, 00:17:41.540 "r_mbytes_per_sec": 0, 00:17:41.540 "w_mbytes_per_sec": 0 00:17:41.540 }, 00:17:41.540 "claimed": true, 00:17:41.540 "claim_type": "exclusive_write", 00:17:41.540 "zoned": false, 00:17:41.540 "supported_io_types": { 00:17:41.540 "read": true, 00:17:41.540 "write": true, 00:17:41.540 "unmap": true, 00:17:41.540 "write_zeroes": true, 00:17:41.541 "flush": true, 00:17:41.541 "reset": true, 00:17:41.541 "compare": false, 00:17:41.541 "compare_and_write": false, 00:17:41.541 "abort": true, 00:17:41.541 "nvme_admin": false, 00:17:41.541 "nvme_io": false 00:17:41.541 }, 00:17:41.541 "memory_domains": [ 00:17:41.541 { 00:17:41.541 "dma_device_id": "system", 00:17:41.541 "dma_device_type": 1 00:17:41.541 }, 00:17:41.541 { 00:17:41.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.541 "dma_device_type": 2 00:17:41.541 } 00:17:41.541 ], 00:17:41.541 "driver_specific": {} 00:17:41.541 }' 00:17:41.541 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:41.541 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:41.541 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:41.541 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:41.541 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:41.541 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:41.541 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:41.800 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:41.800 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:41.800 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:41.800 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:41.800 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:41.800 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:41.800 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:41.800 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:42.060 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:42.060 "name": "BaseBdev3", 00:17:42.060 "aliases": [ 00:17:42.060 "afdccb7b-14fb-47f0-a1fb-58fa95d554eb" 00:17:42.060 ], 00:17:42.060 "product_name": "Malloc disk", 00:17:42.060 "block_size": 512, 00:17:42.060 "num_blocks": 65536, 00:17:42.060 "uuid": "afdccb7b-14fb-47f0-a1fb-58fa95d554eb", 00:17:42.060 "assigned_rate_limits": { 00:17:42.060 "rw_ios_per_sec": 0, 00:17:42.060 "rw_mbytes_per_sec": 0, 
00:17:42.060 "r_mbytes_per_sec": 0, 00:17:42.060 "w_mbytes_per_sec": 0 00:17:42.060 }, 00:17:42.060 "claimed": true, 00:17:42.060 "claim_type": "exclusive_write", 00:17:42.060 "zoned": false, 00:17:42.060 "supported_io_types": { 00:17:42.060 "read": true, 00:17:42.060 "write": true, 00:17:42.060 "unmap": true, 00:17:42.060 "write_zeroes": true, 00:17:42.060 "flush": true, 00:17:42.060 "reset": true, 00:17:42.060 "compare": false, 00:17:42.060 "compare_and_write": false, 00:17:42.060 "abort": true, 00:17:42.060 "nvme_admin": false, 00:17:42.060 "nvme_io": false 00:17:42.060 }, 00:17:42.060 "memory_domains": [ 00:17:42.060 { 00:17:42.060 "dma_device_id": "system", 00:17:42.060 "dma_device_type": 1 00:17:42.060 }, 00:17:42.060 { 00:17:42.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.060 "dma_device_type": 2 00:17:42.060 } 00:17:42.060 ], 00:17:42.060 "driver_specific": {} 00:17:42.060 }' 00:17:42.060 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.060 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.060 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:42.060 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.060 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.060 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:42.060 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.319 07:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.319 07:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:42.319 07:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.319 07:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.319 07:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:42.319 07:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:42.578 [2024-07-12 07:28:16.352144] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:42.578 [2024-07-12 07:28:16.352387] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:42.578 [2024-07-12 07:28:16.352626] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.578 [2024-07-12 07:28:16.352821] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:42.578 [2024-07-12 07:28:16.352928] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:17:42.578 07:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 135574 00:17:42.578 07:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 135574 ']' 00:17:42.578 07:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 135574 00:17:42.578 07:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:17:42.578 07:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
00:17:42.578 07:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 135574 00:17:42.578 killing process with pid 135574 00:17:42.578 07:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:42.578 07:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:42.578 07:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 135574' 00:17:42.578 07:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 135574 00:17:42.578 07:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 135574 00:17:42.578 [2024-07-12 07:28:16.398949] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:42.578 [2024-07-12 07:28:16.459867] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:43.148 00:17:43.148 real 0m28.106s 00:17:43.148 user 0m51.660s 00:17:43.148 sys 0m4.657s 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:43.148 ************************************ 00:17:43.148 END TEST raid_state_function_test 00:17:43.148 ************************************ 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.148 07:28:16 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:17:43.148 07:28:16 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:17:43.148 07:28:16 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:43.148 07:28:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:43.148 ************************************ 00:17:43.148 START TEST raid_state_function_test_sb 00:17:43.148 ************************************ 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 3 true 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:43.148 07:28:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=136532 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 136532' 00:17:43.148 Process raid pid: 136532 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 136532 /var/tmp/spdk-raid.sock 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 136532 ']' 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:43.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:43.148 07:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.407 [2024-07-12 07:28:17.039968] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
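Worth noting before the superblock run gets going: raid_state_function_test_sb is the same state-machine test re-run with superblock=true, so the only functional difference is the -s flag that superblock_create_arg (@238) adds to bdev_raid_create. With a superblock, each 65536-block malloc base bdev contributes only 63488 data blocks, the first 2048 blocks (1 MiB at the 512 B block size) being set aside on every base bdev; that is why the raid0 volume later reports num_blocks 190464 (3 x 63488) rather than the 196608 (3 x 65536) of the non-superblock run, and why its base bdevs show data_offset 2048 instead of 0. The traced create call, for reference:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

The EAL parameter dump for the app start already underway continues directly below.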
00:17:43.407 [2024-07-12 07:28:17.040510] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.407 [2024-07-12 07:28:17.203392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.665 [2024-07-12 07:28:17.303879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.665 [2024-07-12 07:28:17.387536] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:44.230 07:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:44.230 07:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:17:44.230 07:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:44.488 [2024-07-12 07:28:18.221281] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:44.488 [2024-07-12 07:28:18.221672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:44.488 [2024-07-12 07:28:18.221763] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.488 [2024-07-12 07:28:18.221818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.488 [2024-07-12 07:28:18.221908] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:44.488 [2024-07-12 07:28:18.221982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:44.488 07:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:44.488 07:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:44.488 07:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:44.488 07:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:44.488 07:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:44.488 07:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:44.488 07:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:44.488 07:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:44.488 07:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:44.488 07:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:44.488 07:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.488 07:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.746 07:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:44.746 "name": "Existed_Raid", 00:17:44.746 "uuid": 
"4912869f-800c-45a1-87ae-b5110c18eb9b", 00:17:44.746 "strip_size_kb": 64, 00:17:44.746 "state": "configuring", 00:17:44.746 "raid_level": "raid0", 00:17:44.746 "superblock": true, 00:17:44.746 "num_base_bdevs": 3, 00:17:44.746 "num_base_bdevs_discovered": 0, 00:17:44.746 "num_base_bdevs_operational": 3, 00:17:44.746 "base_bdevs_list": [ 00:17:44.746 { 00:17:44.746 "name": "BaseBdev1", 00:17:44.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.746 "is_configured": false, 00:17:44.746 "data_offset": 0, 00:17:44.746 "data_size": 0 00:17:44.746 }, 00:17:44.746 { 00:17:44.746 "name": "BaseBdev2", 00:17:44.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.746 "is_configured": false, 00:17:44.746 "data_offset": 0, 00:17:44.746 "data_size": 0 00:17:44.746 }, 00:17:44.746 { 00:17:44.746 "name": "BaseBdev3", 00:17:44.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.746 "is_configured": false, 00:17:44.746 "data_offset": 0, 00:17:44.746 "data_size": 0 00:17:44.746 } 00:17:44.746 ] 00:17:44.746 }' 00:17:44.746 07:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:44.746 07:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.313 07:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:45.313 [2024-07-12 07:28:19.177263] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:45.313 [2024-07-12 07:28:19.177610] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:17:45.570 07:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:45.570 [2024-07-12 07:28:19.377348] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:45.571 [2024-07-12 07:28:19.377727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:45.571 [2024-07-12 07:28:19.377843] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:45.571 [2024-07-12 07:28:19.377904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:45.571 [2024-07-12 07:28:19.377986] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:45.571 [2024-07-12 07:28:19.378042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:45.571 07:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:45.828 [2024-07-12 07:28:19.593979] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:45.828 BaseBdev1 00:17:45.828 07:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:45.828 07:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:17:45.828 07:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:45.828 07:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 
00:17:45.828 07:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:45.828 07:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:45.828 07:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:46.087 07:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:46.345 [ 00:17:46.345 { 00:17:46.345 "name": "BaseBdev1", 00:17:46.345 "aliases": [ 00:17:46.345 "4478edac-5e0e-4b45-8b72-bb373bec7817" 00:17:46.345 ], 00:17:46.345 "product_name": "Malloc disk", 00:17:46.345 "block_size": 512, 00:17:46.345 "num_blocks": 65536, 00:17:46.345 "uuid": "4478edac-5e0e-4b45-8b72-bb373bec7817", 00:17:46.345 "assigned_rate_limits": { 00:17:46.345 "rw_ios_per_sec": 0, 00:17:46.345 "rw_mbytes_per_sec": 0, 00:17:46.345 "r_mbytes_per_sec": 0, 00:17:46.345 "w_mbytes_per_sec": 0 00:17:46.345 }, 00:17:46.345 "claimed": true, 00:17:46.345 "claim_type": "exclusive_write", 00:17:46.345 "zoned": false, 00:17:46.345 "supported_io_types": { 00:17:46.345 "read": true, 00:17:46.345 "write": true, 00:17:46.345 "unmap": true, 00:17:46.345 "write_zeroes": true, 00:17:46.345 "flush": true, 00:17:46.345 "reset": true, 00:17:46.345 "compare": false, 00:17:46.345 "compare_and_write": false, 00:17:46.345 "abort": true, 00:17:46.345 "nvme_admin": false, 00:17:46.345 "nvme_io": false 00:17:46.345 }, 00:17:46.345 "memory_domains": [ 00:17:46.345 { 00:17:46.345 "dma_device_id": "system", 00:17:46.345 "dma_device_type": 1 00:17:46.345 }, 00:17:46.345 { 00:17:46.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.345 "dma_device_type": 2 00:17:46.345 } 00:17:46.345 ], 00:17:46.345 "driver_specific": {} 00:17:46.345 } 00:17:46.345 ] 00:17:46.345 07:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:46.345 07:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:46.345 07:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:46.345 07:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:46.345 07:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:46.345 07:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:46.345 07:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:46.345 07:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:46.345 07:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:46.345 07:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:46.345 07:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:46.345 07:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.345 07:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:17:46.603 07:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:46.603 "name": "Existed_Raid", 00:17:46.603 "uuid": "8197257c-10c6-493f-8a20-45154491891d", 00:17:46.603 "strip_size_kb": 64, 00:17:46.603 "state": "configuring", 00:17:46.603 "raid_level": "raid0", 00:17:46.603 "superblock": true, 00:17:46.603 "num_base_bdevs": 3, 00:17:46.603 "num_base_bdevs_discovered": 1, 00:17:46.603 "num_base_bdevs_operational": 3, 00:17:46.603 "base_bdevs_list": [ 00:17:46.603 { 00:17:46.603 "name": "BaseBdev1", 00:17:46.603 "uuid": "4478edac-5e0e-4b45-8b72-bb373bec7817", 00:17:46.603 "is_configured": true, 00:17:46.603 "data_offset": 2048, 00:17:46.603 "data_size": 63488 00:17:46.603 }, 00:17:46.603 { 00:17:46.603 "name": "BaseBdev2", 00:17:46.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.603 "is_configured": false, 00:17:46.603 "data_offset": 0, 00:17:46.603 "data_size": 0 00:17:46.603 }, 00:17:46.603 { 00:17:46.603 "name": "BaseBdev3", 00:17:46.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.603 "is_configured": false, 00:17:46.603 "data_offset": 0, 00:17:46.603 "data_size": 0 00:17:46.603 } 00:17:46.603 ] 00:17:46.603 }' 00:17:46.603 07:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:46.603 07:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.169 07:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:47.426 [2024-07-12 07:28:21.254390] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:47.427 [2024-07-12 07:28:21.254746] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:17:47.427 07:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:47.685 [2024-07-12 07:28:21.518517] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:47.685 [2024-07-12 07:28:21.521325] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:47.685 [2024-07-12 07:28:21.521534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:47.685 [2024-07-12 07:28:21.521650] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:47.685 [2024-07-12 07:28:21.521758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:47.685 07:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:47.685 07:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:47.685 07:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:47.685 07:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:47.685 07:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:47.685 07:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 
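The waitforbdev helper traced a little earlier for BaseBdev1 (autotest_common.sh@895-@902) is the small wrapper the test uses to block until a named bdev exists; the local i declared at @897 goes unused on this path. A sketch consistent with the trace, reusing the rpc_py shorthand from the sketch above and assuming the 2000 ms default timeout assigned at @898:

    waitforbdev() {
        local bdev_name=$1 bdev_timeout=$2
        [[ -z $bdev_timeout ]] && bdev_timeout=2000   # @898: default to 2000 ms
        # @900: flush pending examine callbacks so bdev claims settle first
        $rpc_py bdev_wait_for_examine
        # @902: the target itself waits up to bdev_timeout ms for the bdev to appear
        $rpc_py bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
    }

The locals of the verify_raid_bdev_state call in progress resume just below.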
00:17:47.685 07:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:47.685 07:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:47.685 07:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:47.685 07:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:47.685 07:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:47.685 07:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:47.685 07:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.685 07:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.943 07:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:47.943 "name": "Existed_Raid", 00:17:47.943 "uuid": "a51b8d4d-2432-4f7b-911e-51d74ec19412", 00:17:47.943 "strip_size_kb": 64, 00:17:47.943 "state": "configuring", 00:17:47.943 "raid_level": "raid0", 00:17:47.943 "superblock": true, 00:17:47.943 "num_base_bdevs": 3, 00:17:47.943 "num_base_bdevs_discovered": 1, 00:17:47.943 "num_base_bdevs_operational": 3, 00:17:47.943 "base_bdevs_list": [ 00:17:47.943 { 00:17:47.943 "name": "BaseBdev1", 00:17:47.943 "uuid": "4478edac-5e0e-4b45-8b72-bb373bec7817", 00:17:47.943 "is_configured": true, 00:17:47.943 "data_offset": 2048, 00:17:47.943 "data_size": 63488 00:17:47.943 }, 00:17:47.943 { 00:17:47.943 "name": "BaseBdev2", 00:17:47.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.943 "is_configured": false, 00:17:47.943 "data_offset": 0, 00:17:47.943 "data_size": 0 00:17:47.943 }, 00:17:47.943 { 00:17:47.943 "name": "BaseBdev3", 00:17:47.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.943 "is_configured": false, 00:17:47.943 "data_offset": 0, 00:17:47.943 "data_size": 0 00:17:47.943 } 00:17:47.943 ] 00:17:47.943 }' 00:17:47.943 07:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:47.943 07:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.510 07:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:48.768 [2024-07-12 07:28:22.595259] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:48.768 BaseBdev2 00:17:48.768 07:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:48.768 07:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:17:48.768 07:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:48.768 07:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:48.768 07:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:48.768 07:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:48.768 07:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:49.026 07:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:49.286 [ 00:17:49.286 { 00:17:49.286 "name": "BaseBdev2", 00:17:49.286 "aliases": [ 00:17:49.286 "a803f543-1f86-433e-977d-7a4cbc46f772" 00:17:49.286 ], 00:17:49.286 "product_name": "Malloc disk", 00:17:49.286 "block_size": 512, 00:17:49.286 "num_blocks": 65536, 00:17:49.286 "uuid": "a803f543-1f86-433e-977d-7a4cbc46f772", 00:17:49.286 "assigned_rate_limits": { 00:17:49.286 "rw_ios_per_sec": 0, 00:17:49.286 "rw_mbytes_per_sec": 0, 00:17:49.286 "r_mbytes_per_sec": 0, 00:17:49.286 "w_mbytes_per_sec": 0 00:17:49.286 }, 00:17:49.286 "claimed": true, 00:17:49.286 "claim_type": "exclusive_write", 00:17:49.286 "zoned": false, 00:17:49.286 "supported_io_types": { 00:17:49.286 "read": true, 00:17:49.286 "write": true, 00:17:49.286 "unmap": true, 00:17:49.286 "write_zeroes": true, 00:17:49.286 "flush": true, 00:17:49.286 "reset": true, 00:17:49.286 "compare": false, 00:17:49.286 "compare_and_write": false, 00:17:49.286 "abort": true, 00:17:49.286 "nvme_admin": false, 00:17:49.286 "nvme_io": false 00:17:49.286 }, 00:17:49.286 "memory_domains": [ 00:17:49.286 { 00:17:49.286 "dma_device_id": "system", 00:17:49.286 "dma_device_type": 1 00:17:49.286 }, 00:17:49.286 { 00:17:49.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.286 "dma_device_type": 2 00:17:49.286 } 00:17:49.286 ], 00:17:49.286 "driver_specific": {} 00:17:49.286 } 00:17:49.286 ] 00:17:49.286 07:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:49.286 07:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:49.286 07:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:49.286 07:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:49.286 07:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:49.286 07:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:49.286 07:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:49.286 07:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:49.286 07:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:49.286 07:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:49.286 07:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:49.286 07:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:49.286 07:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:49.286 07:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.286 07:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.597 07:28:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:49.597 "name": "Existed_Raid", 00:17:49.597 "uuid": "a51b8d4d-2432-4f7b-911e-51d74ec19412", 00:17:49.597 "strip_size_kb": 64, 00:17:49.597 "state": "configuring", 00:17:49.597 "raid_level": "raid0", 00:17:49.597 "superblock": true, 00:17:49.597 "num_base_bdevs": 3, 00:17:49.597 "num_base_bdevs_discovered": 2, 00:17:49.597 "num_base_bdevs_operational": 3, 00:17:49.597 "base_bdevs_list": [ 00:17:49.597 { 00:17:49.597 "name": "BaseBdev1", 00:17:49.597 "uuid": "4478edac-5e0e-4b45-8b72-bb373bec7817", 00:17:49.597 "is_configured": true, 00:17:49.597 "data_offset": 2048, 00:17:49.597 "data_size": 63488 00:17:49.597 }, 00:17:49.597 { 00:17:49.597 "name": "BaseBdev2", 00:17:49.597 "uuid": "a803f543-1f86-433e-977d-7a4cbc46f772", 00:17:49.597 "is_configured": true, 00:17:49.597 "data_offset": 2048, 00:17:49.597 "data_size": 63488 00:17:49.597 }, 00:17:49.597 { 00:17:49.597 "name": "BaseBdev3", 00:17:49.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.597 "is_configured": false, 00:17:49.597 "data_offset": 0, 00:17:49.597 "data_size": 0 00:17:49.597 } 00:17:49.597 ] 00:17:49.597 }' 00:17:49.597 07:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:49.597 07:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.176 07:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:50.434 [2024-07-12 07:28:24.221316] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:50.434 [2024-07-12 07:28:24.221835] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:17:50.434 [2024-07-12 07:28:24.221969] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:50.434 [2024-07-12 07:28:24.222194] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:17:50.434 [2024-07-12 07:28:24.222685] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:17:50.434 [2024-07-12 07:28:24.222828] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:17:50.434 [2024-07-12 07:28:24.223111] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.434 BaseBdev3 00:17:50.434 07:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:50.434 07:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:17:50.434 07:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:50.434 07:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:50.434 07:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:50.434 07:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:50.434 07:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:50.693 07:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:17:50.950 [ 00:17:50.950 { 00:17:50.950 "name": "BaseBdev3", 00:17:50.950 "aliases": [ 00:17:50.950 "58f0d167-574b-49ab-ac63-6d68d552b1f8" 00:17:50.950 ], 00:17:50.950 "product_name": "Malloc disk", 00:17:50.950 "block_size": 512, 00:17:50.950 "num_blocks": 65536, 00:17:50.950 "uuid": "58f0d167-574b-49ab-ac63-6d68d552b1f8", 00:17:50.950 "assigned_rate_limits": { 00:17:50.950 "rw_ios_per_sec": 0, 00:17:50.950 "rw_mbytes_per_sec": 0, 00:17:50.950 "r_mbytes_per_sec": 0, 00:17:50.950 "w_mbytes_per_sec": 0 00:17:50.950 }, 00:17:50.950 "claimed": true, 00:17:50.950 "claim_type": "exclusive_write", 00:17:50.950 "zoned": false, 00:17:50.950 "supported_io_types": { 00:17:50.950 "read": true, 00:17:50.950 "write": true, 00:17:50.950 "unmap": true, 00:17:50.950 "write_zeroes": true, 00:17:50.950 "flush": true, 00:17:50.950 "reset": true, 00:17:50.950 "compare": false, 00:17:50.950 "compare_and_write": false, 00:17:50.950 "abort": true, 00:17:50.950 "nvme_admin": false, 00:17:50.950 "nvme_io": false 00:17:50.950 }, 00:17:50.950 "memory_domains": [ 00:17:50.950 { 00:17:50.950 "dma_device_id": "system", 00:17:50.950 "dma_device_type": 1 00:17:50.950 }, 00:17:50.950 { 00:17:50.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.950 "dma_device_type": 2 00:17:50.950 } 00:17:50.950 ], 00:17:50.950 "driver_specific": {} 00:17:50.950 } 00:17:50.950 ] 00:17:50.950 07:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:50.950 07:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:50.950 07:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:50.950 07:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:50.950 07:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:50.950 07:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:50.950 07:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:50.950 07:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:50.950 07:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:50.950 07:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:50.950 07:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:50.950 07:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:50.950 07:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:50.950 07:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.950 07:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.206 07:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:51.206 "name": "Existed_Raid", 00:17:51.206 "uuid": "a51b8d4d-2432-4f7b-911e-51d74ec19412", 00:17:51.206 "strip_size_kb": 64, 00:17:51.206 "state": "online", 00:17:51.206 "raid_level": "raid0", 00:17:51.206 "superblock": true, 00:17:51.206 
"num_base_bdevs": 3, 00:17:51.206 "num_base_bdevs_discovered": 3, 00:17:51.206 "num_base_bdevs_operational": 3, 00:17:51.206 "base_bdevs_list": [ 00:17:51.206 { 00:17:51.206 "name": "BaseBdev1", 00:17:51.206 "uuid": "4478edac-5e0e-4b45-8b72-bb373bec7817", 00:17:51.206 "is_configured": true, 00:17:51.206 "data_offset": 2048, 00:17:51.206 "data_size": 63488 00:17:51.206 }, 00:17:51.206 { 00:17:51.206 "name": "BaseBdev2", 00:17:51.206 "uuid": "a803f543-1f86-433e-977d-7a4cbc46f772", 00:17:51.206 "is_configured": true, 00:17:51.206 "data_offset": 2048, 00:17:51.206 "data_size": 63488 00:17:51.206 }, 00:17:51.206 { 00:17:51.206 "name": "BaseBdev3", 00:17:51.206 "uuid": "58f0d167-574b-49ab-ac63-6d68d552b1f8", 00:17:51.206 "is_configured": true, 00:17:51.206 "data_offset": 2048, 00:17:51.206 "data_size": 63488 00:17:51.206 } 00:17:51.206 ] 00:17:51.206 }' 00:17:51.206 07:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:51.206 07:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.770 07:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:51.770 07:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:51.770 07:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:51.770 07:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:51.770 07:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:51.770 07:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:51.770 07:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:51.770 07:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:52.028 [2024-07-12 07:28:25.766047] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.028 07:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:52.028 "name": "Existed_Raid", 00:17:52.028 "aliases": [ 00:17:52.028 "a51b8d4d-2432-4f7b-911e-51d74ec19412" 00:17:52.028 ], 00:17:52.028 "product_name": "Raid Volume", 00:17:52.028 "block_size": 512, 00:17:52.028 "num_blocks": 190464, 00:17:52.028 "uuid": "a51b8d4d-2432-4f7b-911e-51d74ec19412", 00:17:52.028 "assigned_rate_limits": { 00:17:52.028 "rw_ios_per_sec": 0, 00:17:52.028 "rw_mbytes_per_sec": 0, 00:17:52.028 "r_mbytes_per_sec": 0, 00:17:52.028 "w_mbytes_per_sec": 0 00:17:52.028 }, 00:17:52.028 "claimed": false, 00:17:52.028 "zoned": false, 00:17:52.028 "supported_io_types": { 00:17:52.028 "read": true, 00:17:52.028 "write": true, 00:17:52.028 "unmap": true, 00:17:52.028 "write_zeroes": true, 00:17:52.028 "flush": true, 00:17:52.028 "reset": true, 00:17:52.028 "compare": false, 00:17:52.028 "compare_and_write": false, 00:17:52.028 "abort": false, 00:17:52.028 "nvme_admin": false, 00:17:52.028 "nvme_io": false 00:17:52.028 }, 00:17:52.028 "memory_domains": [ 00:17:52.028 { 00:17:52.028 "dma_device_id": "system", 00:17:52.028 "dma_device_type": 1 00:17:52.028 }, 00:17:52.028 { 00:17:52.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.028 "dma_device_type": 2 00:17:52.028 }, 00:17:52.028 { 00:17:52.028 "dma_device_id": "system", 
00:17:52.028 "dma_device_type": 1 00:17:52.028 }, 00:17:52.028 { 00:17:52.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.028 "dma_device_type": 2 00:17:52.028 }, 00:17:52.028 { 00:17:52.028 "dma_device_id": "system", 00:17:52.028 "dma_device_type": 1 00:17:52.028 }, 00:17:52.028 { 00:17:52.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.028 "dma_device_type": 2 00:17:52.028 } 00:17:52.028 ], 00:17:52.028 "driver_specific": { 00:17:52.028 "raid": { 00:17:52.028 "uuid": "a51b8d4d-2432-4f7b-911e-51d74ec19412", 00:17:52.028 "strip_size_kb": 64, 00:17:52.028 "state": "online", 00:17:52.028 "raid_level": "raid0", 00:17:52.028 "superblock": true, 00:17:52.028 "num_base_bdevs": 3, 00:17:52.028 "num_base_bdevs_discovered": 3, 00:17:52.028 "num_base_bdevs_operational": 3, 00:17:52.028 "base_bdevs_list": [ 00:17:52.028 { 00:17:52.028 "name": "BaseBdev1", 00:17:52.028 "uuid": "4478edac-5e0e-4b45-8b72-bb373bec7817", 00:17:52.028 "is_configured": true, 00:17:52.028 "data_offset": 2048, 00:17:52.028 "data_size": 63488 00:17:52.028 }, 00:17:52.028 { 00:17:52.028 "name": "BaseBdev2", 00:17:52.028 "uuid": "a803f543-1f86-433e-977d-7a4cbc46f772", 00:17:52.028 "is_configured": true, 00:17:52.028 "data_offset": 2048, 00:17:52.028 "data_size": 63488 00:17:52.028 }, 00:17:52.028 { 00:17:52.028 "name": "BaseBdev3", 00:17:52.028 "uuid": "58f0d167-574b-49ab-ac63-6d68d552b1f8", 00:17:52.028 "is_configured": true, 00:17:52.028 "data_offset": 2048, 00:17:52.028 "data_size": 63488 00:17:52.028 } 00:17:52.028 ] 00:17:52.028 } 00:17:52.028 } 00:17:52.028 }' 00:17:52.028 07:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:52.028 07:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:52.028 BaseBdev2 00:17:52.028 BaseBdev3' 00:17:52.028 07:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:52.028 07:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:52.028 07:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:52.287 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:52.287 "name": "BaseBdev1", 00:17:52.287 "aliases": [ 00:17:52.287 "4478edac-5e0e-4b45-8b72-bb373bec7817" 00:17:52.287 ], 00:17:52.287 "product_name": "Malloc disk", 00:17:52.287 "block_size": 512, 00:17:52.287 "num_blocks": 65536, 00:17:52.287 "uuid": "4478edac-5e0e-4b45-8b72-bb373bec7817", 00:17:52.287 "assigned_rate_limits": { 00:17:52.287 "rw_ios_per_sec": 0, 00:17:52.287 "rw_mbytes_per_sec": 0, 00:17:52.287 "r_mbytes_per_sec": 0, 00:17:52.287 "w_mbytes_per_sec": 0 00:17:52.287 }, 00:17:52.287 "claimed": true, 00:17:52.287 "claim_type": "exclusive_write", 00:17:52.287 "zoned": false, 00:17:52.287 "supported_io_types": { 00:17:52.287 "read": true, 00:17:52.287 "write": true, 00:17:52.287 "unmap": true, 00:17:52.287 "write_zeroes": true, 00:17:52.287 "flush": true, 00:17:52.287 "reset": true, 00:17:52.287 "compare": false, 00:17:52.287 "compare_and_write": false, 00:17:52.287 "abort": true, 00:17:52.287 "nvme_admin": false, 00:17:52.287 "nvme_io": false 00:17:52.287 }, 00:17:52.287 "memory_domains": [ 00:17:52.287 { 00:17:52.287 "dma_device_id": "system", 00:17:52.287 "dma_device_type": 1 00:17:52.287 }, 
00:17:52.287 { 00:17:52.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.287 "dma_device_type": 2 00:17:52.287 } 00:17:52.287 ], 00:17:52.287 "driver_specific": {} 00:17:52.287 }' 00:17:52.287 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:52.287 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:52.545 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:52.545 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:52.545 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:52.545 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:52.545 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:52.545 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:52.545 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:52.545 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:52.545 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:52.802 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:52.802 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:52.802 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:52.802 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:53.060 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:53.060 "name": "BaseBdev2", 00:17:53.060 "aliases": [ 00:17:53.060 "a803f543-1f86-433e-977d-7a4cbc46f772" 00:17:53.060 ], 00:17:53.060 "product_name": "Malloc disk", 00:17:53.060 "block_size": 512, 00:17:53.060 "num_blocks": 65536, 00:17:53.060 "uuid": "a803f543-1f86-433e-977d-7a4cbc46f772", 00:17:53.060 "assigned_rate_limits": { 00:17:53.060 "rw_ios_per_sec": 0, 00:17:53.060 "rw_mbytes_per_sec": 0, 00:17:53.060 "r_mbytes_per_sec": 0, 00:17:53.060 "w_mbytes_per_sec": 0 00:17:53.060 }, 00:17:53.060 "claimed": true, 00:17:53.060 "claim_type": "exclusive_write", 00:17:53.060 "zoned": false, 00:17:53.060 "supported_io_types": { 00:17:53.060 "read": true, 00:17:53.060 "write": true, 00:17:53.060 "unmap": true, 00:17:53.060 "write_zeroes": true, 00:17:53.060 "flush": true, 00:17:53.060 "reset": true, 00:17:53.060 "compare": false, 00:17:53.060 "compare_and_write": false, 00:17:53.060 "abort": true, 00:17:53.060 "nvme_admin": false, 00:17:53.060 "nvme_io": false 00:17:53.060 }, 00:17:53.060 "memory_domains": [ 00:17:53.060 { 00:17:53.060 "dma_device_id": "system", 00:17:53.060 "dma_device_type": 1 00:17:53.060 }, 00:17:53.060 { 00:17:53.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.060 "dma_device_type": 2 00:17:53.060 } 00:17:53.060 ], 00:17:53.060 "driver_specific": {} 00:17:53.060 }' 00:17:53.060 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.060 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.060 07:28:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:53.060 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.060 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.060 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:53.060 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:53.060 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:53.318 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:53.318 07:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:53.318 07:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:53.318 07:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:53.318 07:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:53.318 07:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:53.318 07:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:53.577 07:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:53.577 "name": "BaseBdev3", 00:17:53.577 "aliases": [ 00:17:53.577 "58f0d167-574b-49ab-ac63-6d68d552b1f8" 00:17:53.577 ], 00:17:53.577 "product_name": "Malloc disk", 00:17:53.577 "block_size": 512, 00:17:53.577 "num_blocks": 65536, 00:17:53.577 "uuid": "58f0d167-574b-49ab-ac63-6d68d552b1f8", 00:17:53.577 "assigned_rate_limits": { 00:17:53.577 "rw_ios_per_sec": 0, 00:17:53.577 "rw_mbytes_per_sec": 0, 00:17:53.577 "r_mbytes_per_sec": 0, 00:17:53.577 "w_mbytes_per_sec": 0 00:17:53.577 }, 00:17:53.577 "claimed": true, 00:17:53.577 "claim_type": "exclusive_write", 00:17:53.577 "zoned": false, 00:17:53.577 "supported_io_types": { 00:17:53.577 "read": true, 00:17:53.577 "write": true, 00:17:53.577 "unmap": true, 00:17:53.577 "write_zeroes": true, 00:17:53.577 "flush": true, 00:17:53.577 "reset": true, 00:17:53.577 "compare": false, 00:17:53.577 "compare_and_write": false, 00:17:53.577 "abort": true, 00:17:53.577 "nvme_admin": false, 00:17:53.577 "nvme_io": false 00:17:53.577 }, 00:17:53.577 "memory_domains": [ 00:17:53.577 { 00:17:53.577 "dma_device_id": "system", 00:17:53.577 "dma_device_type": 1 00:17:53.577 }, 00:17:53.577 { 00:17:53.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.577 "dma_device_type": 2 00:17:53.577 } 00:17:53.577 ], 00:17:53.577 "driver_specific": {} 00:17:53.577 }' 00:17:53.577 07:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.577 07:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.577 07:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:53.577 07:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.835 07:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.835 07:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:53.835 07:28:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:53.835 07:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:53.835 07:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:53.835 07:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:53.835 07:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:53.835 07:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:53.835 07:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:54.094 [2024-07-12 07:28:27.974270] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:54.094 [2024-07-12 07:28:27.974529] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:54.094 [2024-07-12 07:28:27.974788] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.353 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:54.353 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:17:54.353 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:54.353 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:17:54.353 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:54.353 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:17:54.353 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:54.353 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:54.353 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:54.353 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:54.353 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:54.353 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:54.353 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:54.353 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:54.353 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:54.353 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.353 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.613 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:54.613 "name": "Existed_Raid", 00:17:54.613 "uuid": "a51b8d4d-2432-4f7b-911e-51d74ec19412", 00:17:54.613 "strip_size_kb": 64, 00:17:54.613 "state": "offline", 00:17:54.613 "raid_level": "raid0", 00:17:54.613 "superblock": true, 00:17:54.613 
"num_base_bdevs": 3, 00:17:54.613 "num_base_bdevs_discovered": 2, 00:17:54.613 "num_base_bdevs_operational": 2, 00:17:54.613 "base_bdevs_list": [ 00:17:54.613 { 00:17:54.613 "name": null, 00:17:54.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.613 "is_configured": false, 00:17:54.613 "data_offset": 2048, 00:17:54.613 "data_size": 63488 00:17:54.613 }, 00:17:54.613 { 00:17:54.613 "name": "BaseBdev2", 00:17:54.613 "uuid": "a803f543-1f86-433e-977d-7a4cbc46f772", 00:17:54.613 "is_configured": true, 00:17:54.613 "data_offset": 2048, 00:17:54.613 "data_size": 63488 00:17:54.613 }, 00:17:54.613 { 00:17:54.613 "name": "BaseBdev3", 00:17:54.613 "uuid": "58f0d167-574b-49ab-ac63-6d68d552b1f8", 00:17:54.613 "is_configured": true, 00:17:54.613 "data_offset": 2048, 00:17:54.613 "data_size": 63488 00:17:54.613 } 00:17:54.613 ] 00:17:54.613 }' 00:17:54.613 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:54.613 07:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.182 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:55.182 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:55.182 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.182 07:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:55.441 07:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:55.441 07:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:55.441 07:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:55.700 [2024-07-12 07:28:29.401818] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:55.700 07:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:55.700 07:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:55.700 07:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.700 07:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:55.959 07:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:55.959 07:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:55.959 07:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:56.230 [2024-07-12 07:28:29.879160] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:56.230 [2024-07-12 07:28:29.879641] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:17:56.230 07:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:56.230 07:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:56.230 
07:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.230 07:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:56.492 07:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:56.492 07:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:56.492 07:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:17:56.492 07:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:56.492 07:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:56.492 07:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:56.492 BaseBdev2 00:17:56.492 07:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:56.492 07:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:17:56.492 07:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:56.492 07:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:56.492 07:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:56.492 07:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:56.492 07:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:56.750 07:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:57.009 [ 00:17:57.009 { 00:17:57.009 "name": "BaseBdev2", 00:17:57.009 "aliases": [ 00:17:57.009 "4660f42c-6b42-4d12-a4cf-7406e769ecb4" 00:17:57.009 ], 00:17:57.009 "product_name": "Malloc disk", 00:17:57.009 "block_size": 512, 00:17:57.009 "num_blocks": 65536, 00:17:57.010 "uuid": "4660f42c-6b42-4d12-a4cf-7406e769ecb4", 00:17:57.010 "assigned_rate_limits": { 00:17:57.010 "rw_ios_per_sec": 0, 00:17:57.010 "rw_mbytes_per_sec": 0, 00:17:57.010 "r_mbytes_per_sec": 0, 00:17:57.010 "w_mbytes_per_sec": 0 00:17:57.010 }, 00:17:57.010 "claimed": false, 00:17:57.010 "zoned": false, 00:17:57.010 "supported_io_types": { 00:17:57.010 "read": true, 00:17:57.010 "write": true, 00:17:57.010 "unmap": true, 00:17:57.010 "write_zeroes": true, 00:17:57.010 "flush": true, 00:17:57.010 "reset": true, 00:17:57.010 "compare": false, 00:17:57.010 "compare_and_write": false, 00:17:57.010 "abort": true, 00:17:57.010 "nvme_admin": false, 00:17:57.010 "nvme_io": false 00:17:57.010 }, 00:17:57.010 "memory_domains": [ 00:17:57.010 { 00:17:57.010 "dma_device_id": "system", 00:17:57.010 "dma_device_type": 1 00:17:57.010 }, 00:17:57.010 { 00:17:57.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.010 "dma_device_type": 2 00:17:57.010 } 00:17:57.010 ], 00:17:57.010 "driver_specific": {} 00:17:57.010 } 00:17:57.010 ] 00:17:57.010 07:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 
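The waitforbdev sequence that just completed for BaseBdev2 is the test's create-and-wait idiom: create a malloc bdev over the RPC socket, let bdev examine callbacks settle, then poll for the new bdev with a timeout. A minimal standalone sketch of the same three RPC calls, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock (the commands appear verbatim in the trace; only the comments are additions):

# Create a 32 MiB malloc bdev with 512-byte blocks, named BaseBdev2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
# Block until every registered examine callback has finished
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
# Poll for the bdev, failing if it is not visible within 2000 ms
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000

The 32 MiB total size at a 512-byte block size is what yields the "num_blocks": 65536 seen in every bdev_get_bdevs dump above (32 * 1024 * 1024 / 512 = 65536).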
00:17:57.010 07:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:57.010 07:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:57.010 07:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:57.269 BaseBdev3 00:17:57.269 07:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:57.269 07:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:17:57.269 07:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:57.269 07:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:57.269 07:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:57.269 07:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:57.269 07:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:57.527 07:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:57.785 [ 00:17:57.785 { 00:17:57.785 "name": "BaseBdev3", 00:17:57.785 "aliases": [ 00:17:57.786 "39397154-8091-4758-881a-a008369e6a5d" 00:17:57.786 ], 00:17:57.786 "product_name": "Malloc disk", 00:17:57.786 "block_size": 512, 00:17:57.786 "num_blocks": 65536, 00:17:57.786 "uuid": "39397154-8091-4758-881a-a008369e6a5d", 00:17:57.786 "assigned_rate_limits": { 00:17:57.786 "rw_ios_per_sec": 0, 00:17:57.786 "rw_mbytes_per_sec": 0, 00:17:57.786 "r_mbytes_per_sec": 0, 00:17:57.786 "w_mbytes_per_sec": 0 00:17:57.786 }, 00:17:57.786 "claimed": false, 00:17:57.786 "zoned": false, 00:17:57.786 "supported_io_types": { 00:17:57.786 "read": true, 00:17:57.786 "write": true, 00:17:57.786 "unmap": true, 00:17:57.786 "write_zeroes": true, 00:17:57.786 "flush": true, 00:17:57.786 "reset": true, 00:17:57.786 "compare": false, 00:17:57.786 "compare_and_write": false, 00:17:57.786 "abort": true, 00:17:57.786 "nvme_admin": false, 00:17:57.786 "nvme_io": false 00:17:57.786 }, 00:17:57.786 "memory_domains": [ 00:17:57.786 { 00:17:57.786 "dma_device_id": "system", 00:17:57.786 "dma_device_type": 1 00:17:57.786 }, 00:17:57.786 { 00:17:57.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.786 "dma_device_type": 2 00:17:57.786 } 00:17:57.786 ], 00:17:57.786 "driver_specific": {} 00:17:57.786 } 00:17:57.786 ] 00:17:57.786 07:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:57.786 07:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:57.786 07:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:57.786 07:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:57.786 [2024-07-12 07:28:31.661775] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:57.786 
[2024-07-12 07:28:31.662234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:57.786 [2024-07-12 07:28:31.662430] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:57.786 [2024-07-12 07:28:31.665987] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:58.044 07:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:58.044 07:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:58.044 07:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:58.044 07:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:58.044 07:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:58.044 07:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:58.044 07:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:58.044 07:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:58.044 07:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:58.044 07:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:58.044 07:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.044 07:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.303 07:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:58.303 "name": "Existed_Raid", 00:17:58.303 "uuid": "810c8e12-a656-411a-87e7-79317952dc1c", 00:17:58.303 "strip_size_kb": 64, 00:17:58.303 "state": "configuring", 00:17:58.303 "raid_level": "raid0", 00:17:58.303 "superblock": true, 00:17:58.303 "num_base_bdevs": 3, 00:17:58.303 "num_base_bdevs_discovered": 2, 00:17:58.303 "num_base_bdevs_operational": 3, 00:17:58.303 "base_bdevs_list": [ 00:17:58.303 { 00:17:58.303 "name": "BaseBdev1", 00:17:58.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.303 "is_configured": false, 00:17:58.303 "data_offset": 0, 00:17:58.303 "data_size": 0 00:17:58.303 }, 00:17:58.303 { 00:17:58.303 "name": "BaseBdev2", 00:17:58.303 "uuid": "4660f42c-6b42-4d12-a4cf-7406e769ecb4", 00:17:58.303 "is_configured": true, 00:17:58.303 "data_offset": 2048, 00:17:58.303 "data_size": 63488 00:17:58.303 }, 00:17:58.303 { 00:17:58.303 "name": "BaseBdev3", 00:17:58.303 "uuid": "39397154-8091-4758-881a-a008369e6a5d", 00:17:58.303 "is_configured": true, 00:17:58.303 "data_offset": 2048, 00:17:58.303 "data_size": 63488 00:17:58.303 } 00:17:58.303 ] 00:17:58.303 }' 00:17:58.303 07:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:58.303 07:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.870 07:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:58.870 [2024-07-12 07:28:32.750679] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:59.128 07:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:59.128 07:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:59.128 07:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:59.128 07:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:59.128 07:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:59.128 07:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:59.128 07:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:59.128 07:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:59.128 07:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:59.128 07:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:59.128 07:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.128 07:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.128 07:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:59.128 "name": "Existed_Raid", 00:17:59.129 "uuid": "810c8e12-a656-411a-87e7-79317952dc1c", 00:17:59.129 "strip_size_kb": 64, 00:17:59.129 "state": "configuring", 00:17:59.129 "raid_level": "raid0", 00:17:59.129 "superblock": true, 00:17:59.129 "num_base_bdevs": 3, 00:17:59.129 "num_base_bdevs_discovered": 1, 00:17:59.129 "num_base_bdevs_operational": 3, 00:17:59.129 "base_bdevs_list": [ 00:17:59.129 { 00:17:59.129 "name": "BaseBdev1", 00:17:59.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.129 "is_configured": false, 00:17:59.129 "data_offset": 0, 00:17:59.129 "data_size": 0 00:17:59.129 }, 00:17:59.129 { 00:17:59.129 "name": null, 00:17:59.129 "uuid": "4660f42c-6b42-4d12-a4cf-7406e769ecb4", 00:17:59.129 "is_configured": false, 00:17:59.129 "data_offset": 2048, 00:17:59.129 "data_size": 63488 00:17:59.129 }, 00:17:59.129 { 00:17:59.129 "name": "BaseBdev3", 00:17:59.129 "uuid": "39397154-8091-4758-881a-a008369e6a5d", 00:17:59.129 "is_configured": true, 00:17:59.129 "data_offset": 2048, 00:17:59.129 "data_size": 63488 00:17:59.129 } 00:17:59.129 ] 00:17:59.129 }' 00:17:59.129 07:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:59.129 07:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.703 07:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.703 07:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:59.977 07:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:59.977 07:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:00.236 [2024-07-12 07:28:33.979460] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:00.236 BaseBdev1 00:18:00.236 07:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:00.236 07:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:18:00.236 07:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:00.236 07:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:00.236 07:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:00.236 07:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:00.236 07:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:00.495 07:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:00.754 [ 00:18:00.754 { 00:18:00.754 "name": "BaseBdev1", 00:18:00.754 "aliases": [ 00:18:00.754 "05b105f7-72b4-499f-b41f-972a98e24b65" 00:18:00.754 ], 00:18:00.754 "product_name": "Malloc disk", 00:18:00.754 "block_size": 512, 00:18:00.754 "num_blocks": 65536, 00:18:00.754 "uuid": "05b105f7-72b4-499f-b41f-972a98e24b65", 00:18:00.754 "assigned_rate_limits": { 00:18:00.754 "rw_ios_per_sec": 0, 00:18:00.754 "rw_mbytes_per_sec": 0, 00:18:00.754 "r_mbytes_per_sec": 0, 00:18:00.754 "w_mbytes_per_sec": 0 00:18:00.754 }, 00:18:00.754 "claimed": true, 00:18:00.754 "claim_type": "exclusive_write", 00:18:00.754 "zoned": false, 00:18:00.754 "supported_io_types": { 00:18:00.754 "read": true, 00:18:00.754 "write": true, 00:18:00.754 "unmap": true, 00:18:00.754 "write_zeroes": true, 00:18:00.754 "flush": true, 00:18:00.754 "reset": true, 00:18:00.754 "compare": false, 00:18:00.754 "compare_and_write": false, 00:18:00.754 "abort": true, 00:18:00.754 "nvme_admin": false, 00:18:00.754 "nvme_io": false 00:18:00.754 }, 00:18:00.754 "memory_domains": [ 00:18:00.754 { 00:18:00.754 "dma_device_id": "system", 00:18:00.754 "dma_device_type": 1 00:18:00.754 }, 00:18:00.754 { 00:18:00.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.754 "dma_device_type": 2 00:18:00.754 } 00:18:00.754 ], 00:18:00.754 "driver_specific": {} 00:18:00.754 } 00:18:00.754 ] 00:18:00.754 07:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:00.754 07:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:00.754 07:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:00.755 07:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:00.755 07:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:00.755 07:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:00.755 07:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
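The verify_raid_bdev_state helper being entered here reduces to one query, repeated on every "@126" line of the trace: dump all raid bdevs and select the array under test by name. A sketch of that query, with the jq filter copied verbatim from the trace (the field-by-field comparison against the expected "configuring raid0 64 3" arguments happens inside bdev_raid.sh and is only summarized here):

# Fetch every raid bdev known to the target, then keep only Existed_Raid
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid")'

The helper then asserts on the .state, .raid_level, .strip_size_kb and num_base_bdevs* fields of the selected object, all of which are visible in the JSON dumps that follow.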
00:18:00.755 07:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:00.755 07:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:00.755 07:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:00.755 07:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:00.755 07:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.755 07:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.014 07:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:01.014 "name": "Existed_Raid", 00:18:01.014 "uuid": "810c8e12-a656-411a-87e7-79317952dc1c", 00:18:01.014 "strip_size_kb": 64, 00:18:01.014 "state": "configuring", 00:18:01.014 "raid_level": "raid0", 00:18:01.014 "superblock": true, 00:18:01.014 "num_base_bdevs": 3, 00:18:01.014 "num_base_bdevs_discovered": 2, 00:18:01.014 "num_base_bdevs_operational": 3, 00:18:01.014 "base_bdevs_list": [ 00:18:01.014 { 00:18:01.014 "name": "BaseBdev1", 00:18:01.014 "uuid": "05b105f7-72b4-499f-b41f-972a98e24b65", 00:18:01.014 "is_configured": true, 00:18:01.014 "data_offset": 2048, 00:18:01.014 "data_size": 63488 00:18:01.014 }, 00:18:01.014 { 00:18:01.014 "name": null, 00:18:01.014 "uuid": "4660f42c-6b42-4d12-a4cf-7406e769ecb4", 00:18:01.014 "is_configured": false, 00:18:01.014 "data_offset": 2048, 00:18:01.014 "data_size": 63488 00:18:01.014 }, 00:18:01.014 { 00:18:01.014 "name": "BaseBdev3", 00:18:01.014 "uuid": "39397154-8091-4758-881a-a008369e6a5d", 00:18:01.014 "is_configured": true, 00:18:01.014 "data_offset": 2048, 00:18:01.014 "data_size": 63488 00:18:01.014 } 00:18:01.014 ] 00:18:01.014 }' 00:18:01.014 07:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:01.014 07:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.583 07:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.583 07:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:01.842 07:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:01.842 07:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:01.842 [2024-07-12 07:28:35.691890] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:01.842 07:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:01.842 07:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:01.842 07:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:01.842 07:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:01.842 07:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:01.842 
07:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:01.842 07:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:01.842 07:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:01.842 07:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:01.842 07:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:01.842 07:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.842 07:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.101 07:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:02.101 "name": "Existed_Raid", 00:18:02.101 "uuid": "810c8e12-a656-411a-87e7-79317952dc1c", 00:18:02.101 "strip_size_kb": 64, 00:18:02.101 "state": "configuring", 00:18:02.101 "raid_level": "raid0", 00:18:02.101 "superblock": true, 00:18:02.101 "num_base_bdevs": 3, 00:18:02.101 "num_base_bdevs_discovered": 1, 00:18:02.101 "num_base_bdevs_operational": 3, 00:18:02.101 "base_bdevs_list": [ 00:18:02.101 { 00:18:02.101 "name": "BaseBdev1", 00:18:02.101 "uuid": "05b105f7-72b4-499f-b41f-972a98e24b65", 00:18:02.101 "is_configured": true, 00:18:02.101 "data_offset": 2048, 00:18:02.101 "data_size": 63488 00:18:02.101 }, 00:18:02.101 { 00:18:02.101 "name": null, 00:18:02.101 "uuid": "4660f42c-6b42-4d12-a4cf-7406e769ecb4", 00:18:02.101 "is_configured": false, 00:18:02.101 "data_offset": 2048, 00:18:02.101 "data_size": 63488 00:18:02.101 }, 00:18:02.101 { 00:18:02.101 "name": null, 00:18:02.101 "uuid": "39397154-8091-4758-881a-a008369e6a5d", 00:18:02.101 "is_configured": false, 00:18:02.101 "data_offset": 2048, 00:18:02.101 "data_size": 63488 00:18:02.101 } 00:18:02.101 ] 00:18:02.101 }' 00:18:02.101 07:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:02.101 07:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.668 07:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.668 07:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:02.927 07:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:02.927 07:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:03.186 [2024-07-12 07:28:36.909917] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:03.186 07:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:03.186 07:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:03.186 07:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:03.186 07:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:18:03.186 07:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:03.186 07:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:03.186 07:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:03.186 07:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:03.186 07:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:03.186 07:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:03.186 07:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.186 07:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.445 07:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:03.445 "name": "Existed_Raid", 00:18:03.445 "uuid": "810c8e12-a656-411a-87e7-79317952dc1c", 00:18:03.445 "strip_size_kb": 64, 00:18:03.445 "state": "configuring", 00:18:03.445 "raid_level": "raid0", 00:18:03.445 "superblock": true, 00:18:03.445 "num_base_bdevs": 3, 00:18:03.445 "num_base_bdevs_discovered": 2, 00:18:03.445 "num_base_bdevs_operational": 3, 00:18:03.445 "base_bdevs_list": [ 00:18:03.445 { 00:18:03.445 "name": "BaseBdev1", 00:18:03.445 "uuid": "05b105f7-72b4-499f-b41f-972a98e24b65", 00:18:03.445 "is_configured": true, 00:18:03.445 "data_offset": 2048, 00:18:03.445 "data_size": 63488 00:18:03.445 }, 00:18:03.445 { 00:18:03.445 "name": null, 00:18:03.445 "uuid": "4660f42c-6b42-4d12-a4cf-7406e769ecb4", 00:18:03.445 "is_configured": false, 00:18:03.445 "data_offset": 2048, 00:18:03.445 "data_size": 63488 00:18:03.445 }, 00:18:03.445 { 00:18:03.445 "name": "BaseBdev3", 00:18:03.445 "uuid": "39397154-8091-4758-881a-a008369e6a5d", 00:18:03.445 "is_configured": true, 00:18:03.445 "data_offset": 2048, 00:18:03.445 "data_size": 63488 00:18:03.445 } 00:18:03.445 ] 00:18:03.445 }' 00:18:03.445 07:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:03.445 07:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.014 07:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.014 07:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:04.273 07:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:04.273 07:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:04.531 [2024-07-12 07:28:38.218432] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:04.531 07:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:04.531 07:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:04.531 07:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:18:04.531 07:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:04.531 07:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:04.531 07:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:04.531 07:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:04.531 07:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:04.531 07:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:04.531 07:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:04.531 07:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.531 07:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.789 07:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:04.789 "name": "Existed_Raid", 00:18:04.789 "uuid": "810c8e12-a656-411a-87e7-79317952dc1c", 00:18:04.789 "strip_size_kb": 64, 00:18:04.789 "state": "configuring", 00:18:04.789 "raid_level": "raid0", 00:18:04.789 "superblock": true, 00:18:04.789 "num_base_bdevs": 3, 00:18:04.789 "num_base_bdevs_discovered": 1, 00:18:04.789 "num_base_bdevs_operational": 3, 00:18:04.789 "base_bdevs_list": [ 00:18:04.789 { 00:18:04.789 "name": null, 00:18:04.789 "uuid": "05b105f7-72b4-499f-b41f-972a98e24b65", 00:18:04.789 "is_configured": false, 00:18:04.789 "data_offset": 2048, 00:18:04.789 "data_size": 63488 00:18:04.789 }, 00:18:04.789 { 00:18:04.789 "name": null, 00:18:04.789 "uuid": "4660f42c-6b42-4d12-a4cf-7406e769ecb4", 00:18:04.789 "is_configured": false, 00:18:04.789 "data_offset": 2048, 00:18:04.789 "data_size": 63488 00:18:04.789 }, 00:18:04.789 { 00:18:04.789 "name": "BaseBdev3", 00:18:04.789 "uuid": "39397154-8091-4758-881a-a008369e6a5d", 00:18:04.789 "is_configured": true, 00:18:04.789 "data_offset": 2048, 00:18:04.789 "data_size": 63488 00:18:04.789 } 00:18:04.789 ] 00:18:04.789 }' 00:18:04.789 07:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:04.789 07:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.356 07:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.356 07:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:05.615 07:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:05.615 07:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:05.873 [2024-07-12 07:28:39.598607] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:05.873 07:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:05.873 07:28:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:05.873 07:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:05.873 07:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:05.873 07:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:05.873 07:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:05.874 07:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:05.874 07:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:05.874 07:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:05.874 07:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:05.874 07:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.874 07:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.133 07:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:06.133 "name": "Existed_Raid", 00:18:06.133 "uuid": "810c8e12-a656-411a-87e7-79317952dc1c", 00:18:06.133 "strip_size_kb": 64, 00:18:06.133 "state": "configuring", 00:18:06.133 "raid_level": "raid0", 00:18:06.133 "superblock": true, 00:18:06.133 "num_base_bdevs": 3, 00:18:06.133 "num_base_bdevs_discovered": 2, 00:18:06.133 "num_base_bdevs_operational": 3, 00:18:06.133 "base_bdevs_list": [ 00:18:06.133 { 00:18:06.133 "name": null, 00:18:06.133 "uuid": "05b105f7-72b4-499f-b41f-972a98e24b65", 00:18:06.133 "is_configured": false, 00:18:06.133 "data_offset": 2048, 00:18:06.133 "data_size": 63488 00:18:06.133 }, 00:18:06.133 { 00:18:06.133 "name": "BaseBdev2", 00:18:06.133 "uuid": "4660f42c-6b42-4d12-a4cf-7406e769ecb4", 00:18:06.133 "is_configured": true, 00:18:06.133 "data_offset": 2048, 00:18:06.133 "data_size": 63488 00:18:06.133 }, 00:18:06.133 { 00:18:06.133 "name": "BaseBdev3", 00:18:06.133 "uuid": "39397154-8091-4758-881a-a008369e6a5d", 00:18:06.133 "is_configured": true, 00:18:06.133 "data_offset": 2048, 00:18:06.133 "data_size": 63488 00:18:06.133 } 00:18:06.133 ] 00:18:06.133 }' 00:18:06.133 07:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:06.133 07:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.699 07:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.699 07:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:06.957 07:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:06.957 07:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.957 07:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:07.214 07:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 05b105f7-72b4-499f-b41f-972a98e24b65 00:18:07.472 [2024-07-12 07:28:41.252585] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:07.472 [2024-07-12 07:28:41.253089] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:18:07.472 [2024-07-12 07:28:41.253226] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:07.472 [2024-07-12 07:28:41.253376] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:18:07.472 [2024-07-12 07:28:41.253879] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:18:07.472 [2024-07-12 07:28:41.253920] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:18:07.472 [2024-07-12 07:28:41.254128] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.472 NewBaseBdev 00:18:07.472 07:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:07.472 07:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:18:07.472 07:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:07.472 07:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:07.472 07:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:07.472 07:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:07.472 07:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:07.731 07:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:07.990 [ 00:18:07.990 { 00:18:07.990 "name": "NewBaseBdev", 00:18:07.990 "aliases": [ 00:18:07.990 "05b105f7-72b4-499f-b41f-972a98e24b65" 00:18:07.990 ], 00:18:07.990 "product_name": "Malloc disk", 00:18:07.990 "block_size": 512, 00:18:07.990 "num_blocks": 65536, 00:18:07.990 "uuid": "05b105f7-72b4-499f-b41f-972a98e24b65", 00:18:07.990 "assigned_rate_limits": { 00:18:07.990 "rw_ios_per_sec": 0, 00:18:07.990 "rw_mbytes_per_sec": 0, 00:18:07.990 "r_mbytes_per_sec": 0, 00:18:07.990 "w_mbytes_per_sec": 0 00:18:07.990 }, 00:18:07.990 "claimed": true, 00:18:07.990 "claim_type": "exclusive_write", 00:18:07.990 "zoned": false, 00:18:07.990 "supported_io_types": { 00:18:07.990 "read": true, 00:18:07.990 "write": true, 00:18:07.990 "unmap": true, 00:18:07.990 "write_zeroes": true, 00:18:07.990 "flush": true, 00:18:07.990 "reset": true, 00:18:07.990 "compare": false, 00:18:07.990 "compare_and_write": false, 00:18:07.990 "abort": true, 00:18:07.990 "nvme_admin": false, 00:18:07.990 "nvme_io": false 00:18:07.990 }, 00:18:07.990 "memory_domains": [ 00:18:07.990 { 00:18:07.990 "dma_device_id": "system", 00:18:07.990 "dma_device_type": 1 00:18:07.990 }, 00:18:07.990 { 00:18:07.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.990 "dma_device_type": 2 00:18:07.990 } 00:18:07.990 ], 00:18:07.990 "driver_specific": {} 00:18:07.990 } 00:18:07.990 ] 00:18:07.990 07:28:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:07.990 07:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:18:07.990 07:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:07.990 07:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:07.990 07:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:07.990 07:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:07.990 07:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:07.990 07:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:07.990 07:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:07.990 07:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:07.990 07:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:07.990 07:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.990 07:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.248 07:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:08.248 "name": "Existed_Raid", 00:18:08.248 "uuid": "810c8e12-a656-411a-87e7-79317952dc1c", 00:18:08.248 "strip_size_kb": 64, 00:18:08.248 "state": "online", 00:18:08.248 "raid_level": "raid0", 00:18:08.248 "superblock": true, 00:18:08.248 "num_base_bdevs": 3, 00:18:08.248 "num_base_bdevs_discovered": 3, 00:18:08.248 "num_base_bdevs_operational": 3, 00:18:08.248 "base_bdevs_list": [ 00:18:08.248 { 00:18:08.248 "name": "NewBaseBdev", 00:18:08.248 "uuid": "05b105f7-72b4-499f-b41f-972a98e24b65", 00:18:08.248 "is_configured": true, 00:18:08.248 "data_offset": 2048, 00:18:08.248 "data_size": 63488 00:18:08.248 }, 00:18:08.248 { 00:18:08.248 "name": "BaseBdev2", 00:18:08.248 "uuid": "4660f42c-6b42-4d12-a4cf-7406e769ecb4", 00:18:08.248 "is_configured": true, 00:18:08.248 "data_offset": 2048, 00:18:08.248 "data_size": 63488 00:18:08.248 }, 00:18:08.248 { 00:18:08.248 "name": "BaseBdev3", 00:18:08.248 "uuid": "39397154-8091-4758-881a-a008369e6a5d", 00:18:08.248 "is_configured": true, 00:18:08.248 "data_offset": 2048, 00:18:08.248 "data_size": 63488 00:18:08.248 } 00:18:08.248 ] 00:18:08.248 }' 00:18:08.248 07:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:08.248 07:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.815 07:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:08.815 07:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:08.815 07:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:08.815 07:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:08.815 07:28:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:08.815 07:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:08.815 07:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:08.815 07:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:09.074 [2024-07-12 07:28:42.798137] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.074 07:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:09.074 "name": "Existed_Raid", 00:18:09.074 "aliases": [ 00:18:09.074 "810c8e12-a656-411a-87e7-79317952dc1c" 00:18:09.074 ], 00:18:09.074 "product_name": "Raid Volume", 00:18:09.074 "block_size": 512, 00:18:09.074 "num_blocks": 190464, 00:18:09.074 "uuid": "810c8e12-a656-411a-87e7-79317952dc1c", 00:18:09.074 "assigned_rate_limits": { 00:18:09.074 "rw_ios_per_sec": 0, 00:18:09.074 "rw_mbytes_per_sec": 0, 00:18:09.074 "r_mbytes_per_sec": 0, 00:18:09.074 "w_mbytes_per_sec": 0 00:18:09.074 }, 00:18:09.074 "claimed": false, 00:18:09.074 "zoned": false, 00:18:09.074 "supported_io_types": { 00:18:09.074 "read": true, 00:18:09.074 "write": true, 00:18:09.074 "unmap": true, 00:18:09.074 "write_zeroes": true, 00:18:09.074 "flush": true, 00:18:09.074 "reset": true, 00:18:09.074 "compare": false, 00:18:09.074 "compare_and_write": false, 00:18:09.074 "abort": false, 00:18:09.074 "nvme_admin": false, 00:18:09.074 "nvme_io": false 00:18:09.074 }, 00:18:09.074 "memory_domains": [ 00:18:09.074 { 00:18:09.074 "dma_device_id": "system", 00:18:09.074 "dma_device_type": 1 00:18:09.074 }, 00:18:09.074 { 00:18:09.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.074 "dma_device_type": 2 00:18:09.074 }, 00:18:09.074 { 00:18:09.074 "dma_device_id": "system", 00:18:09.074 "dma_device_type": 1 00:18:09.074 }, 00:18:09.074 { 00:18:09.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.074 "dma_device_type": 2 00:18:09.074 }, 00:18:09.074 { 00:18:09.074 "dma_device_id": "system", 00:18:09.074 "dma_device_type": 1 00:18:09.074 }, 00:18:09.074 { 00:18:09.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.074 "dma_device_type": 2 00:18:09.074 } 00:18:09.074 ], 00:18:09.074 "driver_specific": { 00:18:09.074 "raid": { 00:18:09.074 "uuid": "810c8e12-a656-411a-87e7-79317952dc1c", 00:18:09.074 "strip_size_kb": 64, 00:18:09.074 "state": "online", 00:18:09.074 "raid_level": "raid0", 00:18:09.074 "superblock": true, 00:18:09.074 "num_base_bdevs": 3, 00:18:09.074 "num_base_bdevs_discovered": 3, 00:18:09.074 "num_base_bdevs_operational": 3, 00:18:09.074 "base_bdevs_list": [ 00:18:09.074 { 00:18:09.074 "name": "NewBaseBdev", 00:18:09.074 "uuid": "05b105f7-72b4-499f-b41f-972a98e24b65", 00:18:09.074 "is_configured": true, 00:18:09.074 "data_offset": 2048, 00:18:09.074 "data_size": 63488 00:18:09.074 }, 00:18:09.074 { 00:18:09.074 "name": "BaseBdev2", 00:18:09.074 "uuid": "4660f42c-6b42-4d12-a4cf-7406e769ecb4", 00:18:09.074 "is_configured": true, 00:18:09.074 "data_offset": 2048, 00:18:09.074 "data_size": 63488 00:18:09.074 }, 00:18:09.074 { 00:18:09.074 "name": "BaseBdev3", 00:18:09.074 "uuid": "39397154-8091-4758-881a-a008369e6a5d", 00:18:09.074 "is_configured": true, 00:18:09.074 "data_offset": 2048, 00:18:09.074 "data_size": 63488 00:18:09.074 } 00:18:09.074 ] 00:18:09.074 } 00:18:09.074 } 00:18:09.074 }' 00:18:09.074 07:28:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:09.074 07:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:09.074 BaseBdev2 00:18:09.074 BaseBdev3' 00:18:09.074 07:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:09.074 07:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:09.074 07:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:09.333 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:09.333 "name": "NewBaseBdev", 00:18:09.333 "aliases": [ 00:18:09.333 "05b105f7-72b4-499f-b41f-972a98e24b65" 00:18:09.333 ], 00:18:09.333 "product_name": "Malloc disk", 00:18:09.333 "block_size": 512, 00:18:09.333 "num_blocks": 65536, 00:18:09.333 "uuid": "05b105f7-72b4-499f-b41f-972a98e24b65", 00:18:09.333 "assigned_rate_limits": { 00:18:09.333 "rw_ios_per_sec": 0, 00:18:09.333 "rw_mbytes_per_sec": 0, 00:18:09.333 "r_mbytes_per_sec": 0, 00:18:09.333 "w_mbytes_per_sec": 0 00:18:09.333 }, 00:18:09.333 "claimed": true, 00:18:09.333 "claim_type": "exclusive_write", 00:18:09.333 "zoned": false, 00:18:09.333 "supported_io_types": { 00:18:09.333 "read": true, 00:18:09.333 "write": true, 00:18:09.333 "unmap": true, 00:18:09.333 "write_zeroes": true, 00:18:09.333 "flush": true, 00:18:09.333 "reset": true, 00:18:09.333 "compare": false, 00:18:09.333 "compare_and_write": false, 00:18:09.333 "abort": true, 00:18:09.333 "nvme_admin": false, 00:18:09.333 "nvme_io": false 00:18:09.333 }, 00:18:09.333 "memory_domains": [ 00:18:09.333 { 00:18:09.333 "dma_device_id": "system", 00:18:09.333 "dma_device_type": 1 00:18:09.333 }, 00:18:09.333 { 00:18:09.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.333 "dma_device_type": 2 00:18:09.333 } 00:18:09.333 ], 00:18:09.333 "driver_specific": {} 00:18:09.333 }' 00:18:09.333 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:09.333 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:09.624 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:09.624 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:09.624 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:09.625 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:09.625 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:09.625 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:09.625 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:09.625 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:09.625 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:09.625 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:09.625 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 
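The block above is the property check that verify_raid_bdev_properties repeats for every configured base bdev: dump the raid volume once, pull the configured base bdev names out with jq, then query each bdev individually and compare a handful of fields. A minimal sketch of that loop, assuming only the rpc.py invocations and jq filters that appear verbatim in this trace (variable names are illustrative; the doubled jq lines in the trace suggest the harness extracts both sides of each [[ ]] comparison from RPC output, whereas the expected values are written as literals here):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Dump the raid volume once and pick out its configured base bdevs.
    raid_info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
    names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid_info")
    for name in $names; do
        # Query each base bdev and check the fields the test cares about.
        info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size    <<< "$info") == 512  ]]   # 512-byte data blocks
        [[ $(jq .md_size       <<< "$info") == null ]]   # no separate metadata area
        [[ $(jq .md_interleave <<< "$info") == null ]]   # no interleaved metadata
        [[ $(jq .dif_type      <<< "$info") == null ]]   # no DIF protection
    done

The same four checks run below against BaseBdev2 and BaseBdev3, which is why the jq .block_size / .md_size / .md_interleave / .dif_type sequence repeats once per base bdev in the trace.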
00:18:09.625 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:09.625 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:10.191 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:10.191 "name": "BaseBdev2", 00:18:10.191 "aliases": [ 00:18:10.191 "4660f42c-6b42-4d12-a4cf-7406e769ecb4" 00:18:10.191 ], 00:18:10.191 "product_name": "Malloc disk", 00:18:10.191 "block_size": 512, 00:18:10.191 "num_blocks": 65536, 00:18:10.191 "uuid": "4660f42c-6b42-4d12-a4cf-7406e769ecb4", 00:18:10.191 "assigned_rate_limits": { 00:18:10.191 "rw_ios_per_sec": 0, 00:18:10.191 "rw_mbytes_per_sec": 0, 00:18:10.191 "r_mbytes_per_sec": 0, 00:18:10.191 "w_mbytes_per_sec": 0 00:18:10.191 }, 00:18:10.191 "claimed": true, 00:18:10.191 "claim_type": "exclusive_write", 00:18:10.191 "zoned": false, 00:18:10.191 "supported_io_types": { 00:18:10.191 "read": true, 00:18:10.191 "write": true, 00:18:10.191 "unmap": true, 00:18:10.191 "write_zeroes": true, 00:18:10.191 "flush": true, 00:18:10.191 "reset": true, 00:18:10.191 "compare": false, 00:18:10.191 "compare_and_write": false, 00:18:10.191 "abort": true, 00:18:10.191 "nvme_admin": false, 00:18:10.191 "nvme_io": false 00:18:10.191 }, 00:18:10.191 "memory_domains": [ 00:18:10.191 { 00:18:10.191 "dma_device_id": "system", 00:18:10.191 "dma_device_type": 1 00:18:10.191 }, 00:18:10.191 { 00:18:10.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.191 "dma_device_type": 2 00:18:10.191 } 00:18:10.191 ], 00:18:10.191 "driver_specific": {} 00:18:10.191 }' 00:18:10.191 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:10.191 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:10.191 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:10.191 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:10.191 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:10.191 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:10.191 07:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:10.191 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:10.191 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:10.191 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:10.449 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:10.449 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:10.449 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:10.449 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:10.449 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:10.708 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:10.708 "name": "BaseBdev3", 00:18:10.708 "aliases": 
[ 00:18:10.708 "39397154-8091-4758-881a-a008369e6a5d" 00:18:10.708 ], 00:18:10.708 "product_name": "Malloc disk", 00:18:10.708 "block_size": 512, 00:18:10.708 "num_blocks": 65536, 00:18:10.708 "uuid": "39397154-8091-4758-881a-a008369e6a5d", 00:18:10.708 "assigned_rate_limits": { 00:18:10.708 "rw_ios_per_sec": 0, 00:18:10.708 "rw_mbytes_per_sec": 0, 00:18:10.708 "r_mbytes_per_sec": 0, 00:18:10.708 "w_mbytes_per_sec": 0 00:18:10.708 }, 00:18:10.708 "claimed": true, 00:18:10.708 "claim_type": "exclusive_write", 00:18:10.708 "zoned": false, 00:18:10.708 "supported_io_types": { 00:18:10.708 "read": true, 00:18:10.708 "write": true, 00:18:10.708 "unmap": true, 00:18:10.708 "write_zeroes": true, 00:18:10.708 "flush": true, 00:18:10.708 "reset": true, 00:18:10.708 "compare": false, 00:18:10.708 "compare_and_write": false, 00:18:10.708 "abort": true, 00:18:10.708 "nvme_admin": false, 00:18:10.708 "nvme_io": false 00:18:10.708 }, 00:18:10.708 "memory_domains": [ 00:18:10.708 { 00:18:10.708 "dma_device_id": "system", 00:18:10.708 "dma_device_type": 1 00:18:10.708 }, 00:18:10.708 { 00:18:10.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.708 "dma_device_type": 2 00:18:10.708 } 00:18:10.708 ], 00:18:10.708 "driver_specific": {} 00:18:10.708 }' 00:18:10.708 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:10.708 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:10.708 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:10.708 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:10.708 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:10.708 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:10.708 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:10.708 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:10.967 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:10.967 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:10.967 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:10.967 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:10.967 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:11.226 [2024-07-12 07:28:44.882044] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:11.226 [2024-07-12 07:28:44.882329] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:11.226 [2024-07-12 07:28:44.882522] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.226 [2024-07-12 07:28:44.882622] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:11.226 [2024-07-12 07:28:44.882768] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:18:11.226 07:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 136532 00:18:11.226 07:28:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 136532 ']' 00:18:11.226 07:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 136532 00:18:11.226 07:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:18:11.226 07:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:11.226 07:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 136532 00:18:11.226 07:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:11.226 07:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:11.226 07:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 136532' 00:18:11.226 killing process with pid 136532 00:18:11.226 07:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 136532 00:18:11.226 07:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 136532 00:18:11.226 [2024-07-12 07:28:44.933572] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:11.226 [2024-07-12 07:28:44.991207] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:11.796 07:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:11.796 00:18:11.796 real 0m28.431s 00:18:11.796 user 0m51.828s 00:18:11.796 sys 0m5.107s 00:18:11.796 07:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:11.796 07:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.796 ************************************ 00:18:11.796 END TEST raid_state_function_test_sb 00:18:11.796 ************************************ 00:18:11.796 07:28:45 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:18:11.796 07:28:45 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:18:11.796 07:28:45 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:11.796 07:28:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:11.796 ************************************ 00:18:11.796 START TEST raid_superblock_test 00:18:11.796 ************************************ 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 3 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local 
raid_bdev_name=raid_bdev1 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=137504 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 137504 /var/tmp/spdk-raid.sock 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 137504 ']' 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:11.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:11.796 07:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.796 [2024-07-12 07:28:45.524831] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
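What follows is the SPDK application starting up for raid_superblock_test: the harness launches bdev_svc with raid debug logging on a private RPC socket, waits for it to listen, then drives everything through rpc.py, building a malloc / passthru / raid0 stack. A condensed sketch of that setup, using only commands and arguments printed in this trace (waitforlisten is the autotest_common.sh helper traced above; the loop condenses the three per-bdev blocks that the trace unrolls via (( i <= num_base_bdevs ))):

    rpc_sock=/var/tmp/spdk-raid.sock
    # Start the bdev service app with raid debug logging (-L bdev_raid).
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r $rpc_sock -L bdev_raid &
    raid_pid=$!
    waitforlisten $raid_pid $rpc_sock
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $rpc_sock"
    for i in 1 2 3; do
        # 32 MiB malloc bdev (65536 x 512-byte blocks), wrapped in a passthru
        # bdev so the test can later detach it with bdev_passthru_delete.
        $rpc bdev_malloc_create 32 512 -b malloc$i
        $rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done
    # Assemble the pt bdevs into a raid0 volume with a 64 KiB strip (-z 64)
    # and an on-disk superblock (-s), the feature this test exercises.
    $rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s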
00:18:11.796 [2024-07-12 07:28:45.525308] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137504 ] 00:18:11.796 [2024-07-12 07:28:45.674824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.056 [2024-07-12 07:28:45.768939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.056 [2024-07-12 07:28:45.855251] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.622 07:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:12.622 07:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:18:12.622 07:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:12.622 07:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:12.622 07:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:12.622 07:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:12.622 07:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:12.622 07:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:12.622 07:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:12.622 07:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:12.622 07:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:12.881 malloc1 00:18:12.881 07:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:13.140 [2024-07-12 07:28:46.935712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:13.140 [2024-07-12 07:28:46.936077] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.140 [2024-07-12 07:28:46.936160] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:18:13.140 [2024-07-12 07:28:46.936443] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.140 [2024-07-12 07:28:46.939519] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.140 [2024-07-12 07:28:46.939722] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:13.140 pt1 00:18:13.140 07:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:13.140 07:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:13.140 07:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:13.140 07:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:13.140 07:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:13.140 07:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:18:13.140 07:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:13.141 07:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:13.141 07:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:13.400 malloc2 00:18:13.400 07:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:13.658 [2024-07-12 07:28:47.395834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:13.658 [2024-07-12 07:28:47.396126] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.658 [2024-07-12 07:28:47.396203] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:18:13.658 [2024-07-12 07:28:47.396345] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.658 [2024-07-12 07:28:47.399190] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.658 [2024-07-12 07:28:47.399366] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:13.658 pt2 00:18:13.658 07:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:13.658 07:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:13.658 07:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:18:13.658 07:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:18:13.658 07:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:13.658 07:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:13.658 07:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:13.658 07:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:13.658 07:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:13.917 malloc3 00:18:13.917 07:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:14.176 [2024-07-12 07:28:47.854014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:14.176 [2024-07-12 07:28:47.854334] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.176 [2024-07-12 07:28:47.854418] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:14.176 [2024-07-12 07:28:47.854678] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.176 [2024-07-12 07:28:47.857509] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.176 [2024-07-12 07:28:47.857684] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:14.176 pt3 00:18:14.176 07:28:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:14.176 07:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:14.176 07:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:18:14.176 [2024-07-12 07:28:48.054124] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:14.176 [2024-07-12 07:28:48.056930] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:14.176 [2024-07-12 07:28:48.057155] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:14.176 [2024-07-12 07:28:48.057427] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:18:14.176 [2024-07-12 07:28:48.057555] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:14.176 [2024-07-12 07:28:48.057811] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:18:14.176 [2024-07-12 07:28:48.058305] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:18:14.176 [2024-07-12 07:28:48.058414] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:18:14.176 [2024-07-12 07:28:48.058730] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.434 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:14.434 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:14.434 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:14.434 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:14.434 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:14.434 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:14.434 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:14.434 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:14.434 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:14.434 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:14.434 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.434 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.434 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:14.434 "name": "raid_bdev1", 00:18:14.434 "uuid": "210befdb-83e2-4964-be2a-7a410deaa70c", 00:18:14.434 "strip_size_kb": 64, 00:18:14.434 "state": "online", 00:18:14.434 "raid_level": "raid0", 00:18:14.434 "superblock": true, 00:18:14.434 "num_base_bdevs": 3, 00:18:14.434 "num_base_bdevs_discovered": 3, 00:18:14.434 "num_base_bdevs_operational": 3, 00:18:14.434 "base_bdevs_list": [ 00:18:14.434 { 00:18:14.434 "name": "pt1", 00:18:14.434 "uuid": "779b55d3-6c6b-5b94-b085-d84f289146a9", 00:18:14.434 
"is_configured": true, 00:18:14.434 "data_offset": 2048, 00:18:14.434 "data_size": 63488 00:18:14.434 }, 00:18:14.434 { 00:18:14.434 "name": "pt2", 00:18:14.434 "uuid": "9181c9d6-b197-5d33-ac1e-27736b711f16", 00:18:14.434 "is_configured": true, 00:18:14.434 "data_offset": 2048, 00:18:14.434 "data_size": 63488 00:18:14.434 }, 00:18:14.434 { 00:18:14.434 "name": "pt3", 00:18:14.434 "uuid": "13179808-666d-5de9-acc4-e701e814c5df", 00:18:14.434 "is_configured": true, 00:18:14.434 "data_offset": 2048, 00:18:14.434 "data_size": 63488 00:18:14.434 } 00:18:14.434 ] 00:18:14.434 }' 00:18:14.434 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:14.434 07:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.002 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:15.002 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:15.002 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:15.002 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:15.002 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:15.002 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:15.002 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:15.002 07:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:15.261 [2024-07-12 07:28:49.127186] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.519 07:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:15.519 "name": "raid_bdev1", 00:18:15.519 "aliases": [ 00:18:15.519 "210befdb-83e2-4964-be2a-7a410deaa70c" 00:18:15.519 ], 00:18:15.519 "product_name": "Raid Volume", 00:18:15.519 "block_size": 512, 00:18:15.519 "num_blocks": 190464, 00:18:15.519 "uuid": "210befdb-83e2-4964-be2a-7a410deaa70c", 00:18:15.519 "assigned_rate_limits": { 00:18:15.519 "rw_ios_per_sec": 0, 00:18:15.519 "rw_mbytes_per_sec": 0, 00:18:15.519 "r_mbytes_per_sec": 0, 00:18:15.519 "w_mbytes_per_sec": 0 00:18:15.519 }, 00:18:15.519 "claimed": false, 00:18:15.519 "zoned": false, 00:18:15.519 "supported_io_types": { 00:18:15.520 "read": true, 00:18:15.520 "write": true, 00:18:15.520 "unmap": true, 00:18:15.520 "write_zeroes": true, 00:18:15.520 "flush": true, 00:18:15.520 "reset": true, 00:18:15.520 "compare": false, 00:18:15.520 "compare_and_write": false, 00:18:15.520 "abort": false, 00:18:15.520 "nvme_admin": false, 00:18:15.520 "nvme_io": false 00:18:15.520 }, 00:18:15.520 "memory_domains": [ 00:18:15.520 { 00:18:15.520 "dma_device_id": "system", 00:18:15.520 "dma_device_type": 1 00:18:15.520 }, 00:18:15.520 { 00:18:15.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.520 "dma_device_type": 2 00:18:15.520 }, 00:18:15.520 { 00:18:15.520 "dma_device_id": "system", 00:18:15.520 "dma_device_type": 1 00:18:15.520 }, 00:18:15.520 { 00:18:15.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.520 "dma_device_type": 2 00:18:15.520 }, 00:18:15.520 { 00:18:15.520 "dma_device_id": "system", 00:18:15.520 "dma_device_type": 1 00:18:15.520 }, 00:18:15.520 { 00:18:15.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.520 "dma_device_type": 
2 00:18:15.520 } 00:18:15.520 ], 00:18:15.520 "driver_specific": { 00:18:15.520 "raid": { 00:18:15.520 "uuid": "210befdb-83e2-4964-be2a-7a410deaa70c", 00:18:15.520 "strip_size_kb": 64, 00:18:15.520 "state": "online", 00:18:15.520 "raid_level": "raid0", 00:18:15.520 "superblock": true, 00:18:15.520 "num_base_bdevs": 3, 00:18:15.520 "num_base_bdevs_discovered": 3, 00:18:15.520 "num_base_bdevs_operational": 3, 00:18:15.520 "base_bdevs_list": [ 00:18:15.520 { 00:18:15.520 "name": "pt1", 00:18:15.520 "uuid": "779b55d3-6c6b-5b94-b085-d84f289146a9", 00:18:15.520 "is_configured": true, 00:18:15.520 "data_offset": 2048, 00:18:15.520 "data_size": 63488 00:18:15.520 }, 00:18:15.520 { 00:18:15.520 "name": "pt2", 00:18:15.520 "uuid": "9181c9d6-b197-5d33-ac1e-27736b711f16", 00:18:15.520 "is_configured": true, 00:18:15.520 "data_offset": 2048, 00:18:15.520 "data_size": 63488 00:18:15.520 }, 00:18:15.520 { 00:18:15.520 "name": "pt3", 00:18:15.520 "uuid": "13179808-666d-5de9-acc4-e701e814c5df", 00:18:15.520 "is_configured": true, 00:18:15.520 "data_offset": 2048, 00:18:15.520 "data_size": 63488 00:18:15.520 } 00:18:15.520 ] 00:18:15.520 } 00:18:15.520 } 00:18:15.520 }' 00:18:15.520 07:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:15.520 07:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:15.520 pt2 00:18:15.520 pt3' 00:18:15.520 07:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:15.520 07:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:15.520 07:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:15.778 07:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:15.778 "name": "pt1", 00:18:15.778 "aliases": [ 00:18:15.778 "779b55d3-6c6b-5b94-b085-d84f289146a9" 00:18:15.778 ], 00:18:15.778 "product_name": "passthru", 00:18:15.778 "block_size": 512, 00:18:15.778 "num_blocks": 65536, 00:18:15.778 "uuid": "779b55d3-6c6b-5b94-b085-d84f289146a9", 00:18:15.778 "assigned_rate_limits": { 00:18:15.778 "rw_ios_per_sec": 0, 00:18:15.778 "rw_mbytes_per_sec": 0, 00:18:15.778 "r_mbytes_per_sec": 0, 00:18:15.778 "w_mbytes_per_sec": 0 00:18:15.778 }, 00:18:15.778 "claimed": true, 00:18:15.778 "claim_type": "exclusive_write", 00:18:15.778 "zoned": false, 00:18:15.778 "supported_io_types": { 00:18:15.778 "read": true, 00:18:15.778 "write": true, 00:18:15.778 "unmap": true, 00:18:15.778 "write_zeroes": true, 00:18:15.778 "flush": true, 00:18:15.778 "reset": true, 00:18:15.778 "compare": false, 00:18:15.778 "compare_and_write": false, 00:18:15.778 "abort": true, 00:18:15.778 "nvme_admin": false, 00:18:15.778 "nvme_io": false 00:18:15.778 }, 00:18:15.778 "memory_domains": [ 00:18:15.778 { 00:18:15.778 "dma_device_id": "system", 00:18:15.778 "dma_device_type": 1 00:18:15.778 }, 00:18:15.778 { 00:18:15.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.778 "dma_device_type": 2 00:18:15.778 } 00:18:15.778 ], 00:18:15.778 "driver_specific": { 00:18:15.778 "passthru": { 00:18:15.778 "name": "pt1", 00:18:15.778 "base_bdev_name": "malloc1" 00:18:15.778 } 00:18:15.778 } 00:18:15.778 }' 00:18:15.778 07:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.778 07:28:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.778 07:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:15.778 07:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.778 07:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.778 07:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:15.778 07:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.778 07:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:16.036 07:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:16.036 07:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.036 07:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.036 07:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:16.036 07:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:16.036 07:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:16.036 07:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:16.294 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:16.294 "name": "pt2", 00:18:16.294 "aliases": [ 00:18:16.294 "9181c9d6-b197-5d33-ac1e-27736b711f16" 00:18:16.294 ], 00:18:16.294 "product_name": "passthru", 00:18:16.294 "block_size": 512, 00:18:16.294 "num_blocks": 65536, 00:18:16.294 "uuid": "9181c9d6-b197-5d33-ac1e-27736b711f16", 00:18:16.294 "assigned_rate_limits": { 00:18:16.294 "rw_ios_per_sec": 0, 00:18:16.294 "rw_mbytes_per_sec": 0, 00:18:16.294 "r_mbytes_per_sec": 0, 00:18:16.295 "w_mbytes_per_sec": 0 00:18:16.295 }, 00:18:16.295 "claimed": true, 00:18:16.295 "claim_type": "exclusive_write", 00:18:16.295 "zoned": false, 00:18:16.295 "supported_io_types": { 00:18:16.295 "read": true, 00:18:16.295 "write": true, 00:18:16.295 "unmap": true, 00:18:16.295 "write_zeroes": true, 00:18:16.295 "flush": true, 00:18:16.295 "reset": true, 00:18:16.295 "compare": false, 00:18:16.295 "compare_and_write": false, 00:18:16.295 "abort": true, 00:18:16.295 "nvme_admin": false, 00:18:16.295 "nvme_io": false 00:18:16.295 }, 00:18:16.295 "memory_domains": [ 00:18:16.295 { 00:18:16.295 "dma_device_id": "system", 00:18:16.295 "dma_device_type": 1 00:18:16.295 }, 00:18:16.295 { 00:18:16.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.295 "dma_device_type": 2 00:18:16.295 } 00:18:16.295 ], 00:18:16.295 "driver_specific": { 00:18:16.295 "passthru": { 00:18:16.295 "name": "pt2", 00:18:16.295 "base_bdev_name": "malloc2" 00:18:16.295 } 00:18:16.295 } 00:18:16.295 }' 00:18:16.295 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:16.295 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:16.295 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:16.295 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:16.553 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:16.553 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:16.553 07:28:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:16.553 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:16.553 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:16.553 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.553 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.553 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:16.553 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:16.553 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:16.553 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:16.811 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:16.811 "name": "pt3", 00:18:16.811 "aliases": [ 00:18:16.811 "13179808-666d-5de9-acc4-e701e814c5df" 00:18:16.811 ], 00:18:16.812 "product_name": "passthru", 00:18:16.812 "block_size": 512, 00:18:16.812 "num_blocks": 65536, 00:18:16.812 "uuid": "13179808-666d-5de9-acc4-e701e814c5df", 00:18:16.812 "assigned_rate_limits": { 00:18:16.812 "rw_ios_per_sec": 0, 00:18:16.812 "rw_mbytes_per_sec": 0, 00:18:16.812 "r_mbytes_per_sec": 0, 00:18:16.812 "w_mbytes_per_sec": 0 00:18:16.812 }, 00:18:16.812 "claimed": true, 00:18:16.812 "claim_type": "exclusive_write", 00:18:16.812 "zoned": false, 00:18:16.812 "supported_io_types": { 00:18:16.812 "read": true, 00:18:16.812 "write": true, 00:18:16.812 "unmap": true, 00:18:16.812 "write_zeroes": true, 00:18:16.812 "flush": true, 00:18:16.812 "reset": true, 00:18:16.812 "compare": false, 00:18:16.812 "compare_and_write": false, 00:18:16.812 "abort": true, 00:18:16.812 "nvme_admin": false, 00:18:16.812 "nvme_io": false 00:18:16.812 }, 00:18:16.812 "memory_domains": [ 00:18:16.812 { 00:18:16.812 "dma_device_id": "system", 00:18:16.812 "dma_device_type": 1 00:18:16.812 }, 00:18:16.812 { 00:18:16.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.812 "dma_device_type": 2 00:18:16.812 } 00:18:16.812 ], 00:18:16.812 "driver_specific": { 00:18:16.812 "passthru": { 00:18:16.812 "name": "pt3", 00:18:16.812 "base_bdev_name": "malloc3" 00:18:16.812 } 00:18:16.812 } 00:18:16.812 }' 00:18:16.812 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:17.070 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:17.070 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:17.070 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:17.070 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:17.070 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:17.070 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:17.070 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:17.070 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:17.070 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:17.070 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:18:17.328 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:17.328 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:17.328 07:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:17.328 [2024-07-12 07:28:51.171572] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:17.328 07:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=210befdb-83e2-4964-be2a-7a410deaa70c 00:18:17.328 07:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 210befdb-83e2-4964-be2a-7a410deaa70c ']' 00:18:17.328 07:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:17.586 [2024-07-12 07:28:51.367484] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:17.586 [2024-07-12 07:28:51.367667] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:17.586 [2024-07-12 07:28:51.367964] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.586 [2024-07-12 07:28:51.368112] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.586 [2024-07-12 07:28:51.368188] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:18:17.586 07:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:17.586 07:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.843 07:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:17.843 07:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:17.843 07:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:17.843 07:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:18.101 07:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:18.101 07:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:18.359 07:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:18.359 07:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:18.617 07:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:18.617 07:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:18.874 07:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:18.874 07:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:18.874 07:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:18:18.874 07:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:18.874 07:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:18.874 07:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:18.874 07:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:18.874 07:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:18.874 07:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:18.874 07:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:18.874 07:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:18.874 07:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:18.874 07:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:19.131 [2024-07-12 07:28:52.757889] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:19.131 [2024-07-12 07:28:52.760692] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:19.131 [2024-07-12 07:28:52.760894] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:19.131 [2024-07-12 07:28:52.760988] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:19.131 [2024-07-12 07:28:52.761802] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:19.131 [2024-07-12 07:28:52.762085] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:19.131 [2024-07-12 07:28:52.762350] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:19.131 [2024-07-12 07:28:52.762466] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:18:19.131 request: 00:18:19.131 { 00:18:19.131 "name": "raid_bdev1", 00:18:19.131 "raid_level": "raid0", 00:18:19.131 "base_bdevs": [ 00:18:19.131 "malloc1", 00:18:19.131 "malloc2", 00:18:19.131 "malloc3" 00:18:19.131 ], 00:18:19.131 "superblock": false, 00:18:19.131 "strip_size_kb": 64, 00:18:19.131 "method": "bdev_raid_create", 00:18:19.131 "req_id": 1 00:18:19.131 } 00:18:19.131 Got JSON-RPC error response 00:18:19.131 response: 00:18:19.131 { 00:18:19.131 "code": -17, 00:18:19.131 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:19.131 } 00:18:19.131 07:28:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@651 -- # es=1 00:18:19.131 07:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:19.131 07:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:19.131 07:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:19.131 07:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.131 07:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:19.131 07:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:19.131 07:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:18:19.131 07:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:19.388 [2024-07-12 07:28:53.182904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:19.388 [2024-07-12 07:28:53.183518] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.388 [2024-07-12 07:28:53.183790] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:19.388 [2024-07-12 07:28:53.184001] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.388 [2024-07-12 07:28:53.187058] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.388 [2024-07-12 07:28:53.187322] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:19.388 [2024-07-12 07:28:53.187647] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:19.388 [2024-07-12 07:28:53.187820] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:19.388 pt1 00:18:19.388 07:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:18:19.388 07:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:19.388 07:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:19.388 07:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:19.388 07:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:19.388 07:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:19.388 07:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:19.388 07:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:19.388 07:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:19.388 07:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:19.389 07:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.389 07:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.646 07:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:18:19.646 "name": "raid_bdev1", 00:18:19.646 "uuid": "210befdb-83e2-4964-be2a-7a410deaa70c", 00:18:19.646 "strip_size_kb": 64, 00:18:19.646 "state": "configuring", 00:18:19.646 "raid_level": "raid0", 00:18:19.646 "superblock": true, 00:18:19.646 "num_base_bdevs": 3, 00:18:19.646 "num_base_bdevs_discovered": 1, 00:18:19.646 "num_base_bdevs_operational": 3, 00:18:19.646 "base_bdevs_list": [ 00:18:19.646 { 00:18:19.646 "name": "pt1", 00:18:19.646 "uuid": "779b55d3-6c6b-5b94-b085-d84f289146a9", 00:18:19.646 "is_configured": true, 00:18:19.646 "data_offset": 2048, 00:18:19.646 "data_size": 63488 00:18:19.646 }, 00:18:19.646 { 00:18:19.646 "name": null, 00:18:19.646 "uuid": "9181c9d6-b197-5d33-ac1e-27736b711f16", 00:18:19.646 "is_configured": false, 00:18:19.646 "data_offset": 2048, 00:18:19.646 "data_size": 63488 00:18:19.646 }, 00:18:19.646 { 00:18:19.646 "name": null, 00:18:19.646 "uuid": "13179808-666d-5de9-acc4-e701e814c5df", 00:18:19.646 "is_configured": false, 00:18:19.646 "data_offset": 2048, 00:18:19.646 "data_size": 63488 00:18:19.646 } 00:18:19.646 ] 00:18:19.646 }' 00:18:19.646 07:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:19.646 07:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.211 07:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:18:20.211 07:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:20.468 [2024-07-12 07:28:54.236049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:20.468 [2024-07-12 07:28:54.236907] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.468 [2024-07-12 07:28:54.237217] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:18:20.468 [2024-07-12 07:28:54.237476] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.468 [2024-07-12 07:28:54.238188] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.468 [2024-07-12 07:28:54.238430] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:20.468 [2024-07-12 07:28:54.238779] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:20.468 [2024-07-12 07:28:54.238934] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:20.468 pt2 00:18:20.468 07:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:20.730 [2024-07-12 07:28:54.520146] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:20.730 07:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:18:20.730 07:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:20.730 07:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:20.730 07:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:20.730 07:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:20.730 07:28:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:20.730 07:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:20.730 07:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:20.730 07:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:20.730 07:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:20.730 07:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.730 07:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.987 07:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:20.987 "name": "raid_bdev1", 00:18:20.987 "uuid": "210befdb-83e2-4964-be2a-7a410deaa70c", 00:18:20.988 "strip_size_kb": 64, 00:18:20.988 "state": "configuring", 00:18:20.988 "raid_level": "raid0", 00:18:20.988 "superblock": true, 00:18:20.988 "num_base_bdevs": 3, 00:18:20.988 "num_base_bdevs_discovered": 1, 00:18:20.988 "num_base_bdevs_operational": 3, 00:18:20.988 "base_bdevs_list": [ 00:18:20.988 { 00:18:20.988 "name": "pt1", 00:18:20.988 "uuid": "779b55d3-6c6b-5b94-b085-d84f289146a9", 00:18:20.988 "is_configured": true, 00:18:20.988 "data_offset": 2048, 00:18:20.988 "data_size": 63488 00:18:20.988 }, 00:18:20.988 { 00:18:20.988 "name": null, 00:18:20.988 "uuid": "9181c9d6-b197-5d33-ac1e-27736b711f16", 00:18:20.988 "is_configured": false, 00:18:20.988 "data_offset": 2048, 00:18:20.988 "data_size": 63488 00:18:20.988 }, 00:18:20.988 { 00:18:20.988 "name": null, 00:18:20.988 "uuid": "13179808-666d-5de9-acc4-e701e814c5df", 00:18:20.988 "is_configured": false, 00:18:20.988 "data_offset": 2048, 00:18:20.988 "data_size": 63488 00:18:20.988 } 00:18:20.988 ] 00:18:20.988 }' 00:18:20.988 07:28:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:20.988 07:28:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.552 07:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:21.552 07:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:21.552 07:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:21.810 [2024-07-12 07:28:55.516267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:21.810 [2024-07-12 07:28:55.517123] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.810 [2024-07-12 07:28:55.517407] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:21.810 [2024-07-12 07:28:55.517661] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.810 [2024-07-12 07:28:55.518362] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.810 [2024-07-12 07:28:55.518610] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:21.810 [2024-07-12 07:28:55.518953] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:21.810 [2024-07-12 07:28:55.519091] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt2 is claimed 00:18:21.810 pt2 00:18:21.810 07:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:21.810 07:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:21.810 07:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:22.069 [2024-07-12 07:28:55.764286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:22.069 [2024-07-12 07:28:55.764940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.069 [2024-07-12 07:28:55.765200] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:22.069 [2024-07-12 07:28:55.765466] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.069 [2024-07-12 07:28:55.766190] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.069 [2024-07-12 07:28:55.766533] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:22.069 [2024-07-12 07:28:55.766914] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:22.069 [2024-07-12 07:28:55.767052] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:22.069 [2024-07-12 07:28:55.767247] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:18:22.069 [2024-07-12 07:28:55.767341] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:22.069 [2024-07-12 07:28:55.767471] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:18:22.069 [2024-07-12 07:28:55.767899] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:18:22.069 [2024-07-12 07:28:55.768009] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:18:22.069 [2024-07-12 07:28:55.768238] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.069 pt3 00:18:22.069 07:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:22.069 07:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:22.069 07:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:22.069 07:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:22.069 07:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:22.069 07:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:22.069 07:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:22.069 07:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:22.069 07:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:22.069 07:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:22.069 07:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:22.069 07:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:22.069 
07:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.069 07:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.327 07:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:22.327 "name": "raid_bdev1", 00:18:22.327 "uuid": "210befdb-83e2-4964-be2a-7a410deaa70c", 00:18:22.327 "strip_size_kb": 64, 00:18:22.327 "state": "online", 00:18:22.327 "raid_level": "raid0", 00:18:22.327 "superblock": true, 00:18:22.327 "num_base_bdevs": 3, 00:18:22.327 "num_base_bdevs_discovered": 3, 00:18:22.327 "num_base_bdevs_operational": 3, 00:18:22.327 "base_bdevs_list": [ 00:18:22.327 { 00:18:22.327 "name": "pt1", 00:18:22.327 "uuid": "779b55d3-6c6b-5b94-b085-d84f289146a9", 00:18:22.327 "is_configured": true, 00:18:22.327 "data_offset": 2048, 00:18:22.327 "data_size": 63488 00:18:22.327 }, 00:18:22.327 { 00:18:22.327 "name": "pt2", 00:18:22.327 "uuid": "9181c9d6-b197-5d33-ac1e-27736b711f16", 00:18:22.327 "is_configured": true, 00:18:22.327 "data_offset": 2048, 00:18:22.327 "data_size": 63488 00:18:22.327 }, 00:18:22.327 { 00:18:22.327 "name": "pt3", 00:18:22.327 "uuid": "13179808-666d-5de9-acc4-e701e814c5df", 00:18:22.327 "is_configured": true, 00:18:22.327 "data_offset": 2048, 00:18:22.327 "data_size": 63488 00:18:22.327 } 00:18:22.327 ] 00:18:22.327 }' 00:18:22.327 07:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:22.327 07:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.893 07:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:22.893 07:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:22.893 07:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:22.893 07:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:22.893 07:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:22.893 07:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:22.893 07:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:22.893 07:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:22.893 [2024-07-12 07:28:56.740841] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.893 07:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:22.893 "name": "raid_bdev1", 00:18:22.893 "aliases": [ 00:18:22.893 "210befdb-83e2-4964-be2a-7a410deaa70c" 00:18:22.893 ], 00:18:22.893 "product_name": "Raid Volume", 00:18:22.893 "block_size": 512, 00:18:22.893 "num_blocks": 190464, 00:18:22.893 "uuid": "210befdb-83e2-4964-be2a-7a410deaa70c", 00:18:22.893 "assigned_rate_limits": { 00:18:22.893 "rw_ios_per_sec": 0, 00:18:22.893 "rw_mbytes_per_sec": 0, 00:18:22.893 "r_mbytes_per_sec": 0, 00:18:22.893 "w_mbytes_per_sec": 0 00:18:22.893 }, 00:18:22.893 "claimed": false, 00:18:22.893 "zoned": false, 00:18:22.893 "supported_io_types": { 00:18:22.893 "read": true, 00:18:22.893 "write": true, 00:18:22.893 "unmap": true, 00:18:22.893 "write_zeroes": true, 00:18:22.893 "flush": 
true, 00:18:22.893 "reset": true, 00:18:22.893 "compare": false, 00:18:22.893 "compare_and_write": false, 00:18:22.893 "abort": false, 00:18:22.893 "nvme_admin": false, 00:18:22.893 "nvme_io": false 00:18:22.893 }, 00:18:22.893 "memory_domains": [ 00:18:22.893 { 00:18:22.893 "dma_device_id": "system", 00:18:22.893 "dma_device_type": 1 00:18:22.893 }, 00:18:22.893 { 00:18:22.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.893 "dma_device_type": 2 00:18:22.893 }, 00:18:22.893 { 00:18:22.893 "dma_device_id": "system", 00:18:22.893 "dma_device_type": 1 00:18:22.893 }, 00:18:22.893 { 00:18:22.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.893 "dma_device_type": 2 00:18:22.893 }, 00:18:22.893 { 00:18:22.893 "dma_device_id": "system", 00:18:22.893 "dma_device_type": 1 00:18:22.893 }, 00:18:22.893 { 00:18:22.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.893 "dma_device_type": 2 00:18:22.893 } 00:18:22.893 ], 00:18:22.893 "driver_specific": { 00:18:22.893 "raid": { 00:18:22.893 "uuid": "210befdb-83e2-4964-be2a-7a410deaa70c", 00:18:22.893 "strip_size_kb": 64, 00:18:22.893 "state": "online", 00:18:22.893 "raid_level": "raid0", 00:18:22.893 "superblock": true, 00:18:22.893 "num_base_bdevs": 3, 00:18:22.893 "num_base_bdevs_discovered": 3, 00:18:22.893 "num_base_bdevs_operational": 3, 00:18:22.893 "base_bdevs_list": [ 00:18:22.893 { 00:18:22.893 "name": "pt1", 00:18:22.893 "uuid": "779b55d3-6c6b-5b94-b085-d84f289146a9", 00:18:22.893 "is_configured": true, 00:18:22.893 "data_offset": 2048, 00:18:22.893 "data_size": 63488 00:18:22.893 }, 00:18:22.893 { 00:18:22.893 "name": "pt2", 00:18:22.893 "uuid": "9181c9d6-b197-5d33-ac1e-27736b711f16", 00:18:22.893 "is_configured": true, 00:18:22.893 "data_offset": 2048, 00:18:22.893 "data_size": 63488 00:18:22.893 }, 00:18:22.893 { 00:18:22.893 "name": "pt3", 00:18:22.893 "uuid": "13179808-666d-5de9-acc4-e701e814c5df", 00:18:22.893 "is_configured": true, 00:18:22.893 "data_offset": 2048, 00:18:22.893 "data_size": 63488 00:18:22.893 } 00:18:22.893 ] 00:18:22.893 } 00:18:22.893 } 00:18:22.893 }' 00:18:22.893 07:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:23.150 07:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:23.150 pt2 00:18:23.150 pt3' 00:18:23.150 07:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:23.150 07:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:23.150 07:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:23.150 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:23.150 "name": "pt1", 00:18:23.150 "aliases": [ 00:18:23.150 "779b55d3-6c6b-5b94-b085-d84f289146a9" 00:18:23.151 ], 00:18:23.151 "product_name": "passthru", 00:18:23.151 "block_size": 512, 00:18:23.151 "num_blocks": 65536, 00:18:23.151 "uuid": "779b55d3-6c6b-5b94-b085-d84f289146a9", 00:18:23.151 "assigned_rate_limits": { 00:18:23.151 "rw_ios_per_sec": 0, 00:18:23.151 "rw_mbytes_per_sec": 0, 00:18:23.151 "r_mbytes_per_sec": 0, 00:18:23.151 "w_mbytes_per_sec": 0 00:18:23.151 }, 00:18:23.151 "claimed": true, 00:18:23.151 "claim_type": "exclusive_write", 00:18:23.151 "zoned": false, 00:18:23.151 "supported_io_types": { 00:18:23.151 "read": true, 00:18:23.151 "write": true, 
00:18:23.151 "unmap": true, 00:18:23.151 "write_zeroes": true, 00:18:23.151 "flush": true, 00:18:23.151 "reset": true, 00:18:23.151 "compare": false, 00:18:23.151 "compare_and_write": false, 00:18:23.151 "abort": true, 00:18:23.151 "nvme_admin": false, 00:18:23.151 "nvme_io": false 00:18:23.151 }, 00:18:23.151 "memory_domains": [ 00:18:23.151 { 00:18:23.151 "dma_device_id": "system", 00:18:23.151 "dma_device_type": 1 00:18:23.151 }, 00:18:23.151 { 00:18:23.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.151 "dma_device_type": 2 00:18:23.151 } 00:18:23.151 ], 00:18:23.151 "driver_specific": { 00:18:23.151 "passthru": { 00:18:23.151 "name": "pt1", 00:18:23.151 "base_bdev_name": "malloc1" 00:18:23.151 } 00:18:23.151 } 00:18:23.151 }' 00:18:23.151 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:23.408 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:23.408 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:23.408 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:23.408 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:23.408 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:23.408 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:23.408 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:23.408 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:23.408 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:23.665 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:23.665 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:23.665 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:23.665 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:23.665 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:23.922 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:23.922 "name": "pt2", 00:18:23.922 "aliases": [ 00:18:23.922 "9181c9d6-b197-5d33-ac1e-27736b711f16" 00:18:23.922 ], 00:18:23.922 "product_name": "passthru", 00:18:23.922 "block_size": 512, 00:18:23.922 "num_blocks": 65536, 00:18:23.922 "uuid": "9181c9d6-b197-5d33-ac1e-27736b711f16", 00:18:23.922 "assigned_rate_limits": { 00:18:23.922 "rw_ios_per_sec": 0, 00:18:23.922 "rw_mbytes_per_sec": 0, 00:18:23.922 "r_mbytes_per_sec": 0, 00:18:23.922 "w_mbytes_per_sec": 0 00:18:23.922 }, 00:18:23.922 "claimed": true, 00:18:23.922 "claim_type": "exclusive_write", 00:18:23.922 "zoned": false, 00:18:23.922 "supported_io_types": { 00:18:23.922 "read": true, 00:18:23.922 "write": true, 00:18:23.922 "unmap": true, 00:18:23.922 "write_zeroes": true, 00:18:23.922 "flush": true, 00:18:23.922 "reset": true, 00:18:23.922 "compare": false, 00:18:23.922 "compare_and_write": false, 00:18:23.922 "abort": true, 00:18:23.922 "nvme_admin": false, 00:18:23.922 "nvme_io": false 00:18:23.922 }, 00:18:23.923 "memory_domains": [ 00:18:23.923 { 00:18:23.923 "dma_device_id": "system", 00:18:23.923 "dma_device_type": 1 00:18:23.923 }, 00:18:23.923 { 
00:18:23.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.923 "dma_device_type": 2 00:18:23.923 } 00:18:23.923 ], 00:18:23.923 "driver_specific": { 00:18:23.923 "passthru": { 00:18:23.923 "name": "pt2", 00:18:23.923 "base_bdev_name": "malloc2" 00:18:23.923 } 00:18:23.923 } 00:18:23.923 }' 00:18:23.923 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:23.923 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:23.923 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:23.923 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:23.923 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:23.923 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:23.923 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:23.923 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:24.180 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:24.180 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:24.180 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:24.180 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:24.180 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:24.180 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:24.180 07:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:24.438 07:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:24.438 "name": "pt3", 00:18:24.438 "aliases": [ 00:18:24.438 "13179808-666d-5de9-acc4-e701e814c5df" 00:18:24.438 ], 00:18:24.438 "product_name": "passthru", 00:18:24.438 "block_size": 512, 00:18:24.439 "num_blocks": 65536, 00:18:24.439 "uuid": "13179808-666d-5de9-acc4-e701e814c5df", 00:18:24.439 "assigned_rate_limits": { 00:18:24.439 "rw_ios_per_sec": 0, 00:18:24.439 "rw_mbytes_per_sec": 0, 00:18:24.439 "r_mbytes_per_sec": 0, 00:18:24.439 "w_mbytes_per_sec": 0 00:18:24.439 }, 00:18:24.439 "claimed": true, 00:18:24.439 "claim_type": "exclusive_write", 00:18:24.439 "zoned": false, 00:18:24.439 "supported_io_types": { 00:18:24.439 "read": true, 00:18:24.439 "write": true, 00:18:24.439 "unmap": true, 00:18:24.439 "write_zeroes": true, 00:18:24.439 "flush": true, 00:18:24.439 "reset": true, 00:18:24.439 "compare": false, 00:18:24.439 "compare_and_write": false, 00:18:24.439 "abort": true, 00:18:24.439 "nvme_admin": false, 00:18:24.439 "nvme_io": false 00:18:24.439 }, 00:18:24.439 "memory_domains": [ 00:18:24.439 { 00:18:24.439 "dma_device_id": "system", 00:18:24.439 "dma_device_type": 1 00:18:24.439 }, 00:18:24.439 { 00:18:24.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.439 "dma_device_type": 2 00:18:24.439 } 00:18:24.439 ], 00:18:24.439 "driver_specific": { 00:18:24.439 "passthru": { 00:18:24.439 "name": "pt3", 00:18:24.439 "base_bdev_name": "malloc3" 00:18:24.439 } 00:18:24.439 } 00:18:24.439 }' 00:18:24.439 07:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:24.439 07:28:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:24.439 07:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:24.439 07:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:24.439 07:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:24.696 07:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:24.696 07:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:24.696 07:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:24.696 07:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:24.696 07:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:24.696 07:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:24.696 07:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:24.696 07:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:24.696 07:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:18:24.955 [2024-07-12 07:28:58.777182] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:24.955 07:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 210befdb-83e2-4964-be2a-7a410deaa70c '!=' 210befdb-83e2-4964-be2a-7a410deaa70c ']' 00:18:24.955 07:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:18:24.955 07:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:24.955 07:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:24.955 07:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 137504 00:18:24.955 07:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 137504 ']' 00:18:24.955 07:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 137504 00:18:24.955 07:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:18:24.955 07:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:24.955 07:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 137504 00:18:24.955 killing process with pid 137504 00:18:24.955 07:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:24.955 07:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:24.955 07:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 137504' 00:18:24.955 07:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 137504 00:18:24.955 [2024-07-12 07:28:58.830019] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:24.955 07:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 137504 00:18:24.955 [2024-07-12 07:28:58.830128] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.955 [2024-07-12 07:28:58.830198] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.955 
[2024-07-12 07:28:58.830207] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:18:25.213 [2024-07-12 07:28:58.890452] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:25.472 07:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:18:25.472 00:18:25.472 real 0m13.825s 00:18:25.472 user 0m24.608s 00:18:25.472 ************************************ 00:18:25.472 END TEST raid_superblock_test 00:18:25.472 ************************************ 00:18:25.472 sys 0m2.415s 00:18:25.472 07:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:25.472 07:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.472 07:28:59 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:18:25.472 07:28:59 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:18:25.472 07:28:59 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:25.472 07:28:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.472 ************************************ 00:18:25.472 START TEST raid_read_error_test 00:18:25.472 ************************************ 00:18:25.472 07:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 3 read 00:18:25.472 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:18:25.472 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:18:25.472 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.hwLPEkQnuv 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=137967 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 137967 /var/tmp/spdk-raid.sock 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 137967 ']' 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:25.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:25.731 07:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.731 [2024-07-12 07:28:59.441607] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:18:25.731 [2024-07-12 07:28:59.442685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137967 ] 00:18:25.731 [2024-07-12 07:28:59.593162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.990 [2024-07-12 07:28:59.685625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.990 [2024-07-12 07:28:59.768267] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.557 07:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:26.557 07:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:18:26.557 07:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:26.557 07:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:26.815 BaseBdev1_malloc 00:18:26.815 07:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:27.074 true 00:18:27.074 07:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:27.332 [2024-07-12 07:29:01.008802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:27.332 [2024-07-12 07:29:01.009189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.332 [2024-07-12 07:29:01.009297] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:18:27.332 [2024-07-12 07:29:01.009538] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.332 [2024-07-12 07:29:01.012694] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.332 [2024-07-12 07:29:01.012884] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:27.332 BaseBdev1 00:18:27.332 07:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:27.332 07:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:27.591 BaseBdev2_malloc 00:18:27.591 07:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:27.591 true 00:18:27.591 07:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:27.850 [2024-07-12 07:29:01.621272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:27.850 [2024-07-12 07:29:01.621673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.850 [2024-07-12 07:29:01.621762] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:18:27.850 [2024-07-12 07:29:01.621918] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.850 [2024-07-12 07:29:01.624833] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.850 [2024-07-12 07:29:01.625002] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:27.850 BaseBdev2 00:18:27.850 07:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:27.850 07:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:28.108 BaseBdev3_malloc 00:18:28.108 07:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:28.368 true 00:18:28.368 07:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:28.627 [2024-07-12 07:29:02.301489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:28.627 [2024-07-12 07:29:02.301859] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.627 [2024-07-12 07:29:02.301948] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:18:28.627 [2024-07-12 07:29:02.302100] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.627 [2024-07-12 07:29:02.304972] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.627 [2024-07-12 07:29:02.305169] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:28.627 BaseBdev3 00:18:28.627 07:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:18:28.627 [2024-07-12 07:29:02.501629] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.627 [2024-07-12 07:29:02.504484] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:28.627 [2024-07-12 07:29:02.504728] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:28.627 [2024-07-12 07:29:02.505017] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:18:28.627 [2024-07-12 07:29:02.505064] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:28.627 [2024-07-12 07:29:02.505329] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:18:28.627 [2024-07-12 07:29:02.505874] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:18:28.627 [2024-07-12 07:29:02.505988] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:18:28.627 [2024-07-12 07:29:02.506316] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.886 07:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:28.886 07:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:28.886 07:29:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:28.886 07:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:28.887 07:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:28.887 07:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:28.887 07:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:28.887 07:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:28.887 07:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:28.887 07:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:28.887 07:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.887 07:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.887 07:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:28.887 "name": "raid_bdev1", 00:18:28.887 "uuid": "4f658e35-4095-4873-9a21-0e80877bd64c", 00:18:28.887 "strip_size_kb": 64, 00:18:28.887 "state": "online", 00:18:28.887 "raid_level": "raid0", 00:18:28.887 "superblock": true, 00:18:28.887 "num_base_bdevs": 3, 00:18:28.887 "num_base_bdevs_discovered": 3, 00:18:28.887 "num_base_bdevs_operational": 3, 00:18:28.887 "base_bdevs_list": [ 00:18:28.887 { 00:18:28.887 "name": "BaseBdev1", 00:18:28.887 "uuid": "42cc7016-e75a-5c0d-b3e2-4699ab7e5990", 00:18:28.887 "is_configured": true, 00:18:28.887 "data_offset": 2048, 00:18:28.887 "data_size": 63488 00:18:28.887 }, 00:18:28.887 { 00:18:28.887 "name": "BaseBdev2", 00:18:28.887 "uuid": "0f9a9c1b-09c5-57d3-818b-d2be05a9d556", 00:18:28.887 "is_configured": true, 00:18:28.887 "data_offset": 2048, 00:18:28.887 "data_size": 63488 00:18:28.887 }, 00:18:28.887 { 00:18:28.887 "name": "BaseBdev3", 00:18:28.887 "uuid": "794d476c-19ad-55ec-b964-52ba961b567b", 00:18:28.887 "is_configured": true, 00:18:28.887 "data_offset": 2048, 00:18:28.887 "data_size": 63488 00:18:28.887 } 00:18:28.887 ] 00:18:28.887 }' 00:18:28.887 07:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:28.887 07:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.454 07:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:29.454 07:29:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:29.712 [2024-07-12 07:29:03.394985] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:18:30.656 07:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:30.931 07:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:30.931 07:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:30.931 07:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:18:30.931 07:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 3 00:18:30.931 07:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:30.931 07:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:30.931 07:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:30.931 07:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:30.931 07:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:30.931 07:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:30.931 07:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:30.931 07:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:30.931 07:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:30.931 07:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.931 07:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.931 07:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:30.931 "name": "raid_bdev1", 00:18:30.931 "uuid": "4f658e35-4095-4873-9a21-0e80877bd64c", 00:18:30.931 "strip_size_kb": 64, 00:18:30.931 "state": "online", 00:18:30.931 "raid_level": "raid0", 00:18:30.931 "superblock": true, 00:18:30.931 "num_base_bdevs": 3, 00:18:30.931 "num_base_bdevs_discovered": 3, 00:18:30.931 "num_base_bdevs_operational": 3, 00:18:30.931 "base_bdevs_list": [ 00:18:30.931 { 00:18:30.931 "name": "BaseBdev1", 00:18:30.931 "uuid": "42cc7016-e75a-5c0d-b3e2-4699ab7e5990", 00:18:30.931 "is_configured": true, 00:18:30.931 "data_offset": 2048, 00:18:30.931 "data_size": 63488 00:18:30.931 }, 00:18:30.931 { 00:18:30.931 "name": "BaseBdev2", 00:18:30.931 "uuid": "0f9a9c1b-09c5-57d3-818b-d2be05a9d556", 00:18:30.931 "is_configured": true, 00:18:30.931 "data_offset": 2048, 00:18:30.931 "data_size": 63488 00:18:30.931 }, 00:18:30.931 { 00:18:30.931 "name": "BaseBdev3", 00:18:30.931 "uuid": "794d476c-19ad-55ec-b964-52ba961b567b", 00:18:30.931 "is_configured": true, 00:18:30.931 "data_offset": 2048, 00:18:30.931 "data_size": 63488 00:18:30.931 } 00:18:30.931 ] 00:18:30.931 }' 00:18:30.931 07:29:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:30.931 07:29:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.865 07:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:31.865 [2024-07-12 07:29:05.665207] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:31.865 [2024-07-12 07:29:05.665548] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:31.865 [2024-07-12 07:29:05.668200] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.865 [2024-07-12 07:29:05.668377] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.865 [2024-07-12 07:29:05.668452] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.865 [2024-07-12 
07:29:05.668535] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:18:31.865 0 00:18:31.865 07:29:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 137967 00:18:31.865 07:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 137967 ']' 00:18:31.865 07:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 137967 00:18:31.865 07:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:18:31.865 07:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:31.865 07:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 137967 00:18:31.865 07:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:31.865 07:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:31.865 07:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 137967' 00:18:31.865 killing process with pid 137967 00:18:31.865 07:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 137967 00:18:31.865 07:29:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 137967 00:18:31.865 [2024-07-12 07:29:05.720699] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:32.122 [2024-07-12 07:29:05.767826] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:32.380 07:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.hwLPEkQnuv 00:18:32.380 07:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:32.380 07:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:32.380 07:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:18:32.380 07:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:18:32.380 07:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:32.380 07:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:32.380 07:29:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:18:32.380 00:18:32.380 real 0m6.840s 00:18:32.380 user 0m10.553s 00:18:32.380 sys 0m1.154s 00:18:32.380 07:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:32.380 07:29:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.380 ************************************ 00:18:32.380 END TEST raid_read_error_test 00:18:32.380 ************************************ 00:18:32.380 07:29:06 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:18:32.380 07:29:06 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:18:32.380 07:29:06 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:32.380 07:29:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:32.637 ************************************ 00:18:32.637 START TEST raid_write_error_test 00:18:32.637 ************************************ 00:18:32.637 07:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 3 write 00:18:32.637 07:29:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:18:32.637 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.feP4mZmi71 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=138160 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 138160 /var/tmp/spdk-raid.sock 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 138160 ']' 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:32.638 07:29:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:32.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:32.638 07:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.638 [2024-07-12 07:29:06.352166] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:32.638 [2024-07-12 07:29:06.352716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138160 ] 00:18:32.638 [2024-07-12 07:29:06.509502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.895 [2024-07-12 07:29:06.592447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.895 [2024-07-12 07:29:06.671483] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:32.895 07:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:32.895 07:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:18:32.895 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:32.895 07:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:33.152 BaseBdev1_malloc 00:18:33.152 07:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:33.411 true 00:18:33.411 07:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:33.669 [2024-07-12 07:29:07.379555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:33.669 [2024-07-12 07:29:07.379937] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.669 [2024-07-12 07:29:07.380026] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:18:33.669 [2024-07-12 07:29:07.380183] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.669 [2024-07-12 07:29:07.383321] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.669 [2024-07-12 07:29:07.383498] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:33.669 BaseBdev1 00:18:33.669 07:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:33.669 07:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:33.928 BaseBdev2_malloc 00:18:33.928 07:29:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:34.186 true 00:18:34.186 07:29:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:34.444 [2024-07-12 07:29:08.159675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:34.444 [2024-07-12 07:29:08.160041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.444 [2024-07-12 07:29:08.160124] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:18:34.444 [2024-07-12 07:29:08.160258] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.444 [2024-07-12 07:29:08.163147] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.444 [2024-07-12 07:29:08.163308] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:34.444 BaseBdev2 00:18:34.444 07:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:34.444 07:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:34.703 BaseBdev3_malloc 00:18:34.703 07:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:34.961 true 00:18:34.961 07:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:34.961 [2024-07-12 07:29:08.793558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:34.961 [2024-07-12 07:29:08.793883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.961 [2024-07-12 07:29:08.793965] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:18:34.961 [2024-07-12 07:29:08.794142] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.961 [2024-07-12 07:29:08.797000] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.961 [2024-07-12 07:29:08.797180] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:34.961 BaseBdev3 00:18:34.961 07:29:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:18:35.219 [2024-07-12 07:29:09.065725] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.219 [2024-07-12 07:29:09.068384] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:35.219 [2024-07-12 07:29:09.068589] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:35.219 [2024-07-12 07:29:09.068928] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:18:35.219 [2024-07-12 07:29:09.069036] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:35.219 [2024-07-12 07:29:09.069236] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:18:35.219 [2024-07-12 07:29:09.069803] 
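Each base bdev above is a three-layer stack: a malloc backing device, an error-injection wrapper (the EE_* name), and a passthru exposing the BaseBdevN name, assembled into raid0 with one final RPC. Condensed into a loop (a sketch; every command appears verbatim in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 1 2 3; do
        # malloc backing -> error wrapper (EE_*) -> passthru exposing BaseBdevN
        $rpc -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
        $rpc -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev${i}_malloc
        $rpc -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done
    # -z 64: 64 KiB strip size; -s: write a superblock (matches "superblock": true below)
    $rpc -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s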
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:18:35.220 [2024-07-12 07:29:09.069920] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:18:35.220 [2024-07-12 07:29:09.070217] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.220 07:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:35.220 07:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:35.220 07:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:35.220 07:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:35.220 07:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:35.220 07:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:35.220 07:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:35.220 07:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:35.220 07:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:35.220 07:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:35.220 07:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.220 07:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.479 07:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:35.479 "name": "raid_bdev1", 00:18:35.479 "uuid": "12c4ab8b-aaf6-451a-820a-c9a91e49b01b", 00:18:35.479 "strip_size_kb": 64, 00:18:35.479 "state": "online", 00:18:35.479 "raid_level": "raid0", 00:18:35.479 "superblock": true, 00:18:35.479 "num_base_bdevs": 3, 00:18:35.479 "num_base_bdevs_discovered": 3, 00:18:35.479 "num_base_bdevs_operational": 3, 00:18:35.479 "base_bdevs_list": [ 00:18:35.479 { 00:18:35.479 "name": "BaseBdev1", 00:18:35.479 "uuid": "334d9e96-6245-5bbb-9cae-6ef32b0fcf6c", 00:18:35.479 "is_configured": true, 00:18:35.479 "data_offset": 2048, 00:18:35.479 "data_size": 63488 00:18:35.479 }, 00:18:35.479 { 00:18:35.479 "name": "BaseBdev2", 00:18:35.479 "uuid": "31fc0212-31cf-513f-88b3-2035a3f24910", 00:18:35.479 "is_configured": true, 00:18:35.479 "data_offset": 2048, 00:18:35.479 "data_size": 63488 00:18:35.479 }, 00:18:35.479 { 00:18:35.479 "name": "BaseBdev3", 00:18:35.479 "uuid": "3d45325d-2792-57ba-9133-7580be32668c", 00:18:35.479 "is_configured": true, 00:18:35.479 "data_offset": 2048, 00:18:35.479 "data_size": 63488 00:18:35.479 } 00:18:35.479 ] 00:18:35.479 }' 00:18:35.479 07:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:35.479 07:29:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.046 07:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:36.046 07:29:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:36.303 [2024-07-12 07:29:09.954904] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002460 00:18:37.237 07:29:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:37.495 07:29:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:37.495 07:29:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:37.495 07:29:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:18:37.495 07:29:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:37.495 07:29:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:37.495 07:29:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:37.495 07:29:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:37.495 07:29:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:37.495 07:29:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:37.495 07:29:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:37.495 07:29:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:37.495 07:29:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:37.495 07:29:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:37.495 07:29:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.495 07:29:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.753 07:29:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:37.753 "name": "raid_bdev1", 00:18:37.753 "uuid": "12c4ab8b-aaf6-451a-820a-c9a91e49b01b", 00:18:37.753 "strip_size_kb": 64, 00:18:37.753 "state": "online", 00:18:37.753 "raid_level": "raid0", 00:18:37.753 "superblock": true, 00:18:37.753 "num_base_bdevs": 3, 00:18:37.753 "num_base_bdevs_discovered": 3, 00:18:37.753 "num_base_bdevs_operational": 3, 00:18:37.753 "base_bdevs_list": [ 00:18:37.753 { 00:18:37.753 "name": "BaseBdev1", 00:18:37.753 "uuid": "334d9e96-6245-5bbb-9cae-6ef32b0fcf6c", 00:18:37.753 "is_configured": true, 00:18:37.753 "data_offset": 2048, 00:18:37.753 "data_size": 63488 00:18:37.753 }, 00:18:37.753 { 00:18:37.753 "name": "BaseBdev2", 00:18:37.753 "uuid": "31fc0212-31cf-513f-88b3-2035a3f24910", 00:18:37.753 "is_configured": true, 00:18:37.753 "data_offset": 2048, 00:18:37.753 "data_size": 63488 00:18:37.753 }, 00:18:37.753 { 00:18:37.753 "name": "BaseBdev3", 00:18:37.753 "uuid": "3d45325d-2792-57ba-9133-7580be32668c", 00:18:37.753 "is_configured": true, 00:18:37.753 "data_offset": 2048, 00:18:37.753 "data_size": 63488 00:18:37.753 } 00:18:37.753 ] 00:18:37.753 }' 00:18:37.753 07:29:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:37.753 07:29:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.319 07:29:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:38.577 [2024-07-12 07:29:12.264986] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.577 [2024-07-12 07:29:12.265347] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.578 [2024-07-12 07:29:12.268001] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.578 [2024-07-12 07:29:12.268188] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.578 [2024-07-12 07:29:12.268260] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.578 [2024-07-12 07:29:12.268334] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:18:38.578 0 00:18:38.578 07:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 138160 00:18:38.578 07:29:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 138160 ']' 00:18:38.578 07:29:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 138160 00:18:38.578 07:29:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:18:38.578 07:29:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:38.578 07:29:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 138160 00:18:38.578 killing process with pid 138160 00:18:38.578 07:29:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:38.578 07:29:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:38.578 07:29:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 138160' 00:18:38.578 07:29:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 138160 00:18:38.578 07:29:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 138160 00:18:38.578 [2024-07-12 07:29:12.316608] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:38.578 [2024-07-12 07:29:12.363216] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:39.143 07:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.feP4mZmi71 00:18:39.143 07:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:39.143 07:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:39.143 07:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:18:39.143 07:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:18:39.143 07:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:39.143 07:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:39.143 07:29:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:18:39.143 00:18:39.143 real 0m6.526s 00:18:39.143 user 0m10.280s 00:18:39.143 sys 0m1.174s 00:18:39.143 07:29:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:39.143 07:29:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.143 ************************************ 00:18:39.143 END TEST raid_write_error_test 
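In summary of the failure pass above: I/O is started via perform_tests, write errors are injected into the first base bdev mid-run, and the test asserts a nonzero failure rate scraped from the bdevperf log. A condensed sketch (backgrounding of perform_tests is inferred from the trace ordering; the log path is the mktemp name from this run, and the awk column comes from the grep pipeline above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
    sleep 1
    # make every write to the first base bdev fail while I/O is in flight
    $rpc -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
    wait    # let the timed run finish
    fail_per_s=$(grep -v Job /raidtest/tmp.feP4mZmi71 | grep raid_bdev1 | awk '{print $6}')
    [[ $fail_per_s != "0.00" ]]    # raid0 has no redundancy, so failures must surface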
00:18:39.143 ************************************ 00:18:39.143 07:29:12 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:18:39.143 07:29:12 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:18:39.143 07:29:12 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:18:39.143 07:29:12 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:39.143 07:29:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:39.143 ************************************ 00:18:39.143 START TEST raid_state_function_test 00:18:39.143 ************************************ 00:18:39.143 07:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 3 false 00:18:39.143 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:18:39.143 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:39.143 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:18:39.143 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:39.143 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:39.143 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:39.143 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:18:39.143 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:39.143 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:39.143 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:18:39.143 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:39.143 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:39.143 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:18:39.143 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:39.143 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:39.143 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:39.144 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:39.144 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:39.144 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:39.144 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:39.144 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:39.144 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:18:39.144 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:39.144 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:39.144 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:18:39.144 07:29:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:18:39.144 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=138346 00:18:39.144 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 138346' 00:18:39.144 Process raid pid: 138346 00:18:39.144 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:39.144 07:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 138346 /var/tmp/spdk-raid.sock 00:18:39.144 07:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 138346 ']' 00:18:39.144 07:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:39.144 07:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:39.144 07:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:39.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:39.144 07:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:39.144 07:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.144 [2024-07-12 07:29:12.940736] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:39.144 [2024-07-12 07:29:12.941129] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.402 [2024-07-12 07:29:13.085962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.402 [2024-07-12 07:29:13.164941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.402 [2024-07-12 07:29:13.244019] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:40.338 07:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:40.338 07:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:18:40.338 07:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:40.338 [2024-07-12 07:29:14.123567] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:40.338 [2024-07-12 07:29:14.123811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:40.338 [2024-07-12 07:29:14.123949] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:40.338 [2024-07-12 07:29:14.124006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:40.338 [2024-07-12 07:29:14.124085] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:40.338 [2024-07-12 07:29:14.124156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:40.338 07:29:14 
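Note the three "doesn't exist now" records above: bdev_raid_create accepts base bdevs that are not present yet, and the array simply parks in the configuring state until they appear. The check the test performs next boils down to (a sketch using the RPCs from this trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # with zero base bdevs discovered, the array must report "configuring"
    $rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid").state'    # expect: configuring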
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:40.338 07:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:40.338 07:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:40.338 07:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:40.338 07:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:40.338 07:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:40.338 07:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:40.338 07:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:40.338 07:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:40.338 07:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:40.338 07:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.338 07:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.597 07:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:40.597 "name": "Existed_Raid", 00:18:40.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.597 "strip_size_kb": 64, 00:18:40.597 "state": "configuring", 00:18:40.597 "raid_level": "concat", 00:18:40.597 "superblock": false, 00:18:40.597 "num_base_bdevs": 3, 00:18:40.597 "num_base_bdevs_discovered": 0, 00:18:40.597 "num_base_bdevs_operational": 3, 00:18:40.597 "base_bdevs_list": [ 00:18:40.597 { 00:18:40.597 "name": "BaseBdev1", 00:18:40.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.597 "is_configured": false, 00:18:40.597 "data_offset": 0, 00:18:40.597 "data_size": 0 00:18:40.597 }, 00:18:40.597 { 00:18:40.597 "name": "BaseBdev2", 00:18:40.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.597 "is_configured": false, 00:18:40.597 "data_offset": 0, 00:18:40.597 "data_size": 0 00:18:40.597 }, 00:18:40.597 { 00:18:40.597 "name": "BaseBdev3", 00:18:40.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.597 "is_configured": false, 00:18:40.597 "data_offset": 0, 00:18:40.597 "data_size": 0 00:18:40.597 } 00:18:40.597 ] 00:18:40.597 }' 00:18:40.597 07:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:40.597 07:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.163 07:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:41.421 [2024-07-12 07:29:15.055608] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:41.421 [2024-07-12 07:29:15.055816] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:18:41.421 07:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n 
Existed_Raid 00:18:41.421 [2024-07-12 07:29:15.239683] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:41.421 [2024-07-12 07:29:15.239969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:41.421 [2024-07-12 07:29:15.240058] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:41.421 [2024-07-12 07:29:15.240110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:41.421 [2024-07-12 07:29:15.240137] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:41.422 [2024-07-12 07:29:15.240181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:41.422 07:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:41.682 [2024-07-12 07:29:15.507803] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:41.682 BaseBdev1 00:18:41.682 07:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:41.682 07:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:18:41.682 07:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:41.682 07:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:41.682 07:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:41.682 07:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:41.682 07:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:41.941 07:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:42.200 [ 00:18:42.200 { 00:18:42.200 "name": "BaseBdev1", 00:18:42.200 "aliases": [ 00:18:42.200 "b2949501-1b4e-4ea0-b647-578a7fa666ef" 00:18:42.200 ], 00:18:42.200 "product_name": "Malloc disk", 00:18:42.200 "block_size": 512, 00:18:42.200 "num_blocks": 65536, 00:18:42.200 "uuid": "b2949501-1b4e-4ea0-b647-578a7fa666ef", 00:18:42.200 "assigned_rate_limits": { 00:18:42.200 "rw_ios_per_sec": 0, 00:18:42.200 "rw_mbytes_per_sec": 0, 00:18:42.200 "r_mbytes_per_sec": 0, 00:18:42.200 "w_mbytes_per_sec": 0 00:18:42.200 }, 00:18:42.200 "claimed": true, 00:18:42.200 "claim_type": "exclusive_write", 00:18:42.200 "zoned": false, 00:18:42.200 "supported_io_types": { 00:18:42.200 "read": true, 00:18:42.200 "write": true, 00:18:42.200 "unmap": true, 00:18:42.200 "write_zeroes": true, 00:18:42.200 "flush": true, 00:18:42.200 "reset": true, 00:18:42.200 "compare": false, 00:18:42.200 "compare_and_write": false, 00:18:42.200 "abort": true, 00:18:42.200 "nvme_admin": false, 00:18:42.200 "nvme_io": false 00:18:42.200 }, 00:18:42.200 "memory_domains": [ 00:18:42.200 { 00:18:42.200 "dma_device_id": "system", 00:18:42.200 "dma_device_type": 1 00:18:42.200 }, 00:18:42.200 { 00:18:42.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.200 "dma_device_type": 2 00:18:42.200 } 00:18:42.200 ], 00:18:42.200 "driver_specific": {} 00:18:42.200 } 
00:18:42.200 ] 00:18:42.200 07:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:42.200 07:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:42.200 07:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:42.200 07:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:42.200 07:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:42.200 07:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:42.200 07:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:42.200 07:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:42.200 07:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:42.200 07:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:42.200 07:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:42.200 07:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.200 07:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.459 07:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:42.459 "name": "Existed_Raid", 00:18:42.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.459 "strip_size_kb": 64, 00:18:42.459 "state": "configuring", 00:18:42.459 "raid_level": "concat", 00:18:42.459 "superblock": false, 00:18:42.459 "num_base_bdevs": 3, 00:18:42.459 "num_base_bdevs_discovered": 1, 00:18:42.459 "num_base_bdevs_operational": 3, 00:18:42.459 "base_bdevs_list": [ 00:18:42.459 { 00:18:42.459 "name": "BaseBdev1", 00:18:42.459 "uuid": "b2949501-1b4e-4ea0-b647-578a7fa666ef", 00:18:42.459 "is_configured": true, 00:18:42.459 "data_offset": 0, 00:18:42.459 "data_size": 65536 00:18:42.459 }, 00:18:42.459 { 00:18:42.459 "name": "BaseBdev2", 00:18:42.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.459 "is_configured": false, 00:18:42.459 "data_offset": 0, 00:18:42.459 "data_size": 0 00:18:42.459 }, 00:18:42.459 { 00:18:42.459 "name": "BaseBdev3", 00:18:42.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.459 "is_configured": false, 00:18:42.459 "data_offset": 0, 00:18:42.459 "data_size": 0 00:18:42.459 } 00:18:42.459 ] 00:18:42.459 }' 00:18:42.459 07:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:42.459 07:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.028 07:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:43.287 [2024-07-12 07:29:16.996147] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:43.287 [2024-07-12 07:29:16.996460] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:18:43.287 07:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:43.545 [2024-07-12 07:29:17.188265] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:43.545 [2024-07-12 07:29:17.190919] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:43.545 [2024-07-12 07:29:17.191114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:43.545 [2024-07-12 07:29:17.191252] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:43.545 [2024-07-12 07:29:17.191321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:43.545 07:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:43.545 07:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:43.545 07:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:43.545 07:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:43.545 07:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:43.545 07:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:43.545 07:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:43.545 07:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:43.545 07:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:43.545 07:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:43.545 07:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:43.545 07:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:43.545 07:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.545 07:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.545 07:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:43.545 "name": "Existed_Raid", 00:18:43.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.545 "strip_size_kb": 64, 00:18:43.545 "state": "configuring", 00:18:43.545 "raid_level": "concat", 00:18:43.545 "superblock": false, 00:18:43.545 "num_base_bdevs": 3, 00:18:43.545 "num_base_bdevs_discovered": 1, 00:18:43.545 "num_base_bdevs_operational": 3, 00:18:43.545 "base_bdevs_list": [ 00:18:43.545 { 00:18:43.545 "name": "BaseBdev1", 00:18:43.545 "uuid": "b2949501-1b4e-4ea0-b647-578a7fa666ef", 00:18:43.545 "is_configured": true, 00:18:43.545 "data_offset": 0, 00:18:43.545 "data_size": 65536 00:18:43.545 }, 00:18:43.545 { 00:18:43.545 "name": "BaseBdev2", 00:18:43.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.545 "is_configured": false, 00:18:43.545 "data_offset": 0, 00:18:43.545 "data_size": 0 00:18:43.545 }, 00:18:43.545 { 00:18:43.545 "name": "BaseBdev3", 00:18:43.545 "uuid": "00000000-0000-0000-0000-000000000000", 
00:18:43.545 "is_configured": false, 00:18:43.545 "data_offset": 0, 00:18:43.545 "data_size": 0 00:18:43.545 } 00:18:43.545 ] 00:18:43.545 }' 00:18:43.545 07:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:43.545 07:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.480 07:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:44.480 [2024-07-12 07:29:18.202846] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:44.480 BaseBdev2 00:18:44.480 07:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:44.480 07:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:18:44.480 07:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:44.480 07:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:44.480 07:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:44.480 07:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:44.480 07:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:44.739 07:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:44.998 [ 00:18:44.998 { 00:18:44.998 "name": "BaseBdev2", 00:18:44.998 "aliases": [ 00:18:44.998 "96060eda-6a74-4445-a3ee-13c74c527f66" 00:18:44.998 ], 00:18:44.998 "product_name": "Malloc disk", 00:18:44.998 "block_size": 512, 00:18:44.998 "num_blocks": 65536, 00:18:44.998 "uuid": "96060eda-6a74-4445-a3ee-13c74c527f66", 00:18:44.998 "assigned_rate_limits": { 00:18:44.998 "rw_ios_per_sec": 0, 00:18:44.998 "rw_mbytes_per_sec": 0, 00:18:44.998 "r_mbytes_per_sec": 0, 00:18:44.998 "w_mbytes_per_sec": 0 00:18:44.998 }, 00:18:44.998 "claimed": true, 00:18:44.998 "claim_type": "exclusive_write", 00:18:44.998 "zoned": false, 00:18:44.998 "supported_io_types": { 00:18:44.999 "read": true, 00:18:44.999 "write": true, 00:18:44.999 "unmap": true, 00:18:44.999 "write_zeroes": true, 00:18:44.999 "flush": true, 00:18:44.999 "reset": true, 00:18:44.999 "compare": false, 00:18:44.999 "compare_and_write": false, 00:18:44.999 "abort": true, 00:18:44.999 "nvme_admin": false, 00:18:44.999 "nvme_io": false 00:18:44.999 }, 00:18:44.999 "memory_domains": [ 00:18:44.999 { 00:18:44.999 "dma_device_id": "system", 00:18:44.999 "dma_device_type": 1 00:18:44.999 }, 00:18:44.999 { 00:18:44.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.999 "dma_device_type": 2 00:18:44.999 } 00:18:44.999 ], 00:18:44.999 "driver_specific": {} 00:18:44.999 } 00:18:44.999 ] 00:18:44.999 07:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:44.999 07:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:44.999 07:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:44.999 07:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:18:44.999 07:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:44.999 07:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:44.999 07:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:44.999 07:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:44.999 07:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:44.999 07:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:44.999 07:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:44.999 07:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:44.999 07:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:44.999 07:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.999 07:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.258 07:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:45.258 "name": "Existed_Raid", 00:18:45.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.258 "strip_size_kb": 64, 00:18:45.258 "state": "configuring", 00:18:45.258 "raid_level": "concat", 00:18:45.258 "superblock": false, 00:18:45.258 "num_base_bdevs": 3, 00:18:45.258 "num_base_bdevs_discovered": 2, 00:18:45.258 "num_base_bdevs_operational": 3, 00:18:45.258 "base_bdevs_list": [ 00:18:45.258 { 00:18:45.258 "name": "BaseBdev1", 00:18:45.258 "uuid": "b2949501-1b4e-4ea0-b647-578a7fa666ef", 00:18:45.258 "is_configured": true, 00:18:45.258 "data_offset": 0, 00:18:45.258 "data_size": 65536 00:18:45.258 }, 00:18:45.258 { 00:18:45.258 "name": "BaseBdev2", 00:18:45.258 "uuid": "96060eda-6a74-4445-a3ee-13c74c527f66", 00:18:45.258 "is_configured": true, 00:18:45.258 "data_offset": 0, 00:18:45.258 "data_size": 65536 00:18:45.258 }, 00:18:45.258 { 00:18:45.258 "name": "BaseBdev3", 00:18:45.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.258 "is_configured": false, 00:18:45.258 "data_offset": 0, 00:18:45.258 "data_size": 0 00:18:45.258 } 00:18:45.258 ] 00:18:45.258 }' 00:18:45.258 07:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:45.258 07:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.824 07:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:45.824 [2024-07-12 07:29:19.696571] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:45.824 [2024-07-12 07:29:19.696809] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:18:45.824 [2024-07-12 07:29:19.696851] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:45.824 [2024-07-12 07:29:19.697098] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:18:45.824 [2024-07-12 07:29:19.697636] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: 
raid bdev generic 0x616000006080 00:18:45.824 [2024-07-12 07:29:19.697747] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:18:45.824 [2024-07-12 07:29:19.698105] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.824 BaseBdev3 00:18:46.083 07:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:46.083 07:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:18:46.083 07:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:46.083 07:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:46.083 07:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:46.083 07:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:46.083 07:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:46.341 07:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:46.341 [ 00:18:46.341 { 00:18:46.341 "name": "BaseBdev3", 00:18:46.341 "aliases": [ 00:18:46.341 "9f6b5c16-d50a-4dde-83c9-08c025bc0b8a" 00:18:46.341 ], 00:18:46.341 "product_name": "Malloc disk", 00:18:46.341 "block_size": 512, 00:18:46.341 "num_blocks": 65536, 00:18:46.341 "uuid": "9f6b5c16-d50a-4dde-83c9-08c025bc0b8a", 00:18:46.341 "assigned_rate_limits": { 00:18:46.341 "rw_ios_per_sec": 0, 00:18:46.341 "rw_mbytes_per_sec": 0, 00:18:46.341 "r_mbytes_per_sec": 0, 00:18:46.341 "w_mbytes_per_sec": 0 00:18:46.341 }, 00:18:46.341 "claimed": true, 00:18:46.341 "claim_type": "exclusive_write", 00:18:46.341 "zoned": false, 00:18:46.341 "supported_io_types": { 00:18:46.341 "read": true, 00:18:46.341 "write": true, 00:18:46.341 "unmap": true, 00:18:46.341 "write_zeroes": true, 00:18:46.341 "flush": true, 00:18:46.341 "reset": true, 00:18:46.341 "compare": false, 00:18:46.341 "compare_and_write": false, 00:18:46.341 "abort": true, 00:18:46.341 "nvme_admin": false, 00:18:46.341 "nvme_io": false 00:18:46.341 }, 00:18:46.341 "memory_domains": [ 00:18:46.341 { 00:18:46.341 "dma_device_id": "system", 00:18:46.341 "dma_device_type": 1 00:18:46.341 }, 00:18:46.341 { 00:18:46.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.341 "dma_device_type": 2 00:18:46.341 } 00:18:46.341 ], 00:18:46.341 "driver_specific": {} 00:18:46.341 } 00:18:46.341 ] 00:18:46.341 07:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:46.341 07:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:46.341 07:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:46.342 07:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:46.342 07:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:46.342 07:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:46.342 07:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 
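With the third member claimed, the array flips from configuring to online; verify_raid_bdev_state asserts this by filtering bdev_raid_get_bdevs output through jq, roughly as follows (a condensed sketch of the helper's checks):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tmp=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
          | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r .state <<< "$tmp") == online ]]
    [[ $(jq -r .num_base_bdevs_discovered <<< "$tmp") -eq 3 ]]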
00:18:46.342 07:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:46.342 07:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:46.342 07:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:46.342 07:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:46.342 07:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:46.342 07:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:46.342 07:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.342 07:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.908 07:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:46.908 "name": "Existed_Raid", 00:18:46.908 "uuid": "82bdfd79-de2c-4def-ab9a-eb78566fba90", 00:18:46.908 "strip_size_kb": 64, 00:18:46.908 "state": "online", 00:18:46.908 "raid_level": "concat", 00:18:46.908 "superblock": false, 00:18:46.908 "num_base_bdevs": 3, 00:18:46.908 "num_base_bdevs_discovered": 3, 00:18:46.908 "num_base_bdevs_operational": 3, 00:18:46.908 "base_bdevs_list": [ 00:18:46.908 { 00:18:46.908 "name": "BaseBdev1", 00:18:46.908 "uuid": "b2949501-1b4e-4ea0-b647-578a7fa666ef", 00:18:46.908 "is_configured": true, 00:18:46.908 "data_offset": 0, 00:18:46.908 "data_size": 65536 00:18:46.908 }, 00:18:46.908 { 00:18:46.908 "name": "BaseBdev2", 00:18:46.908 "uuid": "96060eda-6a74-4445-a3ee-13c74c527f66", 00:18:46.908 "is_configured": true, 00:18:46.908 "data_offset": 0, 00:18:46.908 "data_size": 65536 00:18:46.908 }, 00:18:46.908 { 00:18:46.908 "name": "BaseBdev3", 00:18:46.908 "uuid": "9f6b5c16-d50a-4dde-83c9-08c025bc0b8a", 00:18:46.908 "is_configured": true, 00:18:46.908 "data_offset": 0, 00:18:46.908 "data_size": 65536 00:18:46.908 } 00:18:46.908 ] 00:18:46.908 }' 00:18:46.908 07:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:46.908 07:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.475 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:47.475 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:47.476 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:47.476 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:47.476 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:47.476 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:47.476 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:47.476 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:47.476 [2024-07-12 07:29:21.241220] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:47.476 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # 
raid_bdev_info='{ 00:18:47.476 "name": "Existed_Raid", 00:18:47.476 "aliases": [ 00:18:47.476 "82bdfd79-de2c-4def-ab9a-eb78566fba90" 00:18:47.476 ], 00:18:47.476 "product_name": "Raid Volume", 00:18:47.476 "block_size": 512, 00:18:47.476 "num_blocks": 196608, 00:18:47.476 "uuid": "82bdfd79-de2c-4def-ab9a-eb78566fba90", 00:18:47.476 "assigned_rate_limits": { 00:18:47.476 "rw_ios_per_sec": 0, 00:18:47.476 "rw_mbytes_per_sec": 0, 00:18:47.476 "r_mbytes_per_sec": 0, 00:18:47.476 "w_mbytes_per_sec": 0 00:18:47.476 }, 00:18:47.476 "claimed": false, 00:18:47.476 "zoned": false, 00:18:47.476 "supported_io_types": { 00:18:47.476 "read": true, 00:18:47.476 "write": true, 00:18:47.476 "unmap": true, 00:18:47.476 "write_zeroes": true, 00:18:47.476 "flush": true, 00:18:47.476 "reset": true, 00:18:47.476 "compare": false, 00:18:47.476 "compare_and_write": false, 00:18:47.476 "abort": false, 00:18:47.476 "nvme_admin": false, 00:18:47.476 "nvme_io": false 00:18:47.476 }, 00:18:47.476 "memory_domains": [ 00:18:47.476 { 00:18:47.476 "dma_device_id": "system", 00:18:47.476 "dma_device_type": 1 00:18:47.476 }, 00:18:47.476 { 00:18:47.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.476 "dma_device_type": 2 00:18:47.476 }, 00:18:47.476 { 00:18:47.476 "dma_device_id": "system", 00:18:47.476 "dma_device_type": 1 00:18:47.476 }, 00:18:47.476 { 00:18:47.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.476 "dma_device_type": 2 00:18:47.476 }, 00:18:47.476 { 00:18:47.476 "dma_device_id": "system", 00:18:47.476 "dma_device_type": 1 00:18:47.476 }, 00:18:47.476 { 00:18:47.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.476 "dma_device_type": 2 00:18:47.476 } 00:18:47.476 ], 00:18:47.476 "driver_specific": { 00:18:47.476 "raid": { 00:18:47.476 "uuid": "82bdfd79-de2c-4def-ab9a-eb78566fba90", 00:18:47.476 "strip_size_kb": 64, 00:18:47.476 "state": "online", 00:18:47.476 "raid_level": "concat", 00:18:47.476 "superblock": false, 00:18:47.476 "num_base_bdevs": 3, 00:18:47.476 "num_base_bdevs_discovered": 3, 00:18:47.476 "num_base_bdevs_operational": 3, 00:18:47.476 "base_bdevs_list": [ 00:18:47.476 { 00:18:47.476 "name": "BaseBdev1", 00:18:47.476 "uuid": "b2949501-1b4e-4ea0-b647-578a7fa666ef", 00:18:47.476 "is_configured": true, 00:18:47.476 "data_offset": 0, 00:18:47.476 "data_size": 65536 00:18:47.476 }, 00:18:47.476 { 00:18:47.476 "name": "BaseBdev2", 00:18:47.476 "uuid": "96060eda-6a74-4445-a3ee-13c74c527f66", 00:18:47.476 "is_configured": true, 00:18:47.476 "data_offset": 0, 00:18:47.476 "data_size": 65536 00:18:47.476 }, 00:18:47.476 { 00:18:47.476 "name": "BaseBdev3", 00:18:47.476 "uuid": "9f6b5c16-d50a-4dde-83c9-08c025bc0b8a", 00:18:47.476 "is_configured": true, 00:18:47.476 "data_offset": 0, 00:18:47.476 "data_size": 65536 00:18:47.476 } 00:18:47.476 ] 00:18:47.476 } 00:18:47.476 } 00:18:47.476 }' 00:18:47.476 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:47.476 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:47.476 BaseBdev2 00:18:47.476 BaseBdev3' 00:18:47.476 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:47.476 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:47.476 07:29:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:47.735 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:47.735 "name": "BaseBdev1", 00:18:47.735 "aliases": [ 00:18:47.735 "b2949501-1b4e-4ea0-b647-578a7fa666ef" 00:18:47.735 ], 00:18:47.735 "product_name": "Malloc disk", 00:18:47.735 "block_size": 512, 00:18:47.735 "num_blocks": 65536, 00:18:47.735 "uuid": "b2949501-1b4e-4ea0-b647-578a7fa666ef", 00:18:47.735 "assigned_rate_limits": { 00:18:47.735 "rw_ios_per_sec": 0, 00:18:47.735 "rw_mbytes_per_sec": 0, 00:18:47.735 "r_mbytes_per_sec": 0, 00:18:47.735 "w_mbytes_per_sec": 0 00:18:47.735 }, 00:18:47.735 "claimed": true, 00:18:47.735 "claim_type": "exclusive_write", 00:18:47.735 "zoned": false, 00:18:47.735 "supported_io_types": { 00:18:47.735 "read": true, 00:18:47.735 "write": true, 00:18:47.735 "unmap": true, 00:18:47.735 "write_zeroes": true, 00:18:47.735 "flush": true, 00:18:47.735 "reset": true, 00:18:47.735 "compare": false, 00:18:47.735 "compare_and_write": false, 00:18:47.735 "abort": true, 00:18:47.735 "nvme_admin": false, 00:18:47.735 "nvme_io": false 00:18:47.735 }, 00:18:47.735 "memory_domains": [ 00:18:47.735 { 00:18:47.735 "dma_device_id": "system", 00:18:47.735 "dma_device_type": 1 00:18:47.735 }, 00:18:47.735 { 00:18:47.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.735 "dma_device_type": 2 00:18:47.735 } 00:18:47.735 ], 00:18:47.735 "driver_specific": {} 00:18:47.735 }' 00:18:47.735 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:47.735 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:47.735 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:47.735 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:47.994 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:47.994 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:47.994 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:47.994 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:47.994 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:47.994 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:47.994 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:48.253 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:48.253 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:48.253 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:48.253 07:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:48.253 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:48.253 "name": "BaseBdev2", 00:18:48.253 "aliases": [ 00:18:48.253 "96060eda-6a74-4445-a3ee-13c74c527f66" 00:18:48.253 ], 00:18:48.253 "product_name": "Malloc disk", 00:18:48.253 "block_size": 512, 00:18:48.253 "num_blocks": 65536, 00:18:48.253 "uuid": "96060eda-6a74-4445-a3ee-13c74c527f66", 00:18:48.253 "assigned_rate_limits": { 00:18:48.253 
"rw_ios_per_sec": 0, 00:18:48.253 "rw_mbytes_per_sec": 0, 00:18:48.253 "r_mbytes_per_sec": 0, 00:18:48.253 "w_mbytes_per_sec": 0 00:18:48.253 }, 00:18:48.253 "claimed": true, 00:18:48.253 "claim_type": "exclusive_write", 00:18:48.253 "zoned": false, 00:18:48.253 "supported_io_types": { 00:18:48.253 "read": true, 00:18:48.253 "write": true, 00:18:48.253 "unmap": true, 00:18:48.253 "write_zeroes": true, 00:18:48.253 "flush": true, 00:18:48.253 "reset": true, 00:18:48.253 "compare": false, 00:18:48.253 "compare_and_write": false, 00:18:48.253 "abort": true, 00:18:48.253 "nvme_admin": false, 00:18:48.253 "nvme_io": false 00:18:48.253 }, 00:18:48.253 "memory_domains": [ 00:18:48.253 { 00:18:48.253 "dma_device_id": "system", 00:18:48.253 "dma_device_type": 1 00:18:48.253 }, 00:18:48.253 { 00:18:48.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.253 "dma_device_type": 2 00:18:48.253 } 00:18:48.253 ], 00:18:48.253 "driver_specific": {} 00:18:48.253 }' 00:18:48.253 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:48.512 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:48.512 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:48.512 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:48.512 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:48.512 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:48.512 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:48.512 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:48.512 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:48.512 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:48.771 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:48.771 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:48.771 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:48.771 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:48.771 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:49.030 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:49.030 "name": "BaseBdev3", 00:18:49.030 "aliases": [ 00:18:49.030 "9f6b5c16-d50a-4dde-83c9-08c025bc0b8a" 00:18:49.030 ], 00:18:49.030 "product_name": "Malloc disk", 00:18:49.030 "block_size": 512, 00:18:49.030 "num_blocks": 65536, 00:18:49.030 "uuid": "9f6b5c16-d50a-4dde-83c9-08c025bc0b8a", 00:18:49.030 "assigned_rate_limits": { 00:18:49.030 "rw_ios_per_sec": 0, 00:18:49.030 "rw_mbytes_per_sec": 0, 00:18:49.030 "r_mbytes_per_sec": 0, 00:18:49.030 "w_mbytes_per_sec": 0 00:18:49.030 }, 00:18:49.030 "claimed": true, 00:18:49.030 "claim_type": "exclusive_write", 00:18:49.030 "zoned": false, 00:18:49.030 "supported_io_types": { 00:18:49.030 "read": true, 00:18:49.030 "write": true, 00:18:49.030 "unmap": true, 00:18:49.030 "write_zeroes": true, 00:18:49.030 "flush": true, 00:18:49.030 "reset": true, 00:18:49.030 "compare": false, 
00:18:49.030 "compare_and_write": false, 00:18:49.030 "abort": true, 00:18:49.030 "nvme_admin": false, 00:18:49.030 "nvme_io": false 00:18:49.030 }, 00:18:49.030 "memory_domains": [ 00:18:49.030 { 00:18:49.030 "dma_device_id": "system", 00:18:49.030 "dma_device_type": 1 00:18:49.030 }, 00:18:49.030 { 00:18:49.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.030 "dma_device_type": 2 00:18:49.030 } 00:18:49.030 ], 00:18:49.030 "driver_specific": {} 00:18:49.030 }' 00:18:49.030 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:49.030 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:49.030 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:49.030 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:49.030 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:49.030 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:49.030 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:49.300 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:49.300 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:49.300 07:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:49.300 07:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:49.300 07:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:49.300 07:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:49.565 [2024-07-12 07:29:23.313472] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:49.565 [2024-07-12 07:29:23.313659] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:49.565 [2024-07-12 07:29:23.313902] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:49.565 07:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:49.565 07:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:18:49.565 07:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:49.565 07:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:49.565 07:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:49.565 07:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:18:49.565 07:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:49.565 07:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:49.565 07:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:49.565 07:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:49.565 07:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:49.565 07:29:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:49.565 07:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:49.565 07:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:49.565 07:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:49.565 07:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.565 07:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.824 07:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:49.824 "name": "Existed_Raid", 00:18:49.824 "uuid": "82bdfd79-de2c-4def-ab9a-eb78566fba90", 00:18:49.824 "strip_size_kb": 64, 00:18:49.824 "state": "offline", 00:18:49.824 "raid_level": "concat", 00:18:49.824 "superblock": false, 00:18:49.824 "num_base_bdevs": 3, 00:18:49.824 "num_base_bdevs_discovered": 2, 00:18:49.824 "num_base_bdevs_operational": 2, 00:18:49.824 "base_bdevs_list": [ 00:18:49.824 { 00:18:49.824 "name": null, 00:18:49.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.824 "is_configured": false, 00:18:49.824 "data_offset": 0, 00:18:49.824 "data_size": 65536 00:18:49.824 }, 00:18:49.824 { 00:18:49.824 "name": "BaseBdev2", 00:18:49.824 "uuid": "96060eda-6a74-4445-a3ee-13c74c527f66", 00:18:49.824 "is_configured": true, 00:18:49.824 "data_offset": 0, 00:18:49.824 "data_size": 65536 00:18:49.824 }, 00:18:49.824 { 00:18:49.824 "name": "BaseBdev3", 00:18:49.824 "uuid": "9f6b5c16-d50a-4dde-83c9-08c025bc0b8a", 00:18:49.824 "is_configured": true, 00:18:49.824 "data_offset": 0, 00:18:49.824 "data_size": 65536 00:18:49.824 } 00:18:49.824 ] 00:18:49.824 }' 00:18:49.824 07:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:49.824 07:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.393 07:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:50.393 07:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:50.393 07:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.393 07:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:50.652 07:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:50.652 07:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:50.652 07:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:50.925 [2024-07-12 07:29:24.657755] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:50.925 07:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:50.925 07:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:50.925 07:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:50.925 07:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.184 07:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:51.184 07:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:51.184 07:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:51.442 [2024-07-12 07:29:25.190947] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:51.442 [2024-07-12 07:29:25.191275] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:18:51.442 07:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:51.442 07:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:51.442 07:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.442 07:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:51.710 07:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:51.710 07:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:51.710 07:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:51.710 07:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:51.710 07:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:51.710 07:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:51.983 BaseBdev2 00:18:51.983 07:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:51.983 07:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:18:51.983 07:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:51.983 07:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:51.983 07:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:51.983 07:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:51.983 07:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:52.242 07:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:52.501 [ 00:18:52.501 { 00:18:52.501 "name": "BaseBdev2", 00:18:52.501 "aliases": [ 00:18:52.501 "2e32d33f-5375-4547-9838-80e6deb693d0" 00:18:52.501 ], 00:18:52.501 "product_name": "Malloc disk", 00:18:52.501 "block_size": 512, 00:18:52.501 "num_blocks": 65536, 00:18:52.501 "uuid": "2e32d33f-5375-4547-9838-80e6deb693d0", 00:18:52.501 "assigned_rate_limits": { 00:18:52.501 "rw_ios_per_sec": 0, 00:18:52.501 "rw_mbytes_per_sec": 0, 00:18:52.501 "r_mbytes_per_sec": 
0, 00:18:52.501 "w_mbytes_per_sec": 0 00:18:52.501 }, 00:18:52.501 "claimed": false, 00:18:52.501 "zoned": false, 00:18:52.501 "supported_io_types": { 00:18:52.501 "read": true, 00:18:52.501 "write": true, 00:18:52.501 "unmap": true, 00:18:52.501 "write_zeroes": true, 00:18:52.501 "flush": true, 00:18:52.501 "reset": true, 00:18:52.501 "compare": false, 00:18:52.501 "compare_and_write": false, 00:18:52.501 "abort": true, 00:18:52.501 "nvme_admin": false, 00:18:52.501 "nvme_io": false 00:18:52.501 }, 00:18:52.501 "memory_domains": [ 00:18:52.501 { 00:18:52.501 "dma_device_id": "system", 00:18:52.501 "dma_device_type": 1 00:18:52.501 }, 00:18:52.501 { 00:18:52.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.501 "dma_device_type": 2 00:18:52.501 } 00:18:52.501 ], 00:18:52.501 "driver_specific": {} 00:18:52.501 } 00:18:52.501 ] 00:18:52.501 07:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:52.501 07:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:52.501 07:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:52.501 07:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:52.760 BaseBdev3 00:18:52.760 07:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:52.760 07:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:18:52.760 07:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:52.761 07:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:52.761 07:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:52.761 07:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:52.761 07:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:52.761 07:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:53.020 [ 00:18:53.020 { 00:18:53.020 "name": "BaseBdev3", 00:18:53.020 "aliases": [ 00:18:53.020 "0b5601b1-de33-4f28-b99b-84da7445f8a0" 00:18:53.020 ], 00:18:53.020 "product_name": "Malloc disk", 00:18:53.020 "block_size": 512, 00:18:53.020 "num_blocks": 65536, 00:18:53.020 "uuid": "0b5601b1-de33-4f28-b99b-84da7445f8a0", 00:18:53.020 "assigned_rate_limits": { 00:18:53.020 "rw_ios_per_sec": 0, 00:18:53.020 "rw_mbytes_per_sec": 0, 00:18:53.020 "r_mbytes_per_sec": 0, 00:18:53.020 "w_mbytes_per_sec": 0 00:18:53.020 }, 00:18:53.020 "claimed": false, 00:18:53.020 "zoned": false, 00:18:53.020 "supported_io_types": { 00:18:53.020 "read": true, 00:18:53.020 "write": true, 00:18:53.020 "unmap": true, 00:18:53.020 "write_zeroes": true, 00:18:53.020 "flush": true, 00:18:53.020 "reset": true, 00:18:53.020 "compare": false, 00:18:53.020 "compare_and_write": false, 00:18:53.020 "abort": true, 00:18:53.020 "nvme_admin": false, 00:18:53.020 "nvme_io": false 00:18:53.020 }, 00:18:53.020 "memory_domains": [ 00:18:53.020 { 00:18:53.020 "dma_device_id": "system", 00:18:53.020 "dma_device_type": 1 00:18:53.020 }, 
00:18:53.020 { 00:18:53.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.020 "dma_device_type": 2 00:18:53.020 } 00:18:53.020 ], 00:18:53.020 "driver_specific": {} 00:18:53.020 } 00:18:53.020 ] 00:18:53.020 07:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:53.020 07:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:53.020 07:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:53.020 07:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:53.280 [2024-07-12 07:29:27.019327] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:53.280 [2024-07-12 07:29:27.019686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:53.280 [2024-07-12 07:29:27.019813] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:53.280 [2024-07-12 07:29:27.022275] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:53.280 07:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:53.280 07:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:53.280 07:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:53.280 07:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:53.280 07:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:53.280 07:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:53.280 07:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:53.280 07:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:53.280 07:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:53.280 07:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:53.280 07:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.280 07:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.539 07:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:53.539 "name": "Existed_Raid", 00:18:53.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.539 "strip_size_kb": 64, 00:18:53.539 "state": "configuring", 00:18:53.539 "raid_level": "concat", 00:18:53.539 "superblock": false, 00:18:53.539 "num_base_bdevs": 3, 00:18:53.539 "num_base_bdevs_discovered": 2, 00:18:53.539 "num_base_bdevs_operational": 3, 00:18:53.539 "base_bdevs_list": [ 00:18:53.539 { 00:18:53.539 "name": "BaseBdev1", 00:18:53.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.539 "is_configured": false, 00:18:53.539 "data_offset": 0, 00:18:53.539 "data_size": 0 00:18:53.539 }, 00:18:53.539 { 00:18:53.539 "name": "BaseBdev2", 00:18:53.539 "uuid": 
"2e32d33f-5375-4547-9838-80e6deb693d0", 00:18:53.539 "is_configured": true, 00:18:53.539 "data_offset": 0, 00:18:53.539 "data_size": 65536 00:18:53.539 }, 00:18:53.539 { 00:18:53.539 "name": "BaseBdev3", 00:18:53.539 "uuid": "0b5601b1-de33-4f28-b99b-84da7445f8a0", 00:18:53.539 "is_configured": true, 00:18:53.539 "data_offset": 0, 00:18:53.539 "data_size": 65536 00:18:53.539 } 00:18:53.539 ] 00:18:53.539 }' 00:18:53.539 07:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:53.539 07:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.107 07:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:54.366 [2024-07-12 07:29:28.067515] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:54.366 07:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:54.366 07:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:54.366 07:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:54.366 07:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:54.366 07:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:54.366 07:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:54.366 07:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:54.366 07:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:54.366 07:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:54.366 07:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:54.366 07:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.366 07:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.625 07:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:54.625 "name": "Existed_Raid", 00:18:54.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.625 "strip_size_kb": 64, 00:18:54.625 "state": "configuring", 00:18:54.625 "raid_level": "concat", 00:18:54.625 "superblock": false, 00:18:54.625 "num_base_bdevs": 3, 00:18:54.625 "num_base_bdevs_discovered": 1, 00:18:54.625 "num_base_bdevs_operational": 3, 00:18:54.625 "base_bdevs_list": [ 00:18:54.625 { 00:18:54.625 "name": "BaseBdev1", 00:18:54.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.625 "is_configured": false, 00:18:54.625 "data_offset": 0, 00:18:54.625 "data_size": 0 00:18:54.625 }, 00:18:54.625 { 00:18:54.625 "name": null, 00:18:54.625 "uuid": "2e32d33f-5375-4547-9838-80e6deb693d0", 00:18:54.625 "is_configured": false, 00:18:54.625 "data_offset": 0, 00:18:54.625 "data_size": 65536 00:18:54.625 }, 00:18:54.625 { 00:18:54.625 "name": "BaseBdev3", 00:18:54.625 "uuid": "0b5601b1-de33-4f28-b99b-84da7445f8a0", 00:18:54.625 "is_configured": true, 00:18:54.625 "data_offset": 0, 00:18:54.625 "data_size": 65536 
00:18:54.625 } 00:18:54.625 ] 00:18:54.625 }' 00:18:54.625 07:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:54.625 07:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.192 07:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.192 07:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:55.452 07:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:55.452 07:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:55.452 [2024-07-12 07:29:29.269052] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:55.452 BaseBdev1 00:18:55.452 07:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:55.452 07:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:18:55.452 07:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:55.452 07:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:55.452 07:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:55.452 07:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:55.452 07:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:55.711 07:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:55.968 [ 00:18:55.968 { 00:18:55.968 "name": "BaseBdev1", 00:18:55.968 "aliases": [ 00:18:55.968 "28946791-fa8a-44db-9222-aacb318f89d5" 00:18:55.968 ], 00:18:55.968 "product_name": "Malloc disk", 00:18:55.968 "block_size": 512, 00:18:55.968 "num_blocks": 65536, 00:18:55.968 "uuid": "28946791-fa8a-44db-9222-aacb318f89d5", 00:18:55.968 "assigned_rate_limits": { 00:18:55.968 "rw_ios_per_sec": 0, 00:18:55.968 "rw_mbytes_per_sec": 0, 00:18:55.968 "r_mbytes_per_sec": 0, 00:18:55.968 "w_mbytes_per_sec": 0 00:18:55.968 }, 00:18:55.968 "claimed": true, 00:18:55.968 "claim_type": "exclusive_write", 00:18:55.968 "zoned": false, 00:18:55.968 "supported_io_types": { 00:18:55.968 "read": true, 00:18:55.968 "write": true, 00:18:55.968 "unmap": true, 00:18:55.968 "write_zeroes": true, 00:18:55.968 "flush": true, 00:18:55.968 "reset": true, 00:18:55.968 "compare": false, 00:18:55.968 "compare_and_write": false, 00:18:55.968 "abort": true, 00:18:55.968 "nvme_admin": false, 00:18:55.968 "nvme_io": false 00:18:55.968 }, 00:18:55.969 "memory_domains": [ 00:18:55.969 { 00:18:55.969 "dma_device_id": "system", 00:18:55.969 "dma_device_type": 1 00:18:55.969 }, 00:18:55.969 { 00:18:55.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.969 "dma_device_type": 2 00:18:55.969 } 00:18:55.969 ], 00:18:55.969 "driver_specific": {} 00:18:55.969 } 00:18:55.969 ] 00:18:55.969 07:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:55.969 
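The trace above ends with autotest_common.sh's waitforbdev helper returning 0 once the freshly created BaseBdev1 malloc disk is visible: it issues bdev_wait_for_examine and then bdev_get_bdevs with the 2000 ms timeout seen in the expansion. A minimal sketch of that pattern, assuming the same rpc.py client and /var/tmp/spdk-raid.sock socket used throughout this run (the helper name is illustrative, not the exact autotest_common.sh implementation):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Sketch of the waitforbdev pattern visible in the trace above.
    waitforbdev_sketch() {
        local bdev_name=$1
        local bdev_timeout=${2:-2000}   # ms; mirrors the default seen in the trace

        # Block until all registered examine callbacks have run, so a new
        # malloc bdev is fully claimed and visible before it is queried.
        $rpc bdev_wait_for_examine

        # bdev_get_bdevs -t waits up to the timeout for the bdev to appear;
        # a zero exit status means it exists.
        $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" >/dev/null
    }

    # Usage matching the trace: create a 32 MiB, 512-byte-block base bdev,
    # then wait for it to register.
    $rpc bdev_malloc_create 32 512 -b BaseBdev1
    waitforbdev_sketch BaseBdev1 2000
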
07:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:55.969 07:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:55.969 07:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:55.969 07:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:55.969 07:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:55.969 07:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:55.969 07:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:55.969 07:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:55.969 07:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:55.969 07:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:55.969 07:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.969 07:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.226 07:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:56.226 "name": "Existed_Raid", 00:18:56.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.226 "strip_size_kb": 64, 00:18:56.226 "state": "configuring", 00:18:56.226 "raid_level": "concat", 00:18:56.226 "superblock": false, 00:18:56.226 "num_base_bdevs": 3, 00:18:56.226 "num_base_bdevs_discovered": 2, 00:18:56.226 "num_base_bdevs_operational": 3, 00:18:56.226 "base_bdevs_list": [ 00:18:56.226 { 00:18:56.226 "name": "BaseBdev1", 00:18:56.226 "uuid": "28946791-fa8a-44db-9222-aacb318f89d5", 00:18:56.226 "is_configured": true, 00:18:56.226 "data_offset": 0, 00:18:56.226 "data_size": 65536 00:18:56.226 }, 00:18:56.226 { 00:18:56.226 "name": null, 00:18:56.226 "uuid": "2e32d33f-5375-4547-9838-80e6deb693d0", 00:18:56.226 "is_configured": false, 00:18:56.226 "data_offset": 0, 00:18:56.226 "data_size": 65536 00:18:56.226 }, 00:18:56.226 { 00:18:56.226 "name": "BaseBdev3", 00:18:56.226 "uuid": "0b5601b1-de33-4f28-b99b-84da7445f8a0", 00:18:56.226 "is_configured": true, 00:18:56.226 "data_offset": 0, 00:18:56.226 "data_size": 65536 00:18:56.226 } 00:18:56.226 ] 00:18:56.226 }' 00:18:56.226 07:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:56.226 07:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.791 07:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.791 07:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:57.049 07:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:57.049 07:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:57.308 [2024-07-12 07:29:30.939555] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:57.308 07:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:57.308 07:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:57.308 07:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:57.308 07:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:57.308 07:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:57.308 07:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:57.308 07:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:57.308 07:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:57.308 07:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:57.308 07:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:57.308 07:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.308 07:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.308 07:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:57.308 "name": "Existed_Raid", 00:18:57.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.308 "strip_size_kb": 64, 00:18:57.308 "state": "configuring", 00:18:57.308 "raid_level": "concat", 00:18:57.308 "superblock": false, 00:18:57.308 "num_base_bdevs": 3, 00:18:57.308 "num_base_bdevs_discovered": 1, 00:18:57.308 "num_base_bdevs_operational": 3, 00:18:57.308 "base_bdevs_list": [ 00:18:57.308 { 00:18:57.308 "name": "BaseBdev1", 00:18:57.308 "uuid": "28946791-fa8a-44db-9222-aacb318f89d5", 00:18:57.308 "is_configured": true, 00:18:57.308 "data_offset": 0, 00:18:57.308 "data_size": 65536 00:18:57.308 }, 00:18:57.308 { 00:18:57.308 "name": null, 00:18:57.308 "uuid": "2e32d33f-5375-4547-9838-80e6deb693d0", 00:18:57.308 "is_configured": false, 00:18:57.308 "data_offset": 0, 00:18:57.308 "data_size": 65536 00:18:57.308 }, 00:18:57.308 { 00:18:57.308 "name": null, 00:18:57.308 "uuid": "0b5601b1-de33-4f28-b99b-84da7445f8a0", 00:18:57.308 "is_configured": false, 00:18:57.308 "data_offset": 0, 00:18:57.308 "data_size": 65536 00:18:57.308 } 00:18:57.308 ] 00:18:57.308 }' 00:18:57.308 07:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:57.308 07:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.876 07:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.876 07:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:58.443 07:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:58.443 07:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev 
Existed_Raid BaseBdev3 00:18:58.443 [2024-07-12 07:29:32.265592] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:58.443 07:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:58.443 07:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:58.443 07:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:58.443 07:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:58.443 07:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:58.443 07:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:58.443 07:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:58.443 07:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:58.443 07:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:58.443 07:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:58.443 07:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.443 07:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.700 07:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:58.700 "name": "Existed_Raid", 00:18:58.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.700 "strip_size_kb": 64, 00:18:58.700 "state": "configuring", 00:18:58.700 "raid_level": "concat", 00:18:58.700 "superblock": false, 00:18:58.700 "num_base_bdevs": 3, 00:18:58.700 "num_base_bdevs_discovered": 2, 00:18:58.700 "num_base_bdevs_operational": 3, 00:18:58.700 "base_bdevs_list": [ 00:18:58.700 { 00:18:58.700 "name": "BaseBdev1", 00:18:58.700 "uuid": "28946791-fa8a-44db-9222-aacb318f89d5", 00:18:58.700 "is_configured": true, 00:18:58.700 "data_offset": 0, 00:18:58.700 "data_size": 65536 00:18:58.700 }, 00:18:58.700 { 00:18:58.700 "name": null, 00:18:58.700 "uuid": "2e32d33f-5375-4547-9838-80e6deb693d0", 00:18:58.700 "is_configured": false, 00:18:58.700 "data_offset": 0, 00:18:58.700 "data_size": 65536 00:18:58.700 }, 00:18:58.700 { 00:18:58.700 "name": "BaseBdev3", 00:18:58.700 "uuid": "0b5601b1-de33-4f28-b99b-84da7445f8a0", 00:18:58.700 "is_configured": true, 00:18:58.700 "data_offset": 0, 00:18:58.700 "data_size": 65536 00:18:58.700 } 00:18:58.700 ] 00:18:58.700 }' 00:18:58.700 07:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:58.700 07:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.633 07:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.633 07:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:59.633 07:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:59.633 07:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:59.892 [2024-07-12 07:29:33.697928] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:59.892 07:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:59.892 07:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:59.892 07:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:59.892 07:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:59.892 07:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:59.892 07:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:59.892 07:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:59.892 07:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:59.892 07:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:59.892 07:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:59.892 07:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.892 07:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.151 07:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:00.151 "name": "Existed_Raid", 00:19:00.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.151 "strip_size_kb": 64, 00:19:00.151 "state": "configuring", 00:19:00.151 "raid_level": "concat", 00:19:00.151 "superblock": false, 00:19:00.151 "num_base_bdevs": 3, 00:19:00.151 "num_base_bdevs_discovered": 1, 00:19:00.151 "num_base_bdevs_operational": 3, 00:19:00.151 "base_bdevs_list": [ 00:19:00.151 { 00:19:00.151 "name": null, 00:19:00.151 "uuid": "28946791-fa8a-44db-9222-aacb318f89d5", 00:19:00.151 "is_configured": false, 00:19:00.151 "data_offset": 0, 00:19:00.151 "data_size": 65536 00:19:00.151 }, 00:19:00.151 { 00:19:00.151 "name": null, 00:19:00.151 "uuid": "2e32d33f-5375-4547-9838-80e6deb693d0", 00:19:00.151 "is_configured": false, 00:19:00.151 "data_offset": 0, 00:19:00.151 "data_size": 65536 00:19:00.151 }, 00:19:00.151 { 00:19:00.151 "name": "BaseBdev3", 00:19:00.151 "uuid": "0b5601b1-de33-4f28-b99b-84da7445f8a0", 00:19:00.151 "is_configured": true, 00:19:00.151 "data_offset": 0, 00:19:00.151 "data_size": 65536 00:19:00.151 } 00:19:00.151 ] 00:19:00.151 }' 00:19:00.151 07:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:00.151 07:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.089 07:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.089 07:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:01.089 07:29:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:01.089 07:29:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:01.349 [2024-07-12 07:29:35.074153] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:01.349 07:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:01.349 07:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:01.349 07:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:01.349 07:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:01.349 07:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:01.349 07:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:01.349 07:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:01.349 07:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:01.349 07:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:01.349 07:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:01.349 07:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.349 07:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.608 07:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:01.608 "name": "Existed_Raid", 00:19:01.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.608 "strip_size_kb": 64, 00:19:01.608 "state": "configuring", 00:19:01.608 "raid_level": "concat", 00:19:01.608 "superblock": false, 00:19:01.608 "num_base_bdevs": 3, 00:19:01.608 "num_base_bdevs_discovered": 2, 00:19:01.608 "num_base_bdevs_operational": 3, 00:19:01.608 "base_bdevs_list": [ 00:19:01.608 { 00:19:01.608 "name": null, 00:19:01.608 "uuid": "28946791-fa8a-44db-9222-aacb318f89d5", 00:19:01.608 "is_configured": false, 00:19:01.608 "data_offset": 0, 00:19:01.608 "data_size": 65536 00:19:01.608 }, 00:19:01.608 { 00:19:01.608 "name": "BaseBdev2", 00:19:01.608 "uuid": "2e32d33f-5375-4547-9838-80e6deb693d0", 00:19:01.608 "is_configured": true, 00:19:01.608 "data_offset": 0, 00:19:01.608 "data_size": 65536 00:19:01.608 }, 00:19:01.608 { 00:19:01.608 "name": "BaseBdev3", 00:19:01.608 "uuid": "0b5601b1-de33-4f28-b99b-84da7445f8a0", 00:19:01.608 "is_configured": true, 00:19:01.608 "data_offset": 0, 00:19:01.608 "data_size": 65536 00:19:01.608 } 00:19:01.608 ] 00:19:01.608 }' 00:19:01.608 07:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:01.608 07:29:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.175 07:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.175 07:29:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:02.434 07:29:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:02.434 07:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:02.434 07:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.434 07:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 28946791-fa8a-44db-9222-aacb318f89d5 00:19:02.694 [2024-07-12 07:29:36.520025] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:02.694 [2024-07-12 07:29:36.520343] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:19:02.694 [2024-07-12 07:29:36.520384] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:02.694 [2024-07-12 07:29:36.520595] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:19:02.694 [2024-07-12 07:29:36.521040] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:19:02.694 [2024-07-12 07:29:36.521151] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:19:02.694 [2024-07-12 07:29:36.521470] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.694 NewBaseBdev 00:19:02.694 07:29:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:02.694 07:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:19:02.694 07:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:02.694 07:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:02.694 07:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:02.694 07:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:02.694 07:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:02.953 07:29:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:03.212 [ 00:19:03.212 { 00:19:03.212 "name": "NewBaseBdev", 00:19:03.212 "aliases": [ 00:19:03.212 "28946791-fa8a-44db-9222-aacb318f89d5" 00:19:03.212 ], 00:19:03.212 "product_name": "Malloc disk", 00:19:03.212 "block_size": 512, 00:19:03.212 "num_blocks": 65536, 00:19:03.212 "uuid": "28946791-fa8a-44db-9222-aacb318f89d5", 00:19:03.212 "assigned_rate_limits": { 00:19:03.212 "rw_ios_per_sec": 0, 00:19:03.212 "rw_mbytes_per_sec": 0, 00:19:03.212 "r_mbytes_per_sec": 0, 00:19:03.212 "w_mbytes_per_sec": 0 00:19:03.212 }, 00:19:03.212 "claimed": true, 00:19:03.212 "claim_type": "exclusive_write", 00:19:03.212 "zoned": false, 00:19:03.212 "supported_io_types": { 00:19:03.212 "read": true, 00:19:03.212 "write": true, 00:19:03.212 "unmap": true, 00:19:03.212 "write_zeroes": true, 00:19:03.212 "flush": true, 00:19:03.212 "reset": true, 00:19:03.212 "compare": false, 00:19:03.212 "compare_and_write": false, 
00:19:03.212 "abort": true, 00:19:03.212 "nvme_admin": false, 00:19:03.212 "nvme_io": false 00:19:03.212 }, 00:19:03.212 "memory_domains": [ 00:19:03.212 { 00:19:03.212 "dma_device_id": "system", 00:19:03.212 "dma_device_type": 1 00:19:03.212 }, 00:19:03.212 { 00:19:03.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.212 "dma_device_type": 2 00:19:03.212 } 00:19:03.212 ], 00:19:03.212 "driver_specific": {} 00:19:03.212 } 00:19:03.212 ] 00:19:03.212 07:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:03.212 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:03.212 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:03.212 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:03.212 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:03.212 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:03.212 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:03.212 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:03.212 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:03.212 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:03.212 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:03.212 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.212 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.472 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:03.472 "name": "Existed_Raid", 00:19:03.472 "uuid": "44bbab93-513d-4529-ba03-d11c35d45ee8", 00:19:03.472 "strip_size_kb": 64, 00:19:03.472 "state": "online", 00:19:03.472 "raid_level": "concat", 00:19:03.472 "superblock": false, 00:19:03.472 "num_base_bdevs": 3, 00:19:03.472 "num_base_bdevs_discovered": 3, 00:19:03.472 "num_base_bdevs_operational": 3, 00:19:03.472 "base_bdevs_list": [ 00:19:03.472 { 00:19:03.472 "name": "NewBaseBdev", 00:19:03.472 "uuid": "28946791-fa8a-44db-9222-aacb318f89d5", 00:19:03.472 "is_configured": true, 00:19:03.472 "data_offset": 0, 00:19:03.472 "data_size": 65536 00:19:03.472 }, 00:19:03.472 { 00:19:03.472 "name": "BaseBdev2", 00:19:03.472 "uuid": "2e32d33f-5375-4547-9838-80e6deb693d0", 00:19:03.472 "is_configured": true, 00:19:03.472 "data_offset": 0, 00:19:03.472 "data_size": 65536 00:19:03.472 }, 00:19:03.472 { 00:19:03.472 "name": "BaseBdev3", 00:19:03.472 "uuid": "0b5601b1-de33-4f28-b99b-84da7445f8a0", 00:19:03.472 "is_configured": true, 00:19:03.472 "data_offset": 0, 00:19:03.472 "data_size": 65536 00:19:03.472 } 00:19:03.472 ] 00:19:03.472 }' 00:19:03.472 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:03.472 07:29:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.039 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 
00:19:04.039 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:04.039 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:04.039 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:04.039 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:04.039 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:04.039 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:04.039 07:29:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:04.296 [2024-07-12 07:29:37.988675] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.296 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:04.296 "name": "Existed_Raid", 00:19:04.296 "aliases": [ 00:19:04.296 "44bbab93-513d-4529-ba03-d11c35d45ee8" 00:19:04.296 ], 00:19:04.296 "product_name": "Raid Volume", 00:19:04.296 "block_size": 512, 00:19:04.296 "num_blocks": 196608, 00:19:04.296 "uuid": "44bbab93-513d-4529-ba03-d11c35d45ee8", 00:19:04.296 "assigned_rate_limits": { 00:19:04.296 "rw_ios_per_sec": 0, 00:19:04.296 "rw_mbytes_per_sec": 0, 00:19:04.296 "r_mbytes_per_sec": 0, 00:19:04.296 "w_mbytes_per_sec": 0 00:19:04.296 }, 00:19:04.296 "claimed": false, 00:19:04.296 "zoned": false, 00:19:04.296 "supported_io_types": { 00:19:04.296 "read": true, 00:19:04.296 "write": true, 00:19:04.296 "unmap": true, 00:19:04.296 "write_zeroes": true, 00:19:04.296 "flush": true, 00:19:04.296 "reset": true, 00:19:04.296 "compare": false, 00:19:04.296 "compare_and_write": false, 00:19:04.296 "abort": false, 00:19:04.296 "nvme_admin": false, 00:19:04.296 "nvme_io": false 00:19:04.296 }, 00:19:04.296 "memory_domains": [ 00:19:04.296 { 00:19:04.296 "dma_device_id": "system", 00:19:04.296 "dma_device_type": 1 00:19:04.296 }, 00:19:04.296 { 00:19:04.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.296 "dma_device_type": 2 00:19:04.296 }, 00:19:04.296 { 00:19:04.296 "dma_device_id": "system", 00:19:04.296 "dma_device_type": 1 00:19:04.296 }, 00:19:04.296 { 00:19:04.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.296 "dma_device_type": 2 00:19:04.296 }, 00:19:04.296 { 00:19:04.296 "dma_device_id": "system", 00:19:04.296 "dma_device_type": 1 00:19:04.296 }, 00:19:04.296 { 00:19:04.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.296 "dma_device_type": 2 00:19:04.296 } 00:19:04.296 ], 00:19:04.296 "driver_specific": { 00:19:04.296 "raid": { 00:19:04.296 "uuid": "44bbab93-513d-4529-ba03-d11c35d45ee8", 00:19:04.296 "strip_size_kb": 64, 00:19:04.296 "state": "online", 00:19:04.296 "raid_level": "concat", 00:19:04.296 "superblock": false, 00:19:04.296 "num_base_bdevs": 3, 00:19:04.296 "num_base_bdevs_discovered": 3, 00:19:04.296 "num_base_bdevs_operational": 3, 00:19:04.296 "base_bdevs_list": [ 00:19:04.296 { 00:19:04.296 "name": "NewBaseBdev", 00:19:04.296 "uuid": "28946791-fa8a-44db-9222-aacb318f89d5", 00:19:04.296 "is_configured": true, 00:19:04.296 "data_offset": 0, 00:19:04.296 "data_size": 65536 00:19:04.296 }, 00:19:04.296 { 00:19:04.296 "name": "BaseBdev2", 00:19:04.296 "uuid": "2e32d33f-5375-4547-9838-80e6deb693d0", 00:19:04.296 "is_configured": true, 00:19:04.296 "data_offset": 0, 
00:19:04.296 "data_size": 65536 00:19:04.296 }, 00:19:04.296 { 00:19:04.296 "name": "BaseBdev3", 00:19:04.296 "uuid": "0b5601b1-de33-4f28-b99b-84da7445f8a0", 00:19:04.296 "is_configured": true, 00:19:04.296 "data_offset": 0, 00:19:04.296 "data_size": 65536 00:19:04.296 } 00:19:04.296 ] 00:19:04.296 } 00:19:04.296 } 00:19:04.296 }' 00:19:04.296 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:04.296 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:04.296 BaseBdev2 00:19:04.296 BaseBdev3' 00:19:04.296 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:04.296 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:04.296 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:04.554 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:04.554 "name": "NewBaseBdev", 00:19:04.554 "aliases": [ 00:19:04.554 "28946791-fa8a-44db-9222-aacb318f89d5" 00:19:04.554 ], 00:19:04.554 "product_name": "Malloc disk", 00:19:04.554 "block_size": 512, 00:19:04.554 "num_blocks": 65536, 00:19:04.554 "uuid": "28946791-fa8a-44db-9222-aacb318f89d5", 00:19:04.554 "assigned_rate_limits": { 00:19:04.554 "rw_ios_per_sec": 0, 00:19:04.554 "rw_mbytes_per_sec": 0, 00:19:04.554 "r_mbytes_per_sec": 0, 00:19:04.554 "w_mbytes_per_sec": 0 00:19:04.554 }, 00:19:04.554 "claimed": true, 00:19:04.554 "claim_type": "exclusive_write", 00:19:04.554 "zoned": false, 00:19:04.554 "supported_io_types": { 00:19:04.554 "read": true, 00:19:04.554 "write": true, 00:19:04.554 "unmap": true, 00:19:04.554 "write_zeroes": true, 00:19:04.554 "flush": true, 00:19:04.554 "reset": true, 00:19:04.554 "compare": false, 00:19:04.554 "compare_and_write": false, 00:19:04.554 "abort": true, 00:19:04.554 "nvme_admin": false, 00:19:04.554 "nvme_io": false 00:19:04.554 }, 00:19:04.554 "memory_domains": [ 00:19:04.554 { 00:19:04.554 "dma_device_id": "system", 00:19:04.554 "dma_device_type": 1 00:19:04.554 }, 00:19:04.554 { 00:19:04.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.554 "dma_device_type": 2 00:19:04.554 } 00:19:04.554 ], 00:19:04.554 "driver_specific": {} 00:19:04.554 }' 00:19:04.554 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:04.554 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:04.554 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:04.554 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:04.554 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:04.812 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:04.812 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:04.812 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:04.812 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:04.812 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:04.812 07:29:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:04.812 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:04.812 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:04.812 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:04.812 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:05.070 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:05.070 "name": "BaseBdev2", 00:19:05.070 "aliases": [ 00:19:05.070 "2e32d33f-5375-4547-9838-80e6deb693d0" 00:19:05.070 ], 00:19:05.070 "product_name": "Malloc disk", 00:19:05.070 "block_size": 512, 00:19:05.070 "num_blocks": 65536, 00:19:05.070 "uuid": "2e32d33f-5375-4547-9838-80e6deb693d0", 00:19:05.070 "assigned_rate_limits": { 00:19:05.070 "rw_ios_per_sec": 0, 00:19:05.070 "rw_mbytes_per_sec": 0, 00:19:05.070 "r_mbytes_per_sec": 0, 00:19:05.070 "w_mbytes_per_sec": 0 00:19:05.070 }, 00:19:05.070 "claimed": true, 00:19:05.070 "claim_type": "exclusive_write", 00:19:05.070 "zoned": false, 00:19:05.070 "supported_io_types": { 00:19:05.070 "read": true, 00:19:05.070 "write": true, 00:19:05.070 "unmap": true, 00:19:05.070 "write_zeroes": true, 00:19:05.070 "flush": true, 00:19:05.070 "reset": true, 00:19:05.070 "compare": false, 00:19:05.070 "compare_and_write": false, 00:19:05.070 "abort": true, 00:19:05.070 "nvme_admin": false, 00:19:05.070 "nvme_io": false 00:19:05.070 }, 00:19:05.070 "memory_domains": [ 00:19:05.070 { 00:19:05.070 "dma_device_id": "system", 00:19:05.070 "dma_device_type": 1 00:19:05.070 }, 00:19:05.070 { 00:19:05.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.070 "dma_device_type": 2 00:19:05.070 } 00:19:05.070 ], 00:19:05.070 "driver_specific": {} 00:19:05.070 }' 00:19:05.070 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.070 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.070 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:05.070 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.329 07:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.329 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:05.329 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.329 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.329 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:05.329 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:05.329 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:05.329 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:05.329 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:05.329 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:05.329 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:05.587 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:05.587 "name": "BaseBdev3", 00:19:05.587 "aliases": [ 00:19:05.587 "0b5601b1-de33-4f28-b99b-84da7445f8a0" 00:19:05.587 ], 00:19:05.587 "product_name": "Malloc disk", 00:19:05.587 "block_size": 512, 00:19:05.587 "num_blocks": 65536, 00:19:05.587 "uuid": "0b5601b1-de33-4f28-b99b-84da7445f8a0", 00:19:05.587 "assigned_rate_limits": { 00:19:05.587 "rw_ios_per_sec": 0, 00:19:05.587 "rw_mbytes_per_sec": 0, 00:19:05.587 "r_mbytes_per_sec": 0, 00:19:05.587 "w_mbytes_per_sec": 0 00:19:05.587 }, 00:19:05.587 "claimed": true, 00:19:05.587 "claim_type": "exclusive_write", 00:19:05.587 "zoned": false, 00:19:05.587 "supported_io_types": { 00:19:05.587 "read": true, 00:19:05.587 "write": true, 00:19:05.587 "unmap": true, 00:19:05.587 "write_zeroes": true, 00:19:05.587 "flush": true, 00:19:05.587 "reset": true, 00:19:05.587 "compare": false, 00:19:05.587 "compare_and_write": false, 00:19:05.587 "abort": true, 00:19:05.587 "nvme_admin": false, 00:19:05.587 "nvme_io": false 00:19:05.587 }, 00:19:05.587 "memory_domains": [ 00:19:05.587 { 00:19:05.587 "dma_device_id": "system", 00:19:05.587 "dma_device_type": 1 00:19:05.587 }, 00:19:05.587 { 00:19:05.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.588 "dma_device_type": 2 00:19:05.588 } 00:19:05.588 ], 00:19:05.588 "driver_specific": {} 00:19:05.588 }' 00:19:05.588 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.588 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.588 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:05.588 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.847 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.847 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:05.847 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.847 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.847 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:05.847 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:05.847 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:05.847 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:05.847 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:06.105 [2024-07-12 07:29:39.976835] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:06.105 [2024-07-12 07:29:39.977120] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:06.105 [2024-07-12 07:29:39.977367] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:06.105 [2024-07-12 07:29:39.977538] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:06.105 [2024-07-12 07:29:39.977618] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:19:06.364 07:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 138346 00:19:06.364 07:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 138346 ']' 00:19:06.364 07:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 138346 00:19:06.364 07:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:19:06.364 07:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:06.364 07:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 138346 00:19:06.364 07:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:06.364 07:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:06.364 07:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 138346' 00:19:06.364 killing process with pid 138346 00:19:06.364 07:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 138346 00:19:06.364 [2024-07-12 07:29:40.029297] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:06.364 07:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 138346 00:19:06.364 [2024-07-12 07:29:40.087945] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:06.622 07:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:19:06.622 00:19:06.622 real 0m27.621s 00:19:06.622 user 0m50.448s 00:19:06.622 sys 0m4.825s 00:19:06.622 07:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:06.622 07:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.622 ************************************ 00:19:06.622 END TEST raid_state_function_test 00:19:06.622 ************************************ 00:19:06.881 07:29:40 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:19:06.881 07:29:40 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:19:06.881 07:29:40 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:06.881 07:29:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:06.881 ************************************ 00:19:06.881 START TEST raid_state_function_test_sb 00:19:06.881 ************************************ 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 3 true 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=139301 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 139301' 00:19:06.881 Process raid pid: 139301 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 139301 /var/tmp/spdk-raid.sock 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 139301 ']' 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:06.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
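The prologue above follows the usual SPDK autotest pattern: launch a bdev_svc application on a private RPC socket, record its pid in raid_pid, and block in waitforlisten until the socket answers. A minimal sketch of that pattern, assuming a simple rpc_get_methods polling loop (the real waitforlisten helper in autotest_common.sh does more bookkeeping than shown here):

    # Start bdev_svc on a dedicated RPC socket, as in the log above.
    rootdir=/home/vagrant/spdk_repo/spdk
    rpc_sock=/var/tmp/spdk-raid.sock
    "$rootdir/test/app/bdev_svc/bdev_svc" -r "$rpc_sock" -i 0 -L bdev_raid &
    raid_pid=$!
    # Illustrative wait loop (an assumption, not the helper's exact code):
    # poll until the app answers an RPC, capped at roughly 10 seconds.
    for _ in $(seq 1 100); do
        if "$rootdir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done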
00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:06.881 07:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.881 [2024-07-12 07:29:40.654983] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:19:06.881 [2024-07-12 07:29:40.655546] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.141 [2024-07-12 07:29:40.813826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.141 [2024-07-12 07:29:40.909735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.141 [2024-07-12 07:29:40.996810] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:07.709 07:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:07.709 07:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:19:07.709 07:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:07.968 [2024-07-12 07:29:41.735208] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:07.968 [2024-07-12 07:29:41.735585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:07.968 [2024-07-12 07:29:41.735695] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:07.968 [2024-07-12 07:29:41.735752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:07.968 [2024-07-12 07:29:41.735902] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:07.968 [2024-07-12 07:29:41.735988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:07.968 07:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:07.968 07:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:07.968 07:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:07.968 07:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:07.968 07:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:07.968 07:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:07.968 07:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:07.968 07:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:07.968 07:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:07.968 07:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:07.968 07:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:07.968 07:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.227 07:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:08.227 "name": "Existed_Raid", 00:19:08.227 "uuid": "69930d6b-1cb9-442b-bb88-6b07ebeefa20", 00:19:08.227 "strip_size_kb": 64, 00:19:08.227 "state": "configuring", 00:19:08.227 "raid_level": "concat", 00:19:08.227 "superblock": true, 00:19:08.227 "num_base_bdevs": 3, 00:19:08.227 "num_base_bdevs_discovered": 0, 00:19:08.227 "num_base_bdevs_operational": 3, 00:19:08.227 "base_bdevs_list": [ 00:19:08.227 { 00:19:08.227 "name": "BaseBdev1", 00:19:08.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.227 "is_configured": false, 00:19:08.227 "data_offset": 0, 00:19:08.227 "data_size": 0 00:19:08.227 }, 00:19:08.227 { 00:19:08.227 "name": "BaseBdev2", 00:19:08.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.227 "is_configured": false, 00:19:08.227 "data_offset": 0, 00:19:08.227 "data_size": 0 00:19:08.227 }, 00:19:08.227 { 00:19:08.227 "name": "BaseBdev3", 00:19:08.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.227 "is_configured": false, 00:19:08.227 "data_offset": 0, 00:19:08.227 "data_size": 0 00:19:08.227 } 00:19:08.227 ] 00:19:08.227 }' 00:19:08.227 07:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:08.227 07:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.795 07:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:09.054 [2024-07-12 07:29:42.915265] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:09.054 [2024-07-12 07:29:42.915529] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:19:09.054 07:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:09.325 [2024-07-12 07:29:43.199337] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:09.325 [2024-07-12 07:29:43.199671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:09.325 [2024-07-12 07:29:43.199826] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:09.325 [2024-07-12 07:29:43.199948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:09.325 [2024-07-12 07:29:43.200035] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:09.325 [2024-07-12 07:29:43.200096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:09.590 07:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:09.848 [2024-07-12 07:29:43.479819] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:09.848 BaseBdev1 00:19:09.848 07:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:09.848 07:29:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:19:09.848 07:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:09.848 07:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:09.848 07:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:09.848 07:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:09.848 07:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:09.848 07:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:10.107 [ 00:19:10.107 { 00:19:10.107 "name": "BaseBdev1", 00:19:10.107 "aliases": [ 00:19:10.107 "8701103e-1265-4b97-bc66-56861ef6db86" 00:19:10.107 ], 00:19:10.107 "product_name": "Malloc disk", 00:19:10.107 "block_size": 512, 00:19:10.107 "num_blocks": 65536, 00:19:10.107 "uuid": "8701103e-1265-4b97-bc66-56861ef6db86", 00:19:10.107 "assigned_rate_limits": { 00:19:10.107 "rw_ios_per_sec": 0, 00:19:10.107 "rw_mbytes_per_sec": 0, 00:19:10.107 "r_mbytes_per_sec": 0, 00:19:10.107 "w_mbytes_per_sec": 0 00:19:10.107 }, 00:19:10.107 "claimed": true, 00:19:10.107 "claim_type": "exclusive_write", 00:19:10.107 "zoned": false, 00:19:10.107 "supported_io_types": { 00:19:10.107 "read": true, 00:19:10.107 "write": true, 00:19:10.107 "unmap": true, 00:19:10.107 "write_zeroes": true, 00:19:10.107 "flush": true, 00:19:10.107 "reset": true, 00:19:10.107 "compare": false, 00:19:10.107 "compare_and_write": false, 00:19:10.107 "abort": true, 00:19:10.107 "nvme_admin": false, 00:19:10.107 "nvme_io": false 00:19:10.107 }, 00:19:10.107 "memory_domains": [ 00:19:10.107 { 00:19:10.107 "dma_device_id": "system", 00:19:10.107 "dma_device_type": 1 00:19:10.107 }, 00:19:10.107 { 00:19:10.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.107 "dma_device_type": 2 00:19:10.107 } 00:19:10.107 ], 00:19:10.107 "driver_specific": {} 00:19:10.107 } 00:19:10.107 ] 00:19:10.107 07:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:10.107 07:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:10.107 07:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:10.107 07:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:10.107 07:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:10.107 07:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:10.107 07:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:10.107 07:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:10.107 07:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:10.107 07:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:10.107 07:29:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:10.107 07:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.107 07:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.366 07:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:10.366 "name": "Existed_Raid", 00:19:10.366 "uuid": "de536b8f-877e-4ef8-9c79-d33f5a3464cd", 00:19:10.366 "strip_size_kb": 64, 00:19:10.366 "state": "configuring", 00:19:10.366 "raid_level": "concat", 00:19:10.366 "superblock": true, 00:19:10.366 "num_base_bdevs": 3, 00:19:10.366 "num_base_bdevs_discovered": 1, 00:19:10.366 "num_base_bdevs_operational": 3, 00:19:10.366 "base_bdevs_list": [ 00:19:10.366 { 00:19:10.366 "name": "BaseBdev1", 00:19:10.366 "uuid": "8701103e-1265-4b97-bc66-56861ef6db86", 00:19:10.366 "is_configured": true, 00:19:10.366 "data_offset": 2048, 00:19:10.366 "data_size": 63488 00:19:10.366 }, 00:19:10.366 { 00:19:10.366 "name": "BaseBdev2", 00:19:10.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.366 "is_configured": false, 00:19:10.366 "data_offset": 0, 00:19:10.366 "data_size": 0 00:19:10.366 }, 00:19:10.366 { 00:19:10.366 "name": "BaseBdev3", 00:19:10.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.366 "is_configured": false, 00:19:10.366 "data_offset": 0, 00:19:10.366 "data_size": 0 00:19:10.366 } 00:19:10.366 ] 00:19:10.366 }' 00:19:10.366 07:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:10.366 07:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.973 07:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:11.231 [2024-07-12 07:29:44.900236] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:11.231 [2024-07-12 07:29:44.900522] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:19:11.231 07:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:11.231 [2024-07-12 07:29:45.096350] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:11.231 [2024-07-12 07:29:45.099057] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:11.231 [2024-07-12 07:29:45.099267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:11.231 [2024-07-12 07:29:45.099372] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:11.231 [2024-07-12 07:29:45.099435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:11.491 07:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:11.491 07:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:11.491 07:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
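Each verify_raid_bdev_state call above reduces to one bdev_raid_get_bdevs RPC plus a handful of jq field checks against the expected name, state, level, strip size, and base-bdev counts. A condensed sketch of the check driven by 'verify_raid_bdev_state Existed_Raid configuring concat 64 3', built only from commands visible in the log (the real helper also validates base_bdevs_list and num_base_bdevs_discovered):

    # Fetch the raid bdev record and assert the expected fields.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r .state <<< "$info") == configuring ]]
    [[ $(jq -r .raid_level <<< "$info") == concat ]]
    [[ $(jq -r .strip_size_kb <<< "$info") == 64 ]]
    [[ $(jq -r .num_base_bdevs_operational <<< "$info") == 3 ]]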
00:19:11.491 07:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:11.491 07:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:11.491 07:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:11.491 07:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:11.491 07:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:11.491 07:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:11.491 07:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:11.491 07:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:11.491 07:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:11.491 07:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.491 07:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.749 07:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:11.749 "name": "Existed_Raid", 00:19:11.749 "uuid": "d76b5c0a-4149-43c2-b53b-713055be591a", 00:19:11.749 "strip_size_kb": 64, 00:19:11.749 "state": "configuring", 00:19:11.749 "raid_level": "concat", 00:19:11.749 "superblock": true, 00:19:11.749 "num_base_bdevs": 3, 00:19:11.749 "num_base_bdevs_discovered": 1, 00:19:11.749 "num_base_bdevs_operational": 3, 00:19:11.749 "base_bdevs_list": [ 00:19:11.749 { 00:19:11.749 "name": "BaseBdev1", 00:19:11.749 "uuid": "8701103e-1265-4b97-bc66-56861ef6db86", 00:19:11.749 "is_configured": true, 00:19:11.749 "data_offset": 2048, 00:19:11.749 "data_size": 63488 00:19:11.749 }, 00:19:11.749 { 00:19:11.749 "name": "BaseBdev2", 00:19:11.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.749 "is_configured": false, 00:19:11.749 "data_offset": 0, 00:19:11.749 "data_size": 0 00:19:11.749 }, 00:19:11.749 { 00:19:11.749 "name": "BaseBdev3", 00:19:11.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.750 "is_configured": false, 00:19:11.750 "data_offset": 0, 00:19:11.750 "data_size": 0 00:19:11.750 } 00:19:11.750 ] 00:19:11.750 }' 00:19:11.750 07:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:11.750 07:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.316 07:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:12.575 [2024-07-12 07:29:46.270810] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:12.575 BaseBdev2 00:19:12.575 07:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:12.575 07:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:19:12.575 07:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:12.575 07:29:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@897 -- # local i 00:19:12.575 07:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:12.575 07:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:12.575 07:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:12.834 07:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:12.834 [ 00:19:12.834 { 00:19:12.834 "name": "BaseBdev2", 00:19:12.834 "aliases": [ 00:19:12.834 "9977f684-8c75-4250-9d28-22035a211d4b" 00:19:12.834 ], 00:19:12.834 "product_name": "Malloc disk", 00:19:12.834 "block_size": 512, 00:19:12.834 "num_blocks": 65536, 00:19:12.834 "uuid": "9977f684-8c75-4250-9d28-22035a211d4b", 00:19:12.834 "assigned_rate_limits": { 00:19:12.834 "rw_ios_per_sec": 0, 00:19:12.834 "rw_mbytes_per_sec": 0, 00:19:12.834 "r_mbytes_per_sec": 0, 00:19:12.834 "w_mbytes_per_sec": 0 00:19:12.834 }, 00:19:12.834 "claimed": true, 00:19:12.834 "claim_type": "exclusive_write", 00:19:12.834 "zoned": false, 00:19:12.834 "supported_io_types": { 00:19:12.834 "read": true, 00:19:12.834 "write": true, 00:19:12.834 "unmap": true, 00:19:12.834 "write_zeroes": true, 00:19:12.834 "flush": true, 00:19:12.834 "reset": true, 00:19:12.834 "compare": false, 00:19:12.834 "compare_and_write": false, 00:19:12.834 "abort": true, 00:19:12.834 "nvme_admin": false, 00:19:12.834 "nvme_io": false 00:19:12.834 }, 00:19:12.834 "memory_domains": [ 00:19:12.834 { 00:19:12.834 "dma_device_id": "system", 00:19:12.834 "dma_device_type": 1 00:19:12.834 }, 00:19:12.834 { 00:19:12.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.834 "dma_device_type": 2 00:19:12.834 } 00:19:12.834 ], 00:19:12.834 "driver_specific": {} 00:19:12.834 } 00:19:12.834 ] 00:19:12.834 07:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:12.834 07:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:12.834 07:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:12.834 07:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:12.834 07:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:12.834 07:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:12.834 07:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:12.834 07:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:12.834 07:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:12.834 07:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:12.834 07:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:12.834 07:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:12.834 07:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:12.834 
07:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.834 07:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.092 07:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:13.092 "name": "Existed_Raid", 00:19:13.092 "uuid": "d76b5c0a-4149-43c2-b53b-713055be591a", 00:19:13.092 "strip_size_kb": 64, 00:19:13.092 "state": "configuring", 00:19:13.092 "raid_level": "concat", 00:19:13.092 "superblock": true, 00:19:13.092 "num_base_bdevs": 3, 00:19:13.092 "num_base_bdevs_discovered": 2, 00:19:13.092 "num_base_bdevs_operational": 3, 00:19:13.092 "base_bdevs_list": [ 00:19:13.092 { 00:19:13.092 "name": "BaseBdev1", 00:19:13.092 "uuid": "8701103e-1265-4b97-bc66-56861ef6db86", 00:19:13.092 "is_configured": true, 00:19:13.092 "data_offset": 2048, 00:19:13.092 "data_size": 63488 00:19:13.092 }, 00:19:13.092 { 00:19:13.092 "name": "BaseBdev2", 00:19:13.092 "uuid": "9977f684-8c75-4250-9d28-22035a211d4b", 00:19:13.092 "is_configured": true, 00:19:13.092 "data_offset": 2048, 00:19:13.092 "data_size": 63488 00:19:13.092 }, 00:19:13.092 { 00:19:13.092 "name": "BaseBdev3", 00:19:13.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.092 "is_configured": false, 00:19:13.092 "data_offset": 0, 00:19:13.092 "data_size": 0 00:19:13.092 } 00:19:13.092 ] 00:19:13.092 }' 00:19:13.092 07:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:13.092 07:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.658 07:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:13.916 [2024-07-12 07:29:47.612572] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:13.916 [2024-07-12 07:29:47.613070] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:19:13.916 [2024-07-12 07:29:47.613196] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:13.916 [2024-07-12 07:29:47.613440] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:19:13.916 [2024-07-12 07:29:47.613900] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:19:13.916 [2024-07-12 07:29:47.614020] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:19:13.916 [2024-07-12 07:29:47.614278] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.916 BaseBdev3 00:19:13.916 07:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:13.916 07:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:19:13.916 07:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:13.916 07:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:13.916 07:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:13.916 07:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 
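The waitforbdev sequence traced above (local bdev_name, bdev_timeout=2000, bdev_wait_for_examine, then bdev_get_bdevs -b ... -t 2000) boils down to two RPCs: let all examine callbacks settle, then ask the target for the named bdev with a timeout. A rough sketch under those assumptions (error handling and retries trimmed relative to the real helper):

    # Approximate shape of the waitforbdev helper exercised in the log.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=${2:-2000}   # milliseconds, as seen in '-t 2000'
        # Block until all bdev examine callbacks have completed...
        $rpc bdev_wait_for_examine
        # ...then wait up to bdev_timeout ms for the named bdev to appear.
        $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
    }
    waitforbdev BaseBdev2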
00:19:13.917 07:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:14.176 07:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:14.176 [ 00:19:14.176 { 00:19:14.176 "name": "BaseBdev3", 00:19:14.176 "aliases": [ 00:19:14.176 "1ab3df69-932d-4533-9173-2bc075beb9b4" 00:19:14.176 ], 00:19:14.176 "product_name": "Malloc disk", 00:19:14.176 "block_size": 512, 00:19:14.176 "num_blocks": 65536, 00:19:14.176 "uuid": "1ab3df69-932d-4533-9173-2bc075beb9b4", 00:19:14.176 "assigned_rate_limits": { 00:19:14.176 "rw_ios_per_sec": 0, 00:19:14.176 "rw_mbytes_per_sec": 0, 00:19:14.176 "r_mbytes_per_sec": 0, 00:19:14.176 "w_mbytes_per_sec": 0 00:19:14.176 }, 00:19:14.176 "claimed": true, 00:19:14.176 "claim_type": "exclusive_write", 00:19:14.176 "zoned": false, 00:19:14.176 "supported_io_types": { 00:19:14.176 "read": true, 00:19:14.176 "write": true, 00:19:14.176 "unmap": true, 00:19:14.176 "write_zeroes": true, 00:19:14.176 "flush": true, 00:19:14.176 "reset": true, 00:19:14.176 "compare": false, 00:19:14.176 "compare_and_write": false, 00:19:14.176 "abort": true, 00:19:14.176 "nvme_admin": false, 00:19:14.176 "nvme_io": false 00:19:14.176 }, 00:19:14.176 "memory_domains": [ 00:19:14.176 { 00:19:14.176 "dma_device_id": "system", 00:19:14.176 "dma_device_type": 1 00:19:14.176 }, 00:19:14.176 { 00:19:14.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.176 "dma_device_type": 2 00:19:14.176 } 00:19:14.176 ], 00:19:14.176 "driver_specific": {} 00:19:14.176 } 00:19:14.176 ] 00:19:14.435 07:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:14.435 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:14.435 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:14.435 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:14.435 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:14.435 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:14.435 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:14.435 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:14.435 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:14.435 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:14.435 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:14.435 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:14.435 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:14.435 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.435 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:19:14.695 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:14.695 "name": "Existed_Raid", 00:19:14.695 "uuid": "d76b5c0a-4149-43c2-b53b-713055be591a", 00:19:14.695 "strip_size_kb": 64, 00:19:14.695 "state": "online", 00:19:14.695 "raid_level": "concat", 00:19:14.695 "superblock": true, 00:19:14.695 "num_base_bdevs": 3, 00:19:14.695 "num_base_bdevs_discovered": 3, 00:19:14.695 "num_base_bdevs_operational": 3, 00:19:14.695 "base_bdevs_list": [ 00:19:14.695 { 00:19:14.695 "name": "BaseBdev1", 00:19:14.695 "uuid": "8701103e-1265-4b97-bc66-56861ef6db86", 00:19:14.695 "is_configured": true, 00:19:14.695 "data_offset": 2048, 00:19:14.695 "data_size": 63488 00:19:14.695 }, 00:19:14.695 { 00:19:14.695 "name": "BaseBdev2", 00:19:14.695 "uuid": "9977f684-8c75-4250-9d28-22035a211d4b", 00:19:14.695 "is_configured": true, 00:19:14.695 "data_offset": 2048, 00:19:14.695 "data_size": 63488 00:19:14.695 }, 00:19:14.695 { 00:19:14.695 "name": "BaseBdev3", 00:19:14.695 "uuid": "1ab3df69-932d-4533-9173-2bc075beb9b4", 00:19:14.695 "is_configured": true, 00:19:14.695 "data_offset": 2048, 00:19:14.695 "data_size": 63488 00:19:14.695 } 00:19:14.695 ] 00:19:14.695 }' 00:19:14.695 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:14.695 07:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.264 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:15.264 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:15.264 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:15.264 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:15.264 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:15.264 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:15.264 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:15.264 07:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:15.264 [2024-07-12 07:29:49.077222] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:15.264 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:15.264 "name": "Existed_Raid", 00:19:15.264 "aliases": [ 00:19:15.264 "d76b5c0a-4149-43c2-b53b-713055be591a" 00:19:15.264 ], 00:19:15.264 "product_name": "Raid Volume", 00:19:15.264 "block_size": 512, 00:19:15.264 "num_blocks": 190464, 00:19:15.264 "uuid": "d76b5c0a-4149-43c2-b53b-713055be591a", 00:19:15.264 "assigned_rate_limits": { 00:19:15.264 "rw_ios_per_sec": 0, 00:19:15.264 "rw_mbytes_per_sec": 0, 00:19:15.264 "r_mbytes_per_sec": 0, 00:19:15.264 "w_mbytes_per_sec": 0 00:19:15.264 }, 00:19:15.264 "claimed": false, 00:19:15.264 "zoned": false, 00:19:15.264 "supported_io_types": { 00:19:15.264 "read": true, 00:19:15.264 "write": true, 00:19:15.264 "unmap": true, 00:19:15.264 "write_zeroes": true, 00:19:15.264 "flush": true, 00:19:15.264 "reset": true, 00:19:15.264 "compare": false, 00:19:15.264 "compare_and_write": false, 00:19:15.264 "abort": false, 00:19:15.264 
"nvme_admin": false, 00:19:15.264 "nvme_io": false 00:19:15.264 }, 00:19:15.264 "memory_domains": [ 00:19:15.264 { 00:19:15.264 "dma_device_id": "system", 00:19:15.264 "dma_device_type": 1 00:19:15.264 }, 00:19:15.264 { 00:19:15.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.264 "dma_device_type": 2 00:19:15.264 }, 00:19:15.264 { 00:19:15.264 "dma_device_id": "system", 00:19:15.264 "dma_device_type": 1 00:19:15.264 }, 00:19:15.264 { 00:19:15.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.264 "dma_device_type": 2 00:19:15.264 }, 00:19:15.264 { 00:19:15.264 "dma_device_id": "system", 00:19:15.264 "dma_device_type": 1 00:19:15.264 }, 00:19:15.264 { 00:19:15.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.264 "dma_device_type": 2 00:19:15.264 } 00:19:15.264 ], 00:19:15.264 "driver_specific": { 00:19:15.264 "raid": { 00:19:15.264 "uuid": "d76b5c0a-4149-43c2-b53b-713055be591a", 00:19:15.264 "strip_size_kb": 64, 00:19:15.264 "state": "online", 00:19:15.264 "raid_level": "concat", 00:19:15.264 "superblock": true, 00:19:15.264 "num_base_bdevs": 3, 00:19:15.264 "num_base_bdevs_discovered": 3, 00:19:15.264 "num_base_bdevs_operational": 3, 00:19:15.264 "base_bdevs_list": [ 00:19:15.264 { 00:19:15.264 "name": "BaseBdev1", 00:19:15.264 "uuid": "8701103e-1265-4b97-bc66-56861ef6db86", 00:19:15.264 "is_configured": true, 00:19:15.264 "data_offset": 2048, 00:19:15.264 "data_size": 63488 00:19:15.264 }, 00:19:15.264 { 00:19:15.264 "name": "BaseBdev2", 00:19:15.264 "uuid": "9977f684-8c75-4250-9d28-22035a211d4b", 00:19:15.264 "is_configured": true, 00:19:15.264 "data_offset": 2048, 00:19:15.264 "data_size": 63488 00:19:15.264 }, 00:19:15.264 { 00:19:15.264 "name": "BaseBdev3", 00:19:15.264 "uuid": "1ab3df69-932d-4533-9173-2bc075beb9b4", 00:19:15.264 "is_configured": true, 00:19:15.264 "data_offset": 2048, 00:19:15.264 "data_size": 63488 00:19:15.264 } 00:19:15.264 ] 00:19:15.264 } 00:19:15.264 } 00:19:15.264 }' 00:19:15.264 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:15.524 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:15.524 BaseBdev2 00:19:15.524 BaseBdev3' 00:19:15.524 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:15.524 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:15.524 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:15.783 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:15.783 "name": "BaseBdev1", 00:19:15.783 "aliases": [ 00:19:15.783 "8701103e-1265-4b97-bc66-56861ef6db86" 00:19:15.783 ], 00:19:15.783 "product_name": "Malloc disk", 00:19:15.783 "block_size": 512, 00:19:15.783 "num_blocks": 65536, 00:19:15.783 "uuid": "8701103e-1265-4b97-bc66-56861ef6db86", 00:19:15.783 "assigned_rate_limits": { 00:19:15.783 "rw_ios_per_sec": 0, 00:19:15.783 "rw_mbytes_per_sec": 0, 00:19:15.783 "r_mbytes_per_sec": 0, 00:19:15.783 "w_mbytes_per_sec": 0 00:19:15.783 }, 00:19:15.783 "claimed": true, 00:19:15.783 "claim_type": "exclusive_write", 00:19:15.783 "zoned": false, 00:19:15.783 "supported_io_types": { 00:19:15.783 "read": true, 00:19:15.783 "write": true, 00:19:15.783 "unmap": true, 00:19:15.783 "write_zeroes": 
true, 00:19:15.783 "flush": true, 00:19:15.783 "reset": true, 00:19:15.783 "compare": false, 00:19:15.783 "compare_and_write": false, 00:19:15.783 "abort": true, 00:19:15.783 "nvme_admin": false, 00:19:15.783 "nvme_io": false 00:19:15.783 }, 00:19:15.783 "memory_domains": [ 00:19:15.783 { 00:19:15.783 "dma_device_id": "system", 00:19:15.783 "dma_device_type": 1 00:19:15.783 }, 00:19:15.783 { 00:19:15.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.783 "dma_device_type": 2 00:19:15.783 } 00:19:15.783 ], 00:19:15.783 "driver_specific": {} 00:19:15.783 }' 00:19:15.783 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:15.783 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:15.783 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:15.783 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:15.783 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:15.783 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:15.783 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:15.783 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:15.783 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:16.043 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:16.043 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:16.043 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:16.043 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:16.043 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:16.043 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:16.302 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:16.302 "name": "BaseBdev2", 00:19:16.302 "aliases": [ 00:19:16.302 "9977f684-8c75-4250-9d28-22035a211d4b" 00:19:16.302 ], 00:19:16.302 "product_name": "Malloc disk", 00:19:16.302 "block_size": 512, 00:19:16.302 "num_blocks": 65536, 00:19:16.302 "uuid": "9977f684-8c75-4250-9d28-22035a211d4b", 00:19:16.302 "assigned_rate_limits": { 00:19:16.302 "rw_ios_per_sec": 0, 00:19:16.302 "rw_mbytes_per_sec": 0, 00:19:16.302 "r_mbytes_per_sec": 0, 00:19:16.302 "w_mbytes_per_sec": 0 00:19:16.302 }, 00:19:16.302 "claimed": true, 00:19:16.302 "claim_type": "exclusive_write", 00:19:16.302 "zoned": false, 00:19:16.302 "supported_io_types": { 00:19:16.302 "read": true, 00:19:16.302 "write": true, 00:19:16.302 "unmap": true, 00:19:16.302 "write_zeroes": true, 00:19:16.302 "flush": true, 00:19:16.302 "reset": true, 00:19:16.302 "compare": false, 00:19:16.302 "compare_and_write": false, 00:19:16.302 "abort": true, 00:19:16.302 "nvme_admin": false, 00:19:16.302 "nvme_io": false 00:19:16.302 }, 00:19:16.302 "memory_domains": [ 00:19:16.302 { 00:19:16.302 "dma_device_id": "system", 00:19:16.302 "dma_device_type": 1 00:19:16.302 }, 00:19:16.302 { 00:19:16.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:19:16.302 "dma_device_type": 2 00:19:16.302 } 00:19:16.302 ], 00:19:16.302 "driver_specific": {} 00:19:16.302 }' 00:19:16.302 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:16.302 07:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:16.302 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:16.302 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:16.302 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:16.302 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:16.302 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:16.561 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:16.561 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:16.561 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:16.561 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:16.561 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:16.561 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:16.561 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:16.561 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:16.820 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:16.820 "name": "BaseBdev3", 00:19:16.820 "aliases": [ 00:19:16.820 "1ab3df69-932d-4533-9173-2bc075beb9b4" 00:19:16.820 ], 00:19:16.820 "product_name": "Malloc disk", 00:19:16.820 "block_size": 512, 00:19:16.820 "num_blocks": 65536, 00:19:16.820 "uuid": "1ab3df69-932d-4533-9173-2bc075beb9b4", 00:19:16.820 "assigned_rate_limits": { 00:19:16.820 "rw_ios_per_sec": 0, 00:19:16.820 "rw_mbytes_per_sec": 0, 00:19:16.820 "r_mbytes_per_sec": 0, 00:19:16.820 "w_mbytes_per_sec": 0 00:19:16.820 }, 00:19:16.820 "claimed": true, 00:19:16.820 "claim_type": "exclusive_write", 00:19:16.820 "zoned": false, 00:19:16.820 "supported_io_types": { 00:19:16.820 "read": true, 00:19:16.820 "write": true, 00:19:16.820 "unmap": true, 00:19:16.820 "write_zeroes": true, 00:19:16.820 "flush": true, 00:19:16.820 "reset": true, 00:19:16.820 "compare": false, 00:19:16.821 "compare_and_write": false, 00:19:16.821 "abort": true, 00:19:16.821 "nvme_admin": false, 00:19:16.821 "nvme_io": false 00:19:16.821 }, 00:19:16.821 "memory_domains": [ 00:19:16.821 { 00:19:16.821 "dma_device_id": "system", 00:19:16.821 "dma_device_type": 1 00:19:16.821 }, 00:19:16.821 { 00:19:16.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.821 "dma_device_type": 2 00:19:16.821 } 00:19:16.821 ], 00:19:16.821 "driver_specific": {} 00:19:16.821 }' 00:19:16.821 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:16.821 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:16.821 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:16.821 
07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:17.080 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:17.080 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:17.080 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:17.080 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:17.080 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:17.080 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:17.080 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:17.339 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:17.339 07:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:17.339 [2024-07-12 07:29:51.205494] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:17.339 [2024-07-12 07:29:51.205755] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:17.339 [2024-07-12 07:29:51.206026] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:17.598 07:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:17.598 07:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:19:17.598 07:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:17.598 07:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:19:17.598 07:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:17.598 07:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:19:17.598 07:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:17.598 07:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:17.598 07:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:17.598 07:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:17.598 07:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:17.598 07:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:17.598 07:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:17.598 07:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:17.598 07:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:17.598 07:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.598 07:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.856 07:29:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:17.856 "name": "Existed_Raid", 00:19:17.856 "uuid": "d76b5c0a-4149-43c2-b53b-713055be591a", 00:19:17.856 "strip_size_kb": 64, 00:19:17.856 "state": "offline", 00:19:17.856 "raid_level": "concat", 00:19:17.856 "superblock": true, 00:19:17.856 "num_base_bdevs": 3, 00:19:17.856 "num_base_bdevs_discovered": 2, 00:19:17.856 "num_base_bdevs_operational": 2, 00:19:17.856 "base_bdevs_list": [ 00:19:17.856 { 00:19:17.856 "name": null, 00:19:17.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.856 "is_configured": false, 00:19:17.856 "data_offset": 2048, 00:19:17.856 "data_size": 63488 00:19:17.856 }, 00:19:17.856 { 00:19:17.856 "name": "BaseBdev2", 00:19:17.856 "uuid": "9977f684-8c75-4250-9d28-22035a211d4b", 00:19:17.856 "is_configured": true, 00:19:17.856 "data_offset": 2048, 00:19:17.856 "data_size": 63488 00:19:17.856 }, 00:19:17.856 { 00:19:17.856 "name": "BaseBdev3", 00:19:17.856 "uuid": "1ab3df69-932d-4533-9173-2bc075beb9b4", 00:19:17.856 "is_configured": true, 00:19:17.856 "data_offset": 2048, 00:19:17.856 "data_size": 63488 00:19:17.856 } 00:19:17.856 ] 00:19:17.856 }' 00:19:17.856 07:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:17.856 07:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.424 07:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:18.424 07:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:18.424 07:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:18.424 07:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.683 07:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:18.683 07:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:18.683 07:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:18.941 [2024-07-12 07:29:52.619297] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:18.941 07:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:18.941 07:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:18.941 07:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.941 07:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:19.200 07:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:19.200 07:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:19.200 07:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:19.458 [2024-07-12 07:29:53.148832] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:19.458 [2024-07-12 07:29:53.149126] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:19:19.458 07:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:19.458 07:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:19.458 07:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:19.458 07:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.716 07:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:19.716 07:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:19.716 07:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:19:19.716 07:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:19.716 07:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:19.716 07:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:19.974 BaseBdev2 00:19:19.974 07:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:19.974 07:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:19:19.974 07:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:19.974 07:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:19.974 07:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:19.974 07:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:19.974 07:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:20.232 07:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:20.490 [ 00:19:20.490 { 00:19:20.490 "name": "BaseBdev2", 00:19:20.490 "aliases": [ 00:19:20.490 "71c2016e-f5de-4aa2-8472-3bc41b4f4a2d" 00:19:20.490 ], 00:19:20.490 "product_name": "Malloc disk", 00:19:20.490 "block_size": 512, 00:19:20.490 "num_blocks": 65536, 00:19:20.490 "uuid": "71c2016e-f5de-4aa2-8472-3bc41b4f4a2d", 00:19:20.490 "assigned_rate_limits": { 00:19:20.490 "rw_ios_per_sec": 0, 00:19:20.490 "rw_mbytes_per_sec": 0, 00:19:20.490 "r_mbytes_per_sec": 0, 00:19:20.490 "w_mbytes_per_sec": 0 00:19:20.490 }, 00:19:20.490 "claimed": false, 00:19:20.490 "zoned": false, 00:19:20.490 "supported_io_types": { 00:19:20.490 "read": true, 00:19:20.490 "write": true, 00:19:20.490 "unmap": true, 00:19:20.490 "write_zeroes": true, 00:19:20.490 "flush": true, 00:19:20.490 "reset": true, 00:19:20.490 "compare": false, 00:19:20.490 "compare_and_write": false, 00:19:20.490 "abort": true, 00:19:20.490 "nvme_admin": false, 00:19:20.490 "nvme_io": false 00:19:20.490 }, 00:19:20.490 "memory_domains": [ 00:19:20.490 { 00:19:20.490 "dma_device_id": "system", 00:19:20.490 "dma_device_type": 1 
00:19:20.490 }, 00:19:20.490 { 00:19:20.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.490 "dma_device_type": 2 00:19:20.490 } 00:19:20.490 ], 00:19:20.490 "driver_specific": {} 00:19:20.490 } 00:19:20.490 ] 00:19:20.490 07:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:20.490 07:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:20.490 07:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:20.490 07:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:20.490 BaseBdev3 00:19:20.490 07:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:20.490 07:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:19:20.490 07:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:20.490 07:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:20.490 07:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:20.490 07:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:20.490 07:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:20.748 07:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:21.017 [ 00:19:21.017 { 00:19:21.017 "name": "BaseBdev3", 00:19:21.017 "aliases": [ 00:19:21.017 "0a47fc0d-2dc5-431c-98a5-06583ba9699a" 00:19:21.017 ], 00:19:21.017 "product_name": "Malloc disk", 00:19:21.017 "block_size": 512, 00:19:21.017 "num_blocks": 65536, 00:19:21.017 "uuid": "0a47fc0d-2dc5-431c-98a5-06583ba9699a", 00:19:21.017 "assigned_rate_limits": { 00:19:21.017 "rw_ios_per_sec": 0, 00:19:21.017 "rw_mbytes_per_sec": 0, 00:19:21.017 "r_mbytes_per_sec": 0, 00:19:21.017 "w_mbytes_per_sec": 0 00:19:21.017 }, 00:19:21.017 "claimed": false, 00:19:21.017 "zoned": false, 00:19:21.017 "supported_io_types": { 00:19:21.017 "read": true, 00:19:21.017 "write": true, 00:19:21.017 "unmap": true, 00:19:21.017 "write_zeroes": true, 00:19:21.017 "flush": true, 00:19:21.017 "reset": true, 00:19:21.017 "compare": false, 00:19:21.017 "compare_and_write": false, 00:19:21.017 "abort": true, 00:19:21.017 "nvme_admin": false, 00:19:21.017 "nvme_io": false 00:19:21.017 }, 00:19:21.017 "memory_domains": [ 00:19:21.017 { 00:19:21.017 "dma_device_id": "system", 00:19:21.017 "dma_device_type": 1 00:19:21.017 }, 00:19:21.017 { 00:19:21.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.017 "dma_device_type": 2 00:19:21.017 } 00:19:21.017 ], 00:19:21.017 "driver_specific": {} 00:19:21.017 } 00:19:21.017 ] 00:19:21.017 07:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:21.017 07:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:21.017 07:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:21.017 07:29:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:21.313 [2024-07-12 07:29:54.925916] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:21.313 [2024-07-12 07:29:54.926258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:21.313 [2024-07-12 07:29:54.926402] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:21.313 [2024-07-12 07:29:54.928811] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:21.314 07:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:21.314 07:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:21.314 07:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:21.314 07:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:21.314 07:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:21.314 07:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:21.314 07:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:21.314 07:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:21.314 07:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:21.314 07:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:21.314 07:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.314 07:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.314 07:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:21.314 "name": "Existed_Raid", 00:19:21.314 "uuid": "adf12cb1-657a-4700-9676-b26e6387e9de", 00:19:21.314 "strip_size_kb": 64, 00:19:21.314 "state": "configuring", 00:19:21.314 "raid_level": "concat", 00:19:21.314 "superblock": true, 00:19:21.314 "num_base_bdevs": 3, 00:19:21.314 "num_base_bdevs_discovered": 2, 00:19:21.314 "num_base_bdevs_operational": 3, 00:19:21.314 "base_bdevs_list": [ 00:19:21.314 { 00:19:21.314 "name": "BaseBdev1", 00:19:21.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.314 "is_configured": false, 00:19:21.314 "data_offset": 0, 00:19:21.314 "data_size": 0 00:19:21.314 }, 00:19:21.314 { 00:19:21.314 "name": "BaseBdev2", 00:19:21.314 "uuid": "71c2016e-f5de-4aa2-8472-3bc41b4f4a2d", 00:19:21.314 "is_configured": true, 00:19:21.314 "data_offset": 2048, 00:19:21.314 "data_size": 63488 00:19:21.314 }, 00:19:21.314 { 00:19:21.314 "name": "BaseBdev3", 00:19:21.314 "uuid": "0a47fc0d-2dc5-431c-98a5-06583ba9699a", 00:19:21.314 "is_configured": true, 00:19:21.314 "data_offset": 2048, 00:19:21.314 "data_size": 63488 00:19:21.314 } 00:19:21.314 ] 00:19:21.314 }' 00:19:21.314 07:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:21.314 07:29:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.902 07:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:22.159 [2024-07-12 07:29:56.018141] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:22.159 07:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:22.159 07:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:22.159 07:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:22.159 07:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:22.159 07:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:22.159 07:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:22.416 07:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:22.416 07:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:22.416 07:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:22.416 07:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:22.416 07:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.416 07:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.675 07:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:22.675 "name": "Existed_Raid", 00:19:22.675 "uuid": "adf12cb1-657a-4700-9676-b26e6387e9de", 00:19:22.675 "strip_size_kb": 64, 00:19:22.675 "state": "configuring", 00:19:22.675 "raid_level": "concat", 00:19:22.675 "superblock": true, 00:19:22.675 "num_base_bdevs": 3, 00:19:22.675 "num_base_bdevs_discovered": 1, 00:19:22.675 "num_base_bdevs_operational": 3, 00:19:22.675 "base_bdevs_list": [ 00:19:22.675 { 00:19:22.675 "name": "BaseBdev1", 00:19:22.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.675 "is_configured": false, 00:19:22.675 "data_offset": 0, 00:19:22.675 "data_size": 0 00:19:22.675 }, 00:19:22.675 { 00:19:22.675 "name": null, 00:19:22.675 "uuid": "71c2016e-f5de-4aa2-8472-3bc41b4f4a2d", 00:19:22.675 "is_configured": false, 00:19:22.675 "data_offset": 2048, 00:19:22.675 "data_size": 63488 00:19:22.675 }, 00:19:22.675 { 00:19:22.675 "name": "BaseBdev3", 00:19:22.675 "uuid": "0a47fc0d-2dc5-431c-98a5-06583ba9699a", 00:19:22.675 "is_configured": true, 00:19:22.675 "data_offset": 2048, 00:19:22.675 "data_size": 63488 00:19:22.675 } 00:19:22.675 ] 00:19:22.675 }' 00:19:22.675 07:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:22.675 07:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.241 07:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.241 07:29:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:23.499 07:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:23.499 07:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:23.756 [2024-07-12 07:29:57.431916] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:23.756 BaseBdev1 00:19:23.756 07:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:23.756 07:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:19:23.756 07:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:23.756 07:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:23.756 07:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:23.756 07:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:23.756 07:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:24.013 07:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:24.272 [ 00:19:24.272 { 00:19:24.272 "name": "BaseBdev1", 00:19:24.272 "aliases": [ 00:19:24.272 "388a9643-d8d4-436c-b4d6-2aa92b115dc9" 00:19:24.272 ], 00:19:24.272 "product_name": "Malloc disk", 00:19:24.272 "block_size": 512, 00:19:24.272 "num_blocks": 65536, 00:19:24.272 "uuid": "388a9643-d8d4-436c-b4d6-2aa92b115dc9", 00:19:24.272 "assigned_rate_limits": { 00:19:24.272 "rw_ios_per_sec": 0, 00:19:24.272 "rw_mbytes_per_sec": 0, 00:19:24.272 "r_mbytes_per_sec": 0, 00:19:24.272 "w_mbytes_per_sec": 0 00:19:24.272 }, 00:19:24.272 "claimed": true, 00:19:24.272 "claim_type": "exclusive_write", 00:19:24.272 "zoned": false, 00:19:24.272 "supported_io_types": { 00:19:24.272 "read": true, 00:19:24.272 "write": true, 00:19:24.272 "unmap": true, 00:19:24.272 "write_zeroes": true, 00:19:24.272 "flush": true, 00:19:24.272 "reset": true, 00:19:24.272 "compare": false, 00:19:24.272 "compare_and_write": false, 00:19:24.272 "abort": true, 00:19:24.272 "nvme_admin": false, 00:19:24.272 "nvme_io": false 00:19:24.272 }, 00:19:24.272 "memory_domains": [ 00:19:24.272 { 00:19:24.272 "dma_device_id": "system", 00:19:24.272 "dma_device_type": 1 00:19:24.272 }, 00:19:24.272 { 00:19:24.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.272 "dma_device_type": 2 00:19:24.272 } 00:19:24.272 ], 00:19:24.272 "driver_specific": {} 00:19:24.272 } 00:19:24.272 ] 00:19:24.272 07:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:24.272 07:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:24.272 07:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:24.272 07:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:24.272 07:29:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:24.272 07:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:24.272 07:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:24.272 07:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:24.272 07:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:24.272 07:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:24.272 07:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:24.272 07:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.272 07:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.530 07:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:24.530 "name": "Existed_Raid", 00:19:24.530 "uuid": "adf12cb1-657a-4700-9676-b26e6387e9de", 00:19:24.530 "strip_size_kb": 64, 00:19:24.530 "state": "configuring", 00:19:24.530 "raid_level": "concat", 00:19:24.530 "superblock": true, 00:19:24.530 "num_base_bdevs": 3, 00:19:24.530 "num_base_bdevs_discovered": 2, 00:19:24.530 "num_base_bdevs_operational": 3, 00:19:24.530 "base_bdevs_list": [ 00:19:24.530 { 00:19:24.530 "name": "BaseBdev1", 00:19:24.530 "uuid": "388a9643-d8d4-436c-b4d6-2aa92b115dc9", 00:19:24.530 "is_configured": true, 00:19:24.530 "data_offset": 2048, 00:19:24.530 "data_size": 63488 00:19:24.530 }, 00:19:24.530 { 00:19:24.530 "name": null, 00:19:24.530 "uuid": "71c2016e-f5de-4aa2-8472-3bc41b4f4a2d", 00:19:24.530 "is_configured": false, 00:19:24.530 "data_offset": 2048, 00:19:24.530 "data_size": 63488 00:19:24.530 }, 00:19:24.530 { 00:19:24.530 "name": "BaseBdev3", 00:19:24.530 "uuid": "0a47fc0d-2dc5-431c-98a5-06583ba9699a", 00:19:24.530 "is_configured": true, 00:19:24.530 "data_offset": 2048, 00:19:24.530 "data_size": 63488 00:19:24.530 } 00:19:24.530 ] 00:19:24.530 }' 00:19:24.530 07:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:24.530 07:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.095 07:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.095 07:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:25.353 07:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:25.353 07:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:25.611 [2024-07-12 07:29:59.372457] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:25.611 07:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:25.611 07:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:25.611 07:29:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:25.611 07:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:25.611 07:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:25.611 07:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:25.611 07:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:25.611 07:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:25.611 07:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:25.611 07:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:25.611 07:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.611 07:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.868 07:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:25.868 "name": "Existed_Raid", 00:19:25.868 "uuid": "adf12cb1-657a-4700-9676-b26e6387e9de", 00:19:25.869 "strip_size_kb": 64, 00:19:25.869 "state": "configuring", 00:19:25.869 "raid_level": "concat", 00:19:25.869 "superblock": true, 00:19:25.869 "num_base_bdevs": 3, 00:19:25.869 "num_base_bdevs_discovered": 1, 00:19:25.869 "num_base_bdevs_operational": 3, 00:19:25.869 "base_bdevs_list": [ 00:19:25.869 { 00:19:25.869 "name": "BaseBdev1", 00:19:25.869 "uuid": "388a9643-d8d4-436c-b4d6-2aa92b115dc9", 00:19:25.869 "is_configured": true, 00:19:25.869 "data_offset": 2048, 00:19:25.869 "data_size": 63488 00:19:25.869 }, 00:19:25.869 { 00:19:25.869 "name": null, 00:19:25.869 "uuid": "71c2016e-f5de-4aa2-8472-3bc41b4f4a2d", 00:19:25.869 "is_configured": false, 00:19:25.869 "data_offset": 2048, 00:19:25.869 "data_size": 63488 00:19:25.869 }, 00:19:25.869 { 00:19:25.869 "name": null, 00:19:25.869 "uuid": "0a47fc0d-2dc5-431c-98a5-06583ba9699a", 00:19:25.869 "is_configured": false, 00:19:25.869 "data_offset": 2048, 00:19:25.869 "data_size": 63488 00:19:25.869 } 00:19:25.869 ] 00:19:25.869 }' 00:19:25.869 07:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:25.869 07:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.431 07:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.431 07:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:26.688 07:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:26.688 07:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:26.946 [2024-07-12 07:30:00.685783] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:26.946 07:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
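The sequence above is the test's degraded-path check: bdev_raid_remove_base_bdev detaches BaseBdev3 from the still-assembling Existed_Raid volume, and verify_raid_bdev_state then re-reads the raid over RPC to confirm that num_base_bdevs_discovered dropped while the state stayed "configuring". A minimal standalone sketch of that remove-and-verify round trip, assuming an SPDK app is already listening on /var/tmp/spdk-raid.sock (the rpc.py subcommands and jq filter come from the trace; the shell variables are illustrative):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Detach one base bdev from the raid that is still being assembled
    $rpc bdev_raid_remove_base_bdev BaseBdev3
    # Re-read the raid and check its state and how many base bdevs remain discovered
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    state=$(jq -r '.state' <<< "$info")
    discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
    [ "$state" = configuring ] && [ "$discovered" -eq 1 ] || echo "unexpected raid state: $state/$discovered"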
00:19:26.946 07:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:26.946 07:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:26.946 07:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:26.946 07:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:26.946 07:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:26.946 07:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:26.946 07:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:26.946 07:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:26.946 07:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:26.946 07:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.946 07:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.203 07:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:27.203 "name": "Existed_Raid", 00:19:27.203 "uuid": "adf12cb1-657a-4700-9676-b26e6387e9de", 00:19:27.203 "strip_size_kb": 64, 00:19:27.203 "state": "configuring", 00:19:27.203 "raid_level": "concat", 00:19:27.203 "superblock": true, 00:19:27.203 "num_base_bdevs": 3, 00:19:27.203 "num_base_bdevs_discovered": 2, 00:19:27.203 "num_base_bdevs_operational": 3, 00:19:27.203 "base_bdevs_list": [ 00:19:27.203 { 00:19:27.203 "name": "BaseBdev1", 00:19:27.203 "uuid": "388a9643-d8d4-436c-b4d6-2aa92b115dc9", 00:19:27.203 "is_configured": true, 00:19:27.203 "data_offset": 2048, 00:19:27.203 "data_size": 63488 00:19:27.203 }, 00:19:27.203 { 00:19:27.203 "name": null, 00:19:27.203 "uuid": "71c2016e-f5de-4aa2-8472-3bc41b4f4a2d", 00:19:27.203 "is_configured": false, 00:19:27.204 "data_offset": 2048, 00:19:27.204 "data_size": 63488 00:19:27.204 }, 00:19:27.204 { 00:19:27.204 "name": "BaseBdev3", 00:19:27.204 "uuid": "0a47fc0d-2dc5-431c-98a5-06583ba9699a", 00:19:27.204 "is_configured": true, 00:19:27.204 "data_offset": 2048, 00:19:27.204 "data_size": 63488 00:19:27.204 } 00:19:27.204 ] 00:19:27.204 }' 00:19:27.204 07:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:27.204 07:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.769 07:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:27.769 07:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.027 07:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:28.027 07:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:28.284 [2024-07-12 07:30:02.030048] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:28.284 07:30:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:28.284 07:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:28.284 07:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:28.284 07:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:28.284 07:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:28.284 07:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:28.284 07:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:28.284 07:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:28.284 07:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:28.284 07:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:28.284 07:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.284 07:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.543 07:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:28.543 "name": "Existed_Raid", 00:19:28.543 "uuid": "adf12cb1-657a-4700-9676-b26e6387e9de", 00:19:28.543 "strip_size_kb": 64, 00:19:28.543 "state": "configuring", 00:19:28.543 "raid_level": "concat", 00:19:28.543 "superblock": true, 00:19:28.543 "num_base_bdevs": 3, 00:19:28.543 "num_base_bdevs_discovered": 1, 00:19:28.543 "num_base_bdevs_operational": 3, 00:19:28.543 "base_bdevs_list": [ 00:19:28.543 { 00:19:28.543 "name": null, 00:19:28.543 "uuid": "388a9643-d8d4-436c-b4d6-2aa92b115dc9", 00:19:28.543 "is_configured": false, 00:19:28.543 "data_offset": 2048, 00:19:28.543 "data_size": 63488 00:19:28.543 }, 00:19:28.543 { 00:19:28.543 "name": null, 00:19:28.543 "uuid": "71c2016e-f5de-4aa2-8472-3bc41b4f4a2d", 00:19:28.543 "is_configured": false, 00:19:28.543 "data_offset": 2048, 00:19:28.543 "data_size": 63488 00:19:28.543 }, 00:19:28.543 { 00:19:28.543 "name": "BaseBdev3", 00:19:28.543 "uuid": "0a47fc0d-2dc5-431c-98a5-06583ba9699a", 00:19:28.543 "is_configured": true, 00:19:28.543 "data_offset": 2048, 00:19:28.543 "data_size": 63488 00:19:28.543 } 00:19:28.543 ] 00:19:28.543 }' 00:19:28.543 07:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:28.543 07:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.142 07:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.142 07:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:29.400 07:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:29.400 07:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 
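Re-adding works the same way in reverse: once the underlying bdev exists again, bdev_raid_add_base_bdev hands it back to the raid, which claims it and flips the matching base_bdevs_list slot to is_configured == true. A short sketch under the same assumption of a live RPC socket (both commands and the jq path appear verbatim in the trace):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Reattach an existing bdev to the raid; SPDK claims it on success
    $rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev2
    # Slot 1 of base_bdevs_list should now report is_configured == true
    $rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured'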
00:19:29.659 [2024-07-12 07:30:03.354683] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:29.659 07:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:29.659 07:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:29.659 07:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:29.659 07:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:29.659 07:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:29.659 07:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:29.659 07:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:29.659 07:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:29.659 07:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:29.659 07:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:29.659 07:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.659 07:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.917 07:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:29.917 "name": "Existed_Raid", 00:19:29.917 "uuid": "adf12cb1-657a-4700-9676-b26e6387e9de", 00:19:29.917 "strip_size_kb": 64, 00:19:29.917 "state": "configuring", 00:19:29.917 "raid_level": "concat", 00:19:29.917 "superblock": true, 00:19:29.917 "num_base_bdevs": 3, 00:19:29.917 "num_base_bdevs_discovered": 2, 00:19:29.917 "num_base_bdevs_operational": 3, 00:19:29.917 "base_bdevs_list": [ 00:19:29.917 { 00:19:29.917 "name": null, 00:19:29.917 "uuid": "388a9643-d8d4-436c-b4d6-2aa92b115dc9", 00:19:29.917 "is_configured": false, 00:19:29.917 "data_offset": 2048, 00:19:29.917 "data_size": 63488 00:19:29.917 }, 00:19:29.917 { 00:19:29.917 "name": "BaseBdev2", 00:19:29.917 "uuid": "71c2016e-f5de-4aa2-8472-3bc41b4f4a2d", 00:19:29.917 "is_configured": true, 00:19:29.917 "data_offset": 2048, 00:19:29.917 "data_size": 63488 00:19:29.917 }, 00:19:29.917 { 00:19:29.917 "name": "BaseBdev3", 00:19:29.917 "uuid": "0a47fc0d-2dc5-431c-98a5-06583ba9699a", 00:19:29.917 "is_configured": true, 00:19:29.917 "data_offset": 2048, 00:19:29.917 "data_size": 63488 00:19:29.917 } 00:19:29.917 ] 00:19:29.917 }' 00:19:29.917 07:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:29.917 07:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.482 07:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.482 07:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:30.740 07:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:30.740 07:30:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.740 07:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:30.998 07:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 388a9643-d8d4-436c-b4d6-2aa92b115dc9 00:19:31.257 [2024-07-12 07:30:04.944383] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:31.257 [2024-07-12 07:30:04.944893] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:19:31.257 [2024-07-12 07:30:04.945003] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:31.257 [2024-07-12 07:30:04.945127] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:19:31.257 [2024-07-12 07:30:04.945558] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:19:31.257 [2024-07-12 07:30:04.945671] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:19:31.257 [2024-07-12 07:30:04.945857] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.257 NewBaseBdev 00:19:31.257 07:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:31.257 07:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:19:31.257 07:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:31.257 07:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:31.257 07:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:31.257 07:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:31.257 07:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:31.516 07:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:31.516 [ 00:19:31.516 { 00:19:31.516 "name": "NewBaseBdev", 00:19:31.516 "aliases": [ 00:19:31.516 "388a9643-d8d4-436c-b4d6-2aa92b115dc9" 00:19:31.516 ], 00:19:31.516 "product_name": "Malloc disk", 00:19:31.516 "block_size": 512, 00:19:31.516 "num_blocks": 65536, 00:19:31.516 "uuid": "388a9643-d8d4-436c-b4d6-2aa92b115dc9", 00:19:31.516 "assigned_rate_limits": { 00:19:31.516 "rw_ios_per_sec": 0, 00:19:31.516 "rw_mbytes_per_sec": 0, 00:19:31.516 "r_mbytes_per_sec": 0, 00:19:31.516 "w_mbytes_per_sec": 0 00:19:31.516 }, 00:19:31.516 "claimed": true, 00:19:31.516 "claim_type": "exclusive_write", 00:19:31.516 "zoned": false, 00:19:31.516 "supported_io_types": { 00:19:31.516 "read": true, 00:19:31.516 "write": true, 00:19:31.516 "unmap": true, 00:19:31.516 "write_zeroes": true, 00:19:31.516 "flush": true, 00:19:31.516 "reset": true, 00:19:31.516 "compare": false, 00:19:31.516 "compare_and_write": false, 00:19:31.516 "abort": true, 00:19:31.516 "nvme_admin": false, 00:19:31.516 "nvme_io": false 00:19:31.516 }, 00:19:31.516 
"memory_domains": [ 00:19:31.516 { 00:19:31.516 "dma_device_id": "system", 00:19:31.516 "dma_device_type": 1 00:19:31.516 }, 00:19:31.516 { 00:19:31.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.516 "dma_device_type": 2 00:19:31.516 } 00:19:31.516 ], 00:19:31.516 "driver_specific": {} 00:19:31.516 } 00:19:31.516 ] 00:19:31.516 07:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:31.516 07:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:31.517 07:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:31.517 07:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:31.517 07:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:31.517 07:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:31.517 07:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:31.517 07:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:31.517 07:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:31.517 07:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:31.517 07:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:31.517 07:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.517 07:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.776 07:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:31.776 "name": "Existed_Raid", 00:19:31.776 "uuid": "adf12cb1-657a-4700-9676-b26e6387e9de", 00:19:31.776 "strip_size_kb": 64, 00:19:31.776 "state": "online", 00:19:31.776 "raid_level": "concat", 00:19:31.776 "superblock": true, 00:19:31.776 "num_base_bdevs": 3, 00:19:31.776 "num_base_bdevs_discovered": 3, 00:19:31.776 "num_base_bdevs_operational": 3, 00:19:31.776 "base_bdevs_list": [ 00:19:31.776 { 00:19:31.776 "name": "NewBaseBdev", 00:19:31.776 "uuid": "388a9643-d8d4-436c-b4d6-2aa92b115dc9", 00:19:31.776 "is_configured": true, 00:19:31.776 "data_offset": 2048, 00:19:31.776 "data_size": 63488 00:19:31.776 }, 00:19:31.776 { 00:19:31.776 "name": "BaseBdev2", 00:19:31.776 "uuid": "71c2016e-f5de-4aa2-8472-3bc41b4f4a2d", 00:19:31.776 "is_configured": true, 00:19:31.776 "data_offset": 2048, 00:19:31.776 "data_size": 63488 00:19:31.776 }, 00:19:31.776 { 00:19:31.776 "name": "BaseBdev3", 00:19:31.776 "uuid": "0a47fc0d-2dc5-431c-98a5-06583ba9699a", 00:19:31.776 "is_configured": true, 00:19:31.776 "data_offset": 2048, 00:19:31.776 "data_size": 63488 00:19:31.776 } 00:19:31.776 ] 00:19:31.776 }' 00:19:31.776 07:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:31.776 07:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.715 07:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:32.715 07:30:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:32.715 07:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:32.715 07:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:32.715 07:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:32.715 07:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:32.715 07:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:32.715 07:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:32.715 [2024-07-12 07:30:06.552696] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:32.715 07:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:32.715 "name": "Existed_Raid", 00:19:32.715 "aliases": [ 00:19:32.715 "adf12cb1-657a-4700-9676-b26e6387e9de" 00:19:32.715 ], 00:19:32.715 "product_name": "Raid Volume", 00:19:32.715 "block_size": 512, 00:19:32.715 "num_blocks": 190464, 00:19:32.715 "uuid": "adf12cb1-657a-4700-9676-b26e6387e9de", 00:19:32.715 "assigned_rate_limits": { 00:19:32.716 "rw_ios_per_sec": 0, 00:19:32.716 "rw_mbytes_per_sec": 0, 00:19:32.716 "r_mbytes_per_sec": 0, 00:19:32.716 "w_mbytes_per_sec": 0 00:19:32.716 }, 00:19:32.716 "claimed": false, 00:19:32.716 "zoned": false, 00:19:32.716 "supported_io_types": { 00:19:32.716 "read": true, 00:19:32.716 "write": true, 00:19:32.716 "unmap": true, 00:19:32.716 "write_zeroes": true, 00:19:32.716 "flush": true, 00:19:32.716 "reset": true, 00:19:32.716 "compare": false, 00:19:32.716 "compare_and_write": false, 00:19:32.716 "abort": false, 00:19:32.716 "nvme_admin": false, 00:19:32.716 "nvme_io": false 00:19:32.716 }, 00:19:32.716 "memory_domains": [ 00:19:32.716 { 00:19:32.716 "dma_device_id": "system", 00:19:32.716 "dma_device_type": 1 00:19:32.716 }, 00:19:32.716 { 00:19:32.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.716 "dma_device_type": 2 00:19:32.716 }, 00:19:32.716 { 00:19:32.716 "dma_device_id": "system", 00:19:32.716 "dma_device_type": 1 00:19:32.716 }, 00:19:32.716 { 00:19:32.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.716 "dma_device_type": 2 00:19:32.716 }, 00:19:32.716 { 00:19:32.716 "dma_device_id": "system", 00:19:32.716 "dma_device_type": 1 00:19:32.716 }, 00:19:32.716 { 00:19:32.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.716 "dma_device_type": 2 00:19:32.716 } 00:19:32.716 ], 00:19:32.716 "driver_specific": { 00:19:32.716 "raid": { 00:19:32.716 "uuid": "adf12cb1-657a-4700-9676-b26e6387e9de", 00:19:32.716 "strip_size_kb": 64, 00:19:32.716 "state": "online", 00:19:32.716 "raid_level": "concat", 00:19:32.716 "superblock": true, 00:19:32.716 "num_base_bdevs": 3, 00:19:32.716 "num_base_bdevs_discovered": 3, 00:19:32.716 "num_base_bdevs_operational": 3, 00:19:32.716 "base_bdevs_list": [ 00:19:32.716 { 00:19:32.716 "name": "NewBaseBdev", 00:19:32.716 "uuid": "388a9643-d8d4-436c-b4d6-2aa92b115dc9", 00:19:32.716 "is_configured": true, 00:19:32.716 "data_offset": 2048, 00:19:32.716 "data_size": 63488 00:19:32.716 }, 00:19:32.716 { 00:19:32.716 "name": "BaseBdev2", 00:19:32.716 "uuid": "71c2016e-f5de-4aa2-8472-3bc41b4f4a2d", 00:19:32.716 "is_configured": true, 00:19:32.716 "data_offset": 2048, 00:19:32.716 "data_size": 63488 
00:19:32.716 }, 00:19:32.716 { 00:19:32.716 "name": "BaseBdev3", 00:19:32.716 "uuid": "0a47fc0d-2dc5-431c-98a5-06583ba9699a", 00:19:32.716 "is_configured": true, 00:19:32.716 "data_offset": 2048, 00:19:32.716 "data_size": 63488 00:19:32.716 } 00:19:32.716 ] 00:19:32.716 } 00:19:32.716 } 00:19:32.716 }' 00:19:32.716 07:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:32.976 07:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:32.976 BaseBdev2 00:19:32.976 BaseBdev3' 00:19:32.976 07:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:32.976 07:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:32.976 07:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:32.976 07:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:32.976 "name": "NewBaseBdev", 00:19:32.976 "aliases": [ 00:19:32.976 "388a9643-d8d4-436c-b4d6-2aa92b115dc9" 00:19:32.976 ], 00:19:32.976 "product_name": "Malloc disk", 00:19:32.976 "block_size": 512, 00:19:32.976 "num_blocks": 65536, 00:19:32.976 "uuid": "388a9643-d8d4-436c-b4d6-2aa92b115dc9", 00:19:32.976 "assigned_rate_limits": { 00:19:32.976 "rw_ios_per_sec": 0, 00:19:32.976 "rw_mbytes_per_sec": 0, 00:19:32.976 "r_mbytes_per_sec": 0, 00:19:32.976 "w_mbytes_per_sec": 0 00:19:32.976 }, 00:19:32.976 "claimed": true, 00:19:32.976 "claim_type": "exclusive_write", 00:19:32.976 "zoned": false, 00:19:32.976 "supported_io_types": { 00:19:32.976 "read": true, 00:19:32.976 "write": true, 00:19:32.976 "unmap": true, 00:19:32.976 "write_zeroes": true, 00:19:32.976 "flush": true, 00:19:32.976 "reset": true, 00:19:32.976 "compare": false, 00:19:32.976 "compare_and_write": false, 00:19:32.976 "abort": true, 00:19:32.976 "nvme_admin": false, 00:19:32.976 "nvme_io": false 00:19:32.976 }, 00:19:32.976 "memory_domains": [ 00:19:32.976 { 00:19:32.976 "dma_device_id": "system", 00:19:32.976 "dma_device_type": 1 00:19:32.976 }, 00:19:32.976 { 00:19:32.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.976 "dma_device_type": 2 00:19:32.976 } 00:19:32.976 ], 00:19:32.976 "driver_specific": {} 00:19:32.976 }' 00:19:32.976 07:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.235 07:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.235 07:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:33.235 07:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:33.235 07:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:33.235 07:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:33.235 07:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:33.235 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:33.235 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:33.235 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.495 
07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.495 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:33.495 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:33.495 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:33.495 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:33.495 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:33.495 "name": "BaseBdev2", 00:19:33.495 "aliases": [ 00:19:33.495 "71c2016e-f5de-4aa2-8472-3bc41b4f4a2d" 00:19:33.495 ], 00:19:33.495 "product_name": "Malloc disk", 00:19:33.495 "block_size": 512, 00:19:33.496 "num_blocks": 65536, 00:19:33.496 "uuid": "71c2016e-f5de-4aa2-8472-3bc41b4f4a2d", 00:19:33.496 "assigned_rate_limits": { 00:19:33.496 "rw_ios_per_sec": 0, 00:19:33.496 "rw_mbytes_per_sec": 0, 00:19:33.496 "r_mbytes_per_sec": 0, 00:19:33.496 "w_mbytes_per_sec": 0 00:19:33.496 }, 00:19:33.496 "claimed": true, 00:19:33.496 "claim_type": "exclusive_write", 00:19:33.496 "zoned": false, 00:19:33.496 "supported_io_types": { 00:19:33.496 "read": true, 00:19:33.496 "write": true, 00:19:33.496 "unmap": true, 00:19:33.496 "write_zeroes": true, 00:19:33.496 "flush": true, 00:19:33.496 "reset": true, 00:19:33.496 "compare": false, 00:19:33.496 "compare_and_write": false, 00:19:33.496 "abort": true, 00:19:33.496 "nvme_admin": false, 00:19:33.496 "nvme_io": false 00:19:33.496 }, 00:19:33.496 "memory_domains": [ 00:19:33.496 { 00:19:33.496 "dma_device_id": "system", 00:19:33.496 "dma_device_type": 1 00:19:33.496 }, 00:19:33.496 { 00:19:33.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.496 "dma_device_type": 2 00:19:33.496 } 00:19:33.496 ], 00:19:33.496 "driver_specific": {} 00:19:33.496 }' 00:19:33.496 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.755 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.755 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:33.755 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:33.755 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:33.755 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:33.755 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:33.755 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:33.755 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:33.755 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:34.015 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:34.015 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:34.015 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:34.015 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:34.015 07:30:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:34.275 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:34.275 "name": "BaseBdev3", 00:19:34.275 "aliases": [ 00:19:34.275 "0a47fc0d-2dc5-431c-98a5-06583ba9699a" 00:19:34.275 ], 00:19:34.275 "product_name": "Malloc disk", 00:19:34.275 "block_size": 512, 00:19:34.275 "num_blocks": 65536, 00:19:34.275 "uuid": "0a47fc0d-2dc5-431c-98a5-06583ba9699a", 00:19:34.275 "assigned_rate_limits": { 00:19:34.275 "rw_ios_per_sec": 0, 00:19:34.275 "rw_mbytes_per_sec": 0, 00:19:34.275 "r_mbytes_per_sec": 0, 00:19:34.275 "w_mbytes_per_sec": 0 00:19:34.275 }, 00:19:34.275 "claimed": true, 00:19:34.275 "claim_type": "exclusive_write", 00:19:34.275 "zoned": false, 00:19:34.275 "supported_io_types": { 00:19:34.275 "read": true, 00:19:34.275 "write": true, 00:19:34.275 "unmap": true, 00:19:34.275 "write_zeroes": true, 00:19:34.275 "flush": true, 00:19:34.275 "reset": true, 00:19:34.275 "compare": false, 00:19:34.275 "compare_and_write": false, 00:19:34.275 "abort": true, 00:19:34.275 "nvme_admin": false, 00:19:34.275 "nvme_io": false 00:19:34.275 }, 00:19:34.275 "memory_domains": [ 00:19:34.275 { 00:19:34.275 "dma_device_id": "system", 00:19:34.275 "dma_device_type": 1 00:19:34.275 }, 00:19:34.275 { 00:19:34.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.275 "dma_device_type": 2 00:19:34.275 } 00:19:34.275 ], 00:19:34.275 "driver_specific": {} 00:19:34.275 }' 00:19:34.275 07:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:34.275 07:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:34.275 07:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:34.275 07:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:34.275 07:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:34.534 07:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:34.534 07:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:34.534 07:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:34.534 07:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:34.534 07:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:34.534 07:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:34.534 07:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:34.534 07:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:34.792 [2024-07-12 07:30:08.542171] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:34.792 [2024-07-12 07:30:08.542745] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:34.792 [2024-07-12 07:30:08.542961] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:34.792 [2024-07-12 07:30:08.543137] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:19:34.792 [2024-07-12 07:30:08.543239] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:19:34.792 07:30:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 139301 00:19:34.792 07:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 139301 ']' 00:19:34.792 07:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 139301 00:19:34.792 07:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:19:34.792 07:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:34.792 07:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 139301 00:19:34.792 07:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:34.792 07:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:34.792 07:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 139301' 00:19:34.792 killing process with pid 139301 00:19:34.792 07:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 139301 00:19:34.792 [2024-07-12 07:30:08.594833] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:34.792 07:30:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 139301 00:19:34.792 [2024-07-12 07:30:08.653120] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:35.360 ************************************ 00:19:35.360 END TEST raid_state_function_test_sb 00:19:35.360 ************************************ 00:19:35.360 07:30:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:19:35.360 00:19:35.360 real 0m28.490s 00:19:35.360 user 0m52.094s 00:19:35.360 sys 0m5.014s 00:19:35.360 07:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:35.360 07:30:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.360 07:30:09 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:19:35.360 07:30:09 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:35.360 07:30:09 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:35.360 07:30:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:35.360 ************************************ 00:19:35.360 START TEST raid_superblock_test 00:19:35.360 ************************************ 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 3 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 
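raid_superblock_test parameterizes on the raid level and base-bdev count declared above; because concat is a striped level, the test passes a strip size to bdev_raid_create, whereas raid1 would not. A sketch of that argument selection, mirroring the '[' concat '!=' raid1 ']' branch visible in the trace (the raid1 fallback values are an assumption, not shown in this excerpt):

    raid_level=concat
    num_base_bdevs=3
    if [ "$raid_level" != raid1 ]; then
        # Striped levels take a strip size in KiB via -z
        strip_size=64
        strip_size_create_arg="-z $strip_size"
    else
        # Assumed fallback for mirrored levels: no strip size argument
        strip_size=
        strip_size_create_arg=
    fi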
00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=140265 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 140265 /var/tmp/spdk-raid.sock 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 140265 ']' 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:35.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:35.360 07:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.360 [2024-07-12 07:30:09.219890] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
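bdev_svc is the minimal SPDK application the test drives; every subsequent step is an RPC against its UNIX socket. A standalone sketch of the startup being logged here, assuming an SPDK checkout at $SPDK_DIR and using rpc_get_methods as a readiness probe in place of the harness's waitforlisten helper:

    # launch the RPC target with raid debug logging enabled
    $SPDK_DIR/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # poll until the socket accepts RPCs
    until $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done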
00:19:35.360 [2024-07-12 07:30:09.220434] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140265 ] 00:19:35.620 [2024-07-12 07:30:09.379137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.620 [2024-07-12 07:30:09.476007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.878 [2024-07-12 07:30:09.560294] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:36.446 07:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:36.446 07:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:19:36.446 07:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:19:36.446 07:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:36.446 07:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:19:36.446 07:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:19:36.446 07:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:36.446 07:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:36.446 07:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:36.446 07:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:36.446 07:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:36.705 malloc1 00:19:36.705 07:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:36.964 [2024-07-12 07:30:10.609272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:36.964 [2024-07-12 07:30:10.609609] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.964 [2024-07-12 07:30:10.609812] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:19:36.964 [2024-07-12 07:30:10.609975] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.964 [2024-07-12 07:30:10.613426] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.964 [2024-07-12 07:30:10.613616] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:36.964 pt1 00:19:36.964 07:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:36.964 07:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:36.964 07:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:19:36.964 07:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:19:36.964 07:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:36.964 07:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:19:36.964 07:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:36.964 07:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:36.964 07:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:37.224 malloc2 00:19:37.224 07:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:37.483 [2024-07-12 07:30:11.121794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:37.483 [2024-07-12 07:30:11.122149] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.483 [2024-07-12 07:30:11.122309] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:19:37.483 [2024-07-12 07:30:11.122461] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.483 [2024-07-12 07:30:11.125340] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.483 [2024-07-12 07:30:11.125518] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:37.483 pt2 00:19:37.483 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:37.483 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:37.483 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:19:37.484 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:19:37.484 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:37.484 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:37.484 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:37.484 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:37.484 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:37.484 malloc3 00:19:37.746 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:37.746 [2024-07-12 07:30:11.598027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:37.746 [2024-07-12 07:30:11.598366] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.746 [2024-07-12 07:30:11.598525] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:37.746 [2024-07-12 07:30:11.598663] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.746 [2024-07-12 07:30:11.601718] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.746 [2024-07-12 07:30:11.601907] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:37.746 pt3 00:19:37.746 07:30:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:37.746 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:37.746 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:19:38.008 [2024-07-12 07:30:11.806382] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:38.008 [2024-07-12 07:30:11.809355] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:38.008 [2024-07-12 07:30:11.809573] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:38.008 [2024-07-12 07:30:11.809932] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:19:38.008 [2024-07-12 07:30:11.810051] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:38.008 [2024-07-12 07:30:11.810332] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:19:38.008 [2024-07-12 07:30:11.810945] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:19:38.008 [2024-07-12 07:30:11.811065] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:19:38.008 [2024-07-12 07:30:11.811385] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.008 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:38.008 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:38.008 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:38.008 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:38.008 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:38.008 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:38.008 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:38.008 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:38.008 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:38.008 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:38.008 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.008 07:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.267 07:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:38.267 "name": "raid_bdev1", 00:19:38.267 "uuid": "916196d2-597d-4442-82f3-054db9dddc1f", 00:19:38.267 "strip_size_kb": 64, 00:19:38.267 "state": "online", 00:19:38.267 "raid_level": "concat", 00:19:38.267 "superblock": true, 00:19:38.267 "num_base_bdevs": 3, 00:19:38.267 "num_base_bdevs_discovered": 3, 00:19:38.267 "num_base_bdevs_operational": 3, 00:19:38.267 "base_bdevs_list": [ 00:19:38.267 { 00:19:38.267 "name": "pt1", 00:19:38.267 "uuid": "585d9e85-6744-5241-bb80-178fe689d77a", 00:19:38.267 
"is_configured": true, 00:19:38.267 "data_offset": 2048, 00:19:38.267 "data_size": 63488 00:19:38.267 }, 00:19:38.267 { 00:19:38.267 "name": "pt2", 00:19:38.267 "uuid": "c41b4263-a9ed-5eab-b5f4-cc340d8c3d8d", 00:19:38.267 "is_configured": true, 00:19:38.267 "data_offset": 2048, 00:19:38.267 "data_size": 63488 00:19:38.267 }, 00:19:38.267 { 00:19:38.267 "name": "pt3", 00:19:38.267 "uuid": "f0688a4c-35fd-5a50-b3b6-895291f836f0", 00:19:38.267 "is_configured": true, 00:19:38.267 "data_offset": 2048, 00:19:38.267 "data_size": 63488 00:19:38.267 } 00:19:38.267 ] 00:19:38.267 }' 00:19:38.267 07:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:38.267 07:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.834 07:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:19:38.834 07:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:38.834 07:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:38.834 07:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:38.834 07:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:38.834 07:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:38.834 07:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:38.834 07:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:39.093 [2024-07-12 07:30:12.899853] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:39.093 07:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:39.093 "name": "raid_bdev1", 00:19:39.093 "aliases": [ 00:19:39.093 "916196d2-597d-4442-82f3-054db9dddc1f" 00:19:39.093 ], 00:19:39.093 "product_name": "Raid Volume", 00:19:39.093 "block_size": 512, 00:19:39.093 "num_blocks": 190464, 00:19:39.093 "uuid": "916196d2-597d-4442-82f3-054db9dddc1f", 00:19:39.093 "assigned_rate_limits": { 00:19:39.093 "rw_ios_per_sec": 0, 00:19:39.093 "rw_mbytes_per_sec": 0, 00:19:39.093 "r_mbytes_per_sec": 0, 00:19:39.093 "w_mbytes_per_sec": 0 00:19:39.093 }, 00:19:39.093 "claimed": false, 00:19:39.093 "zoned": false, 00:19:39.093 "supported_io_types": { 00:19:39.093 "read": true, 00:19:39.093 "write": true, 00:19:39.093 "unmap": true, 00:19:39.093 "write_zeroes": true, 00:19:39.093 "flush": true, 00:19:39.093 "reset": true, 00:19:39.093 "compare": false, 00:19:39.093 "compare_and_write": false, 00:19:39.093 "abort": false, 00:19:39.093 "nvme_admin": false, 00:19:39.093 "nvme_io": false 00:19:39.093 }, 00:19:39.093 "memory_domains": [ 00:19:39.093 { 00:19:39.093 "dma_device_id": "system", 00:19:39.093 "dma_device_type": 1 00:19:39.093 }, 00:19:39.093 { 00:19:39.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.093 "dma_device_type": 2 00:19:39.093 }, 00:19:39.093 { 00:19:39.093 "dma_device_id": "system", 00:19:39.093 "dma_device_type": 1 00:19:39.093 }, 00:19:39.093 { 00:19:39.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.093 "dma_device_type": 2 00:19:39.093 }, 00:19:39.093 { 00:19:39.093 "dma_device_id": "system", 00:19:39.093 "dma_device_type": 1 00:19:39.093 }, 00:19:39.093 { 00:19:39.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.093 "dma_device_type": 
2 00:19:39.093 } 00:19:39.093 ], 00:19:39.093 "driver_specific": { 00:19:39.093 "raid": { 00:19:39.093 "uuid": "916196d2-597d-4442-82f3-054db9dddc1f", 00:19:39.093 "strip_size_kb": 64, 00:19:39.093 "state": "online", 00:19:39.093 "raid_level": "concat", 00:19:39.093 "superblock": true, 00:19:39.093 "num_base_bdevs": 3, 00:19:39.093 "num_base_bdevs_discovered": 3, 00:19:39.093 "num_base_bdevs_operational": 3, 00:19:39.093 "base_bdevs_list": [ 00:19:39.093 { 00:19:39.093 "name": "pt1", 00:19:39.093 "uuid": "585d9e85-6744-5241-bb80-178fe689d77a", 00:19:39.093 "is_configured": true, 00:19:39.093 "data_offset": 2048, 00:19:39.093 "data_size": 63488 00:19:39.093 }, 00:19:39.093 { 00:19:39.093 "name": "pt2", 00:19:39.093 "uuid": "c41b4263-a9ed-5eab-b5f4-cc340d8c3d8d", 00:19:39.093 "is_configured": true, 00:19:39.093 "data_offset": 2048, 00:19:39.093 "data_size": 63488 00:19:39.093 }, 00:19:39.093 { 00:19:39.093 "name": "pt3", 00:19:39.093 "uuid": "f0688a4c-35fd-5a50-b3b6-895291f836f0", 00:19:39.093 "is_configured": true, 00:19:39.093 "data_offset": 2048, 00:19:39.093 "data_size": 63488 00:19:39.093 } 00:19:39.093 ] 00:19:39.093 } 00:19:39.093 } 00:19:39.093 }' 00:19:39.093 07:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:39.093 07:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:39.093 pt2 00:19:39.093 pt3' 00:19:39.093 07:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:39.093 07:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:39.093 07:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:39.352 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:39.352 "name": "pt1", 00:19:39.352 "aliases": [ 00:19:39.352 "585d9e85-6744-5241-bb80-178fe689d77a" 00:19:39.352 ], 00:19:39.352 "product_name": "passthru", 00:19:39.352 "block_size": 512, 00:19:39.352 "num_blocks": 65536, 00:19:39.352 "uuid": "585d9e85-6744-5241-bb80-178fe689d77a", 00:19:39.352 "assigned_rate_limits": { 00:19:39.352 "rw_ios_per_sec": 0, 00:19:39.352 "rw_mbytes_per_sec": 0, 00:19:39.352 "r_mbytes_per_sec": 0, 00:19:39.352 "w_mbytes_per_sec": 0 00:19:39.352 }, 00:19:39.352 "claimed": true, 00:19:39.352 "claim_type": "exclusive_write", 00:19:39.352 "zoned": false, 00:19:39.352 "supported_io_types": { 00:19:39.352 "read": true, 00:19:39.352 "write": true, 00:19:39.352 "unmap": true, 00:19:39.352 "write_zeroes": true, 00:19:39.352 "flush": true, 00:19:39.352 "reset": true, 00:19:39.352 "compare": false, 00:19:39.352 "compare_and_write": false, 00:19:39.352 "abort": true, 00:19:39.352 "nvme_admin": false, 00:19:39.352 "nvme_io": false 00:19:39.352 }, 00:19:39.352 "memory_domains": [ 00:19:39.352 { 00:19:39.352 "dma_device_id": "system", 00:19:39.352 "dma_device_type": 1 00:19:39.352 }, 00:19:39.352 { 00:19:39.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.352 "dma_device_type": 2 00:19:39.352 } 00:19:39.352 ], 00:19:39.352 "driver_specific": { 00:19:39.352 "passthru": { 00:19:39.352 "name": "pt1", 00:19:39.352 "base_bdev_name": "malloc1" 00:19:39.352 } 00:19:39.352 } 00:19:39.352 }' 00:19:39.352 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:39.611 07:30:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:39.611 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:39.611 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:39.611 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:39.611 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:39.611 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:39.611 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:39.611 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:39.611 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:39.611 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:39.870 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:39.870 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:39.870 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:39.870 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:39.870 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:39.870 "name": "pt2", 00:19:39.870 "aliases": [ 00:19:39.870 "c41b4263-a9ed-5eab-b5f4-cc340d8c3d8d" 00:19:39.870 ], 00:19:39.870 "product_name": "passthru", 00:19:39.870 "block_size": 512, 00:19:39.870 "num_blocks": 65536, 00:19:39.870 "uuid": "c41b4263-a9ed-5eab-b5f4-cc340d8c3d8d", 00:19:39.870 "assigned_rate_limits": { 00:19:39.870 "rw_ios_per_sec": 0, 00:19:39.870 "rw_mbytes_per_sec": 0, 00:19:39.870 "r_mbytes_per_sec": 0, 00:19:39.870 "w_mbytes_per_sec": 0 00:19:39.870 }, 00:19:39.870 "claimed": true, 00:19:39.870 "claim_type": "exclusive_write", 00:19:39.870 "zoned": false, 00:19:39.870 "supported_io_types": { 00:19:39.870 "read": true, 00:19:39.870 "write": true, 00:19:39.870 "unmap": true, 00:19:39.870 "write_zeroes": true, 00:19:39.870 "flush": true, 00:19:39.870 "reset": true, 00:19:39.870 "compare": false, 00:19:39.870 "compare_and_write": false, 00:19:39.870 "abort": true, 00:19:39.870 "nvme_admin": false, 00:19:39.870 "nvme_io": false 00:19:39.870 }, 00:19:39.870 "memory_domains": [ 00:19:39.870 { 00:19:39.870 "dma_device_id": "system", 00:19:39.870 "dma_device_type": 1 00:19:39.870 }, 00:19:39.870 { 00:19:39.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.870 "dma_device_type": 2 00:19:39.870 } 00:19:39.870 ], 00:19:39.870 "driver_specific": { 00:19:39.870 "passthru": { 00:19:39.870 "name": "pt2", 00:19:39.870 "base_bdev_name": "malloc2" 00:19:39.870 } 00:19:39.870 } 00:19:39.870 }' 00:19:39.870 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:40.128 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:40.128 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:40.128 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:40.128 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:40.128 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:40.128 07:30:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:40.129 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:40.129 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:40.129 07:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:40.387 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:40.387 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:40.387 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:40.387 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:40.387 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:40.646 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:40.646 "name": "pt3", 00:19:40.646 "aliases": [ 00:19:40.646 "f0688a4c-35fd-5a50-b3b6-895291f836f0" 00:19:40.646 ], 00:19:40.646 "product_name": "passthru", 00:19:40.646 "block_size": 512, 00:19:40.646 "num_blocks": 65536, 00:19:40.646 "uuid": "f0688a4c-35fd-5a50-b3b6-895291f836f0", 00:19:40.646 "assigned_rate_limits": { 00:19:40.646 "rw_ios_per_sec": 0, 00:19:40.646 "rw_mbytes_per_sec": 0, 00:19:40.646 "r_mbytes_per_sec": 0, 00:19:40.646 "w_mbytes_per_sec": 0 00:19:40.646 }, 00:19:40.646 "claimed": true, 00:19:40.646 "claim_type": "exclusive_write", 00:19:40.646 "zoned": false, 00:19:40.646 "supported_io_types": { 00:19:40.646 "read": true, 00:19:40.646 "write": true, 00:19:40.646 "unmap": true, 00:19:40.646 "write_zeroes": true, 00:19:40.646 "flush": true, 00:19:40.646 "reset": true, 00:19:40.646 "compare": false, 00:19:40.646 "compare_and_write": false, 00:19:40.646 "abort": true, 00:19:40.646 "nvme_admin": false, 00:19:40.646 "nvme_io": false 00:19:40.646 }, 00:19:40.646 "memory_domains": [ 00:19:40.646 { 00:19:40.646 "dma_device_id": "system", 00:19:40.646 "dma_device_type": 1 00:19:40.646 }, 00:19:40.646 { 00:19:40.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.646 "dma_device_type": 2 00:19:40.646 } 00:19:40.646 ], 00:19:40.646 "driver_specific": { 00:19:40.646 "passthru": { 00:19:40.646 "name": "pt3", 00:19:40.646 "base_bdev_name": "malloc3" 00:19:40.646 } 00:19:40.646 } 00:19:40.646 }' 00:19:40.646 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:40.646 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:40.646 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:40.646 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:40.646 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:40.904 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:40.904 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:40.904 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:40.904 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:40.904 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:40.904 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:19:40.904 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:40.904 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:40.904 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:19:41.163 [2024-07-12 07:30:14.936253] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:41.163 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=916196d2-597d-4442-82f3-054db9dddc1f 00:19:41.163 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 916196d2-597d-4442-82f3-054db9dddc1f ']' 00:19:41.163 07:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:41.421 [2024-07-12 07:30:15.144044] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:41.421 [2024-07-12 07:30:15.144307] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:41.421 [2024-07-12 07:30:15.144602] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:41.421 [2024-07-12 07:30:15.144779] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:41.421 [2024-07-12 07:30:15.144867] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:19:41.421 07:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.421 07:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:19:41.678 07:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:19:41.678 07:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:19:41.678 07:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:41.678 07:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:41.936 07:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:41.936 07:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:42.193 07:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:42.193 07:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:42.451 07:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:42.451 07:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:42.726 07:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:19:42.726 07:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:42.726 07:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:19:42.726 07:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:42.726 07:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:42.726 07:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:42.726 07:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:42.726 07:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:42.726 07:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:42.726 07:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:42.726 07:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:42.726 07:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:42.727 07:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:42.990 [2024-07-12 07:30:16.681734] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:42.990 [2024-07-12 07:30:16.684583] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:42.990 [2024-07-12 07:30:16.684797] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:42.990 [2024-07-12 07:30:16.684950] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:42.990 [2024-07-12 07:30:16.685159] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:42.990 [2024-07-12 07:30:16.685342] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:42.990 [2024-07-12 07:30:16.685476] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:42.990 [2024-07-12 07:30:16.685576] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:19:42.990 request: 00:19:42.990 { 00:19:42.990 "name": "raid_bdev1", 00:19:42.990 "raid_level": "concat", 00:19:42.990 "base_bdevs": [ 00:19:42.990 "malloc1", 00:19:42.990 "malloc2", 00:19:42.990 "malloc3" 00:19:42.990 ], 00:19:42.990 "superblock": false, 00:19:42.990 "strip_size_kb": 64, 00:19:42.990 "method": "bdev_raid_create", 00:19:42.990 "req_id": 1 00:19:42.990 } 00:19:42.990 Got JSON-RPC error response 00:19:42.990 response: 00:19:42.990 { 00:19:42.990 "code": -17, 00:19:42.990 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:42.990 } 00:19:42.990 07:30:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@651 -- # es=1 00:19:42.990 07:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:42.990 07:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:42.990 07:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:42.990 07:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:19:42.990 07:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.249 07:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:19:43.249 07:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:19:43.249 07:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:43.249 [2024-07-12 07:30:17.101607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:43.249 [2024-07-12 07:30:17.102000] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.249 [2024-07-12 07:30:17.102096] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:43.249 [2024-07-12 07:30:17.102223] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.249 [2024-07-12 07:30:17.105242] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.249 [2024-07-12 07:30:17.105461] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:43.249 [2024-07-12 07:30:17.105734] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:43.249 [2024-07-12 07:30:17.105872] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:43.249 pt1 00:19:43.249 07:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:19:43.249 07:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:43.249 07:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:43.249 07:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:43.249 07:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:43.249 07:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:43.249 07:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:43.249 07:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:43.249 07:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:43.249 07:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:43.249 07:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.249 07:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.507 07:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:19:43.507 "name": "raid_bdev1", 00:19:43.507 "uuid": "916196d2-597d-4442-82f3-054db9dddc1f", 00:19:43.508 "strip_size_kb": 64, 00:19:43.508 "state": "configuring", 00:19:43.508 "raid_level": "concat", 00:19:43.508 "superblock": true, 00:19:43.508 "num_base_bdevs": 3, 00:19:43.508 "num_base_bdevs_discovered": 1, 00:19:43.508 "num_base_bdevs_operational": 3, 00:19:43.508 "base_bdevs_list": [ 00:19:43.508 { 00:19:43.508 "name": "pt1", 00:19:43.508 "uuid": "585d9e85-6744-5241-bb80-178fe689d77a", 00:19:43.508 "is_configured": true, 00:19:43.508 "data_offset": 2048, 00:19:43.508 "data_size": 63488 00:19:43.508 }, 00:19:43.508 { 00:19:43.508 "name": null, 00:19:43.508 "uuid": "c41b4263-a9ed-5eab-b5f4-cc340d8c3d8d", 00:19:43.508 "is_configured": false, 00:19:43.508 "data_offset": 2048, 00:19:43.508 "data_size": 63488 00:19:43.508 }, 00:19:43.508 { 00:19:43.508 "name": null, 00:19:43.508 "uuid": "f0688a4c-35fd-5a50-b3b6-895291f836f0", 00:19:43.508 "is_configured": false, 00:19:43.508 "data_offset": 2048, 00:19:43.508 "data_size": 63488 00:19:43.508 } 00:19:43.508 ] 00:19:43.508 }' 00:19:43.508 07:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:43.508 07:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.076 07:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:19:44.076 07:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:44.335 [2024-07-12 07:30:18.210095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:44.335 [2024-07-12 07:30:18.210463] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.335 [2024-07-12 07:30:18.210578] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:19:44.335 [2024-07-12 07:30:18.210839] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.335 [2024-07-12 07:30:18.211403] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.335 [2024-07-12 07:30:18.211560] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:44.335 [2024-07-12 07:30:18.211783] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:44.335 [2024-07-12 07:30:18.211922] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:44.335 pt2 00:19:44.594 07:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:44.594 [2024-07-12 07:30:18.414139] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:44.594 07:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:19:44.594 07:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:44.594 07:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:44.594 07:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:44.594 07:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:44.594 07:30:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:44.594 07:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:44.594 07:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:44.594 07:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:44.594 07:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:44.594 07:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.594 07:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.853 07:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:44.853 "name": "raid_bdev1", 00:19:44.853 "uuid": "916196d2-597d-4442-82f3-054db9dddc1f", 00:19:44.853 "strip_size_kb": 64, 00:19:44.853 "state": "configuring", 00:19:44.853 "raid_level": "concat", 00:19:44.853 "superblock": true, 00:19:44.853 "num_base_bdevs": 3, 00:19:44.853 "num_base_bdevs_discovered": 1, 00:19:44.853 "num_base_bdevs_operational": 3, 00:19:44.853 "base_bdevs_list": [ 00:19:44.853 { 00:19:44.853 "name": "pt1", 00:19:44.853 "uuid": "585d9e85-6744-5241-bb80-178fe689d77a", 00:19:44.853 "is_configured": true, 00:19:44.853 "data_offset": 2048, 00:19:44.853 "data_size": 63488 00:19:44.853 }, 00:19:44.853 { 00:19:44.853 "name": null, 00:19:44.853 "uuid": "c41b4263-a9ed-5eab-b5f4-cc340d8c3d8d", 00:19:44.853 "is_configured": false, 00:19:44.853 "data_offset": 2048, 00:19:44.853 "data_size": 63488 00:19:44.853 }, 00:19:44.853 { 00:19:44.853 "name": null, 00:19:44.853 "uuid": "f0688a4c-35fd-5a50-b3b6-895291f836f0", 00:19:44.853 "is_configured": false, 00:19:44.853 "data_offset": 2048, 00:19:44.853 "data_size": 63488 00:19:44.853 } 00:19:44.853 ] 00:19:44.853 }' 00:19:44.853 07:30:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:44.853 07:30:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.422 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:19:45.422 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:45.422 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:45.682 [2024-07-12 07:30:19.378326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:45.682 [2024-07-12 07:30:19.378722] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.682 [2024-07-12 07:30:19.378802] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:45.682 [2024-07-12 07:30:19.378962] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.682 [2024-07-12 07:30:19.379530] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.682 [2024-07-12 07:30:19.379691] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:45.682 [2024-07-12 07:30:19.379897] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:45.682 [2024-07-12 07:30:19.380017] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt2 is claimed 00:19:45.682 pt2 00:19:45.682 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:19:45.682 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:45.682 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:45.941 [2024-07-12 07:30:19.666346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:45.941 [2024-07-12 07:30:19.666688] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.941 [2024-07-12 07:30:19.666836] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:45.941 [2024-07-12 07:30:19.666998] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.941 [2024-07-12 07:30:19.667624] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.941 [2024-07-12 07:30:19.667791] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:45.941 [2024-07-12 07:30:19.668013] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:45.941 [2024-07-12 07:30:19.668142] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:45.941 [2024-07-12 07:30:19.668451] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:19:45.941 [2024-07-12 07:30:19.668558] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:45.941 [2024-07-12 07:30:19.668714] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:19:45.941 [2024-07-12 07:30:19.669091] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:19:45.941 [2024-07-12 07:30:19.669212] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:19:45.941 [2024-07-12 07:30:19.669446] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:45.941 pt3 00:19:45.941 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:19:45.941 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:45.941 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:45.941 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:45.941 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:45.941 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:45.941 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:45.941 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:45.941 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:45.941 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:45.941 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:45.941 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 
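With all three passthru bdevs re-claimed, the array is back online (io device 0x616000008a80) and verify_raid_bdev_state re-checks it against "online concat 64 3". Condensed from the surrounding xtrace, the check amounts to roughly this sketch, with $rpc_py as above (the exact assertions live in bdev_raid.sh@116-128 and are not shown verbatim in this log):

    # pull the raid bdev's state object and assert the expected fields
    tmp=$($rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<< "$tmp") == online ]]
    [[ $(jq -r '.raid_level' <<< "$tmp") == concat ]]
    [[ $(jq -r '.strip_size_kb' <<< "$tmp") == 64 ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$tmp") == 3 ]]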
00:19:45.941 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.941 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.200 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:46.200 "name": "raid_bdev1", 00:19:46.200 "uuid": "916196d2-597d-4442-82f3-054db9dddc1f", 00:19:46.200 "strip_size_kb": 64, 00:19:46.200 "state": "online", 00:19:46.200 "raid_level": "concat", 00:19:46.200 "superblock": true, 00:19:46.200 "num_base_bdevs": 3, 00:19:46.200 "num_base_bdevs_discovered": 3, 00:19:46.200 "num_base_bdevs_operational": 3, 00:19:46.200 "base_bdevs_list": [ 00:19:46.200 { 00:19:46.200 "name": "pt1", 00:19:46.200 "uuid": "585d9e85-6744-5241-bb80-178fe689d77a", 00:19:46.200 "is_configured": true, 00:19:46.200 "data_offset": 2048, 00:19:46.200 "data_size": 63488 00:19:46.200 }, 00:19:46.200 { 00:19:46.200 "name": "pt2", 00:19:46.200 "uuid": "c41b4263-a9ed-5eab-b5f4-cc340d8c3d8d", 00:19:46.200 "is_configured": true, 00:19:46.200 "data_offset": 2048, 00:19:46.200 "data_size": 63488 00:19:46.200 }, 00:19:46.200 { 00:19:46.200 "name": "pt3", 00:19:46.200 "uuid": "f0688a4c-35fd-5a50-b3b6-895291f836f0", 00:19:46.200 "is_configured": true, 00:19:46.200 "data_offset": 2048, 00:19:46.200 "data_size": 63488 00:19:46.200 } 00:19:46.200 ] 00:19:46.200 }' 00:19:46.200 07:30:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:46.200 07:30:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.768 07:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:19:46.768 07:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:46.768 07:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:46.768 07:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:46.768 07:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:46.768 07:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:46.768 07:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:46.768 07:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:47.028 [2024-07-12 07:30:20.666843] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:47.028 07:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:47.028 "name": "raid_bdev1", 00:19:47.028 "aliases": [ 00:19:47.028 "916196d2-597d-4442-82f3-054db9dddc1f" 00:19:47.028 ], 00:19:47.028 "product_name": "Raid Volume", 00:19:47.028 "block_size": 512, 00:19:47.028 "num_blocks": 190464, 00:19:47.028 "uuid": "916196d2-597d-4442-82f3-054db9dddc1f", 00:19:47.028 "assigned_rate_limits": { 00:19:47.028 "rw_ios_per_sec": 0, 00:19:47.028 "rw_mbytes_per_sec": 0, 00:19:47.028 "r_mbytes_per_sec": 0, 00:19:47.028 "w_mbytes_per_sec": 0 00:19:47.028 }, 00:19:47.028 "claimed": false, 00:19:47.028 "zoned": false, 00:19:47.028 "supported_io_types": { 00:19:47.028 "read": true, 00:19:47.028 "write": true, 00:19:47.028 "unmap": true, 00:19:47.028 "write_zeroes": true, 00:19:47.028 
"flush": true, 00:19:47.028 "reset": true, 00:19:47.028 "compare": false, 00:19:47.028 "compare_and_write": false, 00:19:47.028 "abort": false, 00:19:47.028 "nvme_admin": false, 00:19:47.028 "nvme_io": false 00:19:47.028 }, 00:19:47.028 "memory_domains": [ 00:19:47.028 { 00:19:47.028 "dma_device_id": "system", 00:19:47.028 "dma_device_type": 1 00:19:47.028 }, 00:19:47.028 { 00:19:47.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.028 "dma_device_type": 2 00:19:47.028 }, 00:19:47.028 { 00:19:47.028 "dma_device_id": "system", 00:19:47.028 "dma_device_type": 1 00:19:47.028 }, 00:19:47.028 { 00:19:47.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.028 "dma_device_type": 2 00:19:47.028 }, 00:19:47.028 { 00:19:47.028 "dma_device_id": "system", 00:19:47.028 "dma_device_type": 1 00:19:47.028 }, 00:19:47.028 { 00:19:47.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.028 "dma_device_type": 2 00:19:47.028 } 00:19:47.028 ], 00:19:47.028 "driver_specific": { 00:19:47.028 "raid": { 00:19:47.028 "uuid": "916196d2-597d-4442-82f3-054db9dddc1f", 00:19:47.028 "strip_size_kb": 64, 00:19:47.028 "state": "online", 00:19:47.028 "raid_level": "concat", 00:19:47.028 "superblock": true, 00:19:47.028 "num_base_bdevs": 3, 00:19:47.028 "num_base_bdevs_discovered": 3, 00:19:47.028 "num_base_bdevs_operational": 3, 00:19:47.028 "base_bdevs_list": [ 00:19:47.028 { 00:19:47.028 "name": "pt1", 00:19:47.028 "uuid": "585d9e85-6744-5241-bb80-178fe689d77a", 00:19:47.028 "is_configured": true, 00:19:47.028 "data_offset": 2048, 00:19:47.028 "data_size": 63488 00:19:47.028 }, 00:19:47.028 { 00:19:47.028 "name": "pt2", 00:19:47.028 "uuid": "c41b4263-a9ed-5eab-b5f4-cc340d8c3d8d", 00:19:47.028 "is_configured": true, 00:19:47.028 "data_offset": 2048, 00:19:47.028 "data_size": 63488 00:19:47.028 }, 00:19:47.028 { 00:19:47.028 "name": "pt3", 00:19:47.028 "uuid": "f0688a4c-35fd-5a50-b3b6-895291f836f0", 00:19:47.028 "is_configured": true, 00:19:47.028 "data_offset": 2048, 00:19:47.028 "data_size": 63488 00:19:47.028 } 00:19:47.028 ] 00:19:47.028 } 00:19:47.028 } 00:19:47.028 }' 00:19:47.028 07:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:47.028 07:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:47.028 pt2 00:19:47.028 pt3' 00:19:47.028 07:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:47.028 07:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:47.028 07:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:47.310 07:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:47.310 "name": "pt1", 00:19:47.310 "aliases": [ 00:19:47.310 "585d9e85-6744-5241-bb80-178fe689d77a" 00:19:47.310 ], 00:19:47.310 "product_name": "passthru", 00:19:47.310 "block_size": 512, 00:19:47.310 "num_blocks": 65536, 00:19:47.310 "uuid": "585d9e85-6744-5241-bb80-178fe689d77a", 00:19:47.310 "assigned_rate_limits": { 00:19:47.310 "rw_ios_per_sec": 0, 00:19:47.310 "rw_mbytes_per_sec": 0, 00:19:47.310 "r_mbytes_per_sec": 0, 00:19:47.310 "w_mbytes_per_sec": 0 00:19:47.310 }, 00:19:47.310 "claimed": true, 00:19:47.310 "claim_type": "exclusive_write", 00:19:47.310 "zoned": false, 00:19:47.310 "supported_io_types": { 00:19:47.310 "read": true, 00:19:47.310 "write": 
true, 00:19:47.310 "unmap": true, 00:19:47.310 "write_zeroes": true, 00:19:47.310 "flush": true, 00:19:47.310 "reset": true, 00:19:47.310 "compare": false, 00:19:47.310 "compare_and_write": false, 00:19:47.310 "abort": true, 00:19:47.310 "nvme_admin": false, 00:19:47.310 "nvme_io": false 00:19:47.310 }, 00:19:47.310 "memory_domains": [ 00:19:47.310 { 00:19:47.310 "dma_device_id": "system", 00:19:47.310 "dma_device_type": 1 00:19:47.310 }, 00:19:47.311 { 00:19:47.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.311 "dma_device_type": 2 00:19:47.311 } 00:19:47.311 ], 00:19:47.311 "driver_specific": { 00:19:47.311 "passthru": { 00:19:47.311 "name": "pt1", 00:19:47.311 "base_bdev_name": "malloc1" 00:19:47.311 } 00:19:47.311 } 00:19:47.311 }' 00:19:47.311 07:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:47.311 07:30:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:47.311 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:47.311 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:47.311 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:47.311 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:47.311 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:47.311 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:47.595 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:47.595 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:47.595 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:47.595 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:47.595 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:47.595 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:47.595 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:47.854 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:47.854 "name": "pt2", 00:19:47.854 "aliases": [ 00:19:47.854 "c41b4263-a9ed-5eab-b5f4-cc340d8c3d8d" 00:19:47.854 ], 00:19:47.854 "product_name": "passthru", 00:19:47.854 "block_size": 512, 00:19:47.854 "num_blocks": 65536, 00:19:47.854 "uuid": "c41b4263-a9ed-5eab-b5f4-cc340d8c3d8d", 00:19:47.854 "assigned_rate_limits": { 00:19:47.854 "rw_ios_per_sec": 0, 00:19:47.854 "rw_mbytes_per_sec": 0, 00:19:47.854 "r_mbytes_per_sec": 0, 00:19:47.854 "w_mbytes_per_sec": 0 00:19:47.854 }, 00:19:47.854 "claimed": true, 00:19:47.854 "claim_type": "exclusive_write", 00:19:47.854 "zoned": false, 00:19:47.854 "supported_io_types": { 00:19:47.854 "read": true, 00:19:47.854 "write": true, 00:19:47.854 "unmap": true, 00:19:47.854 "write_zeroes": true, 00:19:47.854 "flush": true, 00:19:47.854 "reset": true, 00:19:47.854 "compare": false, 00:19:47.854 "compare_and_write": false, 00:19:47.854 "abort": true, 00:19:47.854 "nvme_admin": false, 00:19:47.854 "nvme_io": false 00:19:47.854 }, 00:19:47.854 "memory_domains": [ 00:19:47.854 { 00:19:47.854 "dma_device_id": "system", 00:19:47.854 "dma_device_type": 1 00:19:47.854 }, 00:19:47.854 
{ 00:19:47.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.854 "dma_device_type": 2 00:19:47.854 } 00:19:47.854 ], 00:19:47.854 "driver_specific": { 00:19:47.854 "passthru": { 00:19:47.854 "name": "pt2", 00:19:47.854 "base_bdev_name": "malloc2" 00:19:47.854 } 00:19:47.854 } 00:19:47.854 }' 00:19:47.854 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:47.854 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:47.854 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:47.854 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:48.112 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:48.112 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:48.112 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:48.112 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:48.112 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:48.112 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:48.112 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:48.112 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:48.112 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:48.112 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:48.112 07:30:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:48.370 07:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:48.370 "name": "pt3", 00:19:48.370 "aliases": [ 00:19:48.370 "f0688a4c-35fd-5a50-b3b6-895291f836f0" 00:19:48.370 ], 00:19:48.370 "product_name": "passthru", 00:19:48.370 "block_size": 512, 00:19:48.370 "num_blocks": 65536, 00:19:48.370 "uuid": "f0688a4c-35fd-5a50-b3b6-895291f836f0", 00:19:48.370 "assigned_rate_limits": { 00:19:48.370 "rw_ios_per_sec": 0, 00:19:48.371 "rw_mbytes_per_sec": 0, 00:19:48.371 "r_mbytes_per_sec": 0, 00:19:48.371 "w_mbytes_per_sec": 0 00:19:48.371 }, 00:19:48.371 "claimed": true, 00:19:48.371 "claim_type": "exclusive_write", 00:19:48.371 "zoned": false, 00:19:48.371 "supported_io_types": { 00:19:48.371 "read": true, 00:19:48.371 "write": true, 00:19:48.371 "unmap": true, 00:19:48.371 "write_zeroes": true, 00:19:48.371 "flush": true, 00:19:48.371 "reset": true, 00:19:48.371 "compare": false, 00:19:48.371 "compare_and_write": false, 00:19:48.371 "abort": true, 00:19:48.371 "nvme_admin": false, 00:19:48.371 "nvme_io": false 00:19:48.371 }, 00:19:48.371 "memory_domains": [ 00:19:48.371 { 00:19:48.371 "dma_device_id": "system", 00:19:48.371 "dma_device_type": 1 00:19:48.371 }, 00:19:48.371 { 00:19:48.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.371 "dma_device_type": 2 00:19:48.371 } 00:19:48.371 ], 00:19:48.371 "driver_specific": { 00:19:48.371 "passthru": { 00:19:48.371 "name": "pt3", 00:19:48.371 "base_bdev_name": "malloc3" 00:19:48.371 } 00:19:48.371 } 00:19:48.371 }' 00:19:48.371 07:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:48.629 07:30:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:48.629 07:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:48.629 07:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:48.629 07:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:48.629 07:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:48.629 07:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:48.629 07:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:48.629 07:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:48.629 07:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:48.886 07:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:48.886 07:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:48.886 07:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:19:48.886 07:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:49.144 [2024-07-12 07:30:22.778312] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:49.144 07:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 916196d2-597d-4442-82f3-054db9dddc1f '!=' 916196d2-597d-4442-82f3-054db9dddc1f ']' 00:19:49.144 07:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:19:49.144 07:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:49.144 07:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:49.144 07:30:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 140265 00:19:49.144 07:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 140265 ']' 00:19:49.144 07:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 140265 00:19:49.144 07:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:19:49.144 07:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:49.144 07:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 140265 00:19:49.144 07:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:49.144 07:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:49.144 07:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 140265' 00:19:49.144 killing process with pid 140265 00:19:49.144 07:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 140265 00:19:49.144 [2024-07-12 07:30:22.827020] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:49.144 07:30:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 140265 00:19:49.144 [2024-07-12 07:30:22.827253] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:49.144 [2024-07-12 07:30:22.827438] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:49.145 
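The jq assertions above are the core of the superblock verification: each passthru base bdev (pt1 through pt3) is fetched with bdev_get_bdevs and its geometry pinned down (block_size 512, no metadata, no interleave, no DIF), and at @486 the re-queried raid bdev UUID is compared against the value captured earlier, which is how the test shows the identity was read back from the on-disk superblock rather than regenerated. A minimal sketch of the same checks, assuming the RPC socket and bdev names from this run (uuid_before stands in for the earlier captured value):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for name in pt1 pt2 pt3; do
  info=$($RPC bdev_get_bdevs -b "$name" | jq '.[]')
  [[ $(jq .block_size <<< "$info") == 512 ]]   # geometry inherited from the malloc backing bdev
  [[ $(jq .md_size    <<< "$info") == null ]]  # no per-block metadata on these base bdevs
  [[ $(jq .dif_type   <<< "$info") == null ]]  # and no protection information
done
uuid_now=$($RPC bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
[[ $uuid_now == "$uuid_before" ]]              # uuid_before: placeholder for the earlier value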
[2024-07-12 07:30:22.827554] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:19:49.145 [2024-07-12 07:30:22.890978] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:49.708 07:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:19:49.708 00:19:49.708 real 0m14.146s 00:19:49.709 user 0m24.963s 00:19:49.709 sys 0m2.701s 00:19:49.709 07:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:49.709 07:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.709 ************************************ 00:19:49.709 END TEST raid_superblock_test 00:19:49.709 ************************************ 00:19:49.709 07:30:23 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:19:49.709 07:30:23 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:19:49.709 07:30:23 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:49.709 07:30:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:49.709 ************************************ 00:19:49.709 START TEST raid_read_error_test 00:19:49.709 ************************************ 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 3 read 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.MhoFmOUSsZ 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=140741 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 140741 /var/tmp/spdk-raid.sock 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 140741 ']' 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:49.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:49.709 07:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.709 [2024-07-12 07:30:23.443085] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:19:49.709 [2024-07-12 07:30:23.443540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140741 ] 00:19:49.966 [2024-07-12 07:30:23.594976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.966 [2024-07-12 07:30:23.691718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.966 [2024-07-12 07:30:23.778645] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:50.532 07:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:50.532 07:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:19:50.532 07:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:50.532 07:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:50.790 BaseBdev1_malloc 00:19:50.790 07:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:51.048 true 00:19:51.048 07:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:51.307 [2024-07-12 07:30:25.018885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:51.307 [2024-07-12 07:30:25.019238] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.307 [2024-07-12 07:30:25.019331] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:19:51.307 [2024-07-12 07:30:25.019663] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.307 [2024-07-12 07:30:25.022760] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.307 [2024-07-12 07:30:25.022945] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:51.307 BaseBdev1 00:19:51.307 07:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:51.307 07:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:51.565 BaseBdev2_malloc 00:19:51.565 07:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:51.823 true 00:19:51.823 07:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:52.081 [2024-07-12 07:30:25.787256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:52.081 [2024-07-12 07:30:25.787640] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.081 [2024-07-12 07:30:25.787846] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:52.081 [2024-07-12 07:30:25.788017] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.081 [2024-07-12 07:30:25.791266] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.081 [2024-07-12 07:30:25.791448] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:52.081 BaseBdev2 00:19:52.081 07:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:52.081 07:30:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:52.339 BaseBdev3_malloc 00:19:52.339 07:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:52.597 true 00:19:52.597 07:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:52.856 [2024-07-12 07:30:26.502770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:52.856 [2024-07-12 07:30:26.503091] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.856 [2024-07-12 07:30:26.503174] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:19:52.856 [2024-07-12 07:30:26.503320] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.856 [2024-07-12 07:30:26.506223] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.856 [2024-07-12 07:30:26.506402] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:52.856 BaseBdev3 00:19:52.856 07:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:19:52.856 [2024-07-12 07:30:26.698934] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:52.856 [2024-07-12 07:30:26.701785] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:52.856 [2024-07-12 07:30:26.701995] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:52.856 [2024-07-12 07:30:26.702363] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:19:52.856 [2024-07-12 07:30:26.702474] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:52.856 [2024-07-12 07:30:26.702699] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:19:52.856 [2024-07-12 07:30:26.703309] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:19:52.856 [2024-07-12 07:30:26.703415] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:19:52.856 [2024-07-12 07:30:26.703780] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.856 07:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:52.856 07:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:52.856 07:30:26 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:52.856 07:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:52.856 07:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:52.856 07:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:52.856 07:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:52.856 07:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:52.856 07:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:52.856 07:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:52.856 07:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.856 07:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.114 07:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:53.114 "name": "raid_bdev1", 00:19:53.114 "uuid": "e1198b62-525e-4bc6-9753-0291f1a94973", 00:19:53.114 "strip_size_kb": 64, 00:19:53.114 "state": "online", 00:19:53.114 "raid_level": "concat", 00:19:53.114 "superblock": true, 00:19:53.114 "num_base_bdevs": 3, 00:19:53.114 "num_base_bdevs_discovered": 3, 00:19:53.114 "num_base_bdevs_operational": 3, 00:19:53.114 "base_bdevs_list": [ 00:19:53.114 { 00:19:53.114 "name": "BaseBdev1", 00:19:53.114 "uuid": "2e41eabc-4375-5eb1-be60-4ac6c8837002", 00:19:53.114 "is_configured": true, 00:19:53.114 "data_offset": 2048, 00:19:53.114 "data_size": 63488 00:19:53.114 }, 00:19:53.114 { 00:19:53.114 "name": "BaseBdev2", 00:19:53.114 "uuid": "12612902-8e7a-58fc-b3ee-863cdafbc5b8", 00:19:53.114 "is_configured": true, 00:19:53.114 "data_offset": 2048, 00:19:53.114 "data_size": 63488 00:19:53.114 }, 00:19:53.114 { 00:19:53.114 "name": "BaseBdev3", 00:19:53.114 "uuid": "1d1a4f78-172c-534d-82ad-c5baf0d69f02", 00:19:53.115 "is_configured": true, 00:19:53.115 "data_offset": 2048, 00:19:53.115 "data_size": 63488 00:19:53.115 } 00:19:53.115 ] 00:19:53.115 }' 00:19:53.115 07:30:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:53.115 07:30:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.682 07:30:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:53.682 07:30:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:53.682 [2024-07-12 07:30:27.548363] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:19:54.619 07:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:54.878 07:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:54.878 07:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:19:54.878 07:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:19:54.878 07:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # 
verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:54.878 07:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:54.878 07:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:54.878 07:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:54.878 07:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:54.878 07:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:54.878 07:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:54.878 07:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:54.878 07:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:54.878 07:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:54.878 07:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.878 07:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.137 07:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:55.137 "name": "raid_bdev1", 00:19:55.137 "uuid": "e1198b62-525e-4bc6-9753-0291f1a94973", 00:19:55.137 "strip_size_kb": 64, 00:19:55.137 "state": "online", 00:19:55.137 "raid_level": "concat", 00:19:55.137 "superblock": true, 00:19:55.138 "num_base_bdevs": 3, 00:19:55.138 "num_base_bdevs_discovered": 3, 00:19:55.138 "num_base_bdevs_operational": 3, 00:19:55.138 "base_bdevs_list": [ 00:19:55.138 { 00:19:55.138 "name": "BaseBdev1", 00:19:55.138 "uuid": "2e41eabc-4375-5eb1-be60-4ac6c8837002", 00:19:55.138 "is_configured": true, 00:19:55.138 "data_offset": 2048, 00:19:55.138 "data_size": 63488 00:19:55.138 }, 00:19:55.138 { 00:19:55.138 "name": "BaseBdev2", 00:19:55.138 "uuid": "12612902-8e7a-58fc-b3ee-863cdafbc5b8", 00:19:55.138 "is_configured": true, 00:19:55.138 "data_offset": 2048, 00:19:55.138 "data_size": 63488 00:19:55.138 }, 00:19:55.138 { 00:19:55.138 "name": "BaseBdev3", 00:19:55.138 "uuid": "1d1a4f78-172c-534d-82ad-c5baf0d69f02", 00:19:55.138 "is_configured": true, 00:19:55.138 "data_offset": 2048, 00:19:55.138 "data_size": 63488 00:19:55.138 } 00:19:55.138 ] 00:19:55.138 }' 00:19:55.138 07:30:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:55.138 07:30:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.705 07:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:55.964 [2024-07-12 07:30:29.766067] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:55.964 [2024-07-12 07:30:29.766359] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:55.964 [2024-07-12 07:30:29.768985] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:55.964 [2024-07-12 07:30:29.769149] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.964 [2024-07-12 07:30:29.769231] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
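Worth noting: even with read failures being injected into EE_BaseBdev1_malloc, the JSON above still reports the array online with all three base bdevs discovered. concat carries no redundancy (has_redundancy returns 1 for it), so injected errors propagate to the I/O submitter instead of degrading the array, and the evidence that the injection actually fired is deferred to the bdevperf log, which the test greps next. A sketch of that extraction, assuming the log path and column layout from this run:

# fail-per-second is column 6 of the raid_bdev1 result row in the bdevperf log
fail_per_s=$(grep -v Job /raidtest/tmp.MhoFmOUSsZ | grep raid_bdev1 | awk '{print $6}')
[[ $fail_per_s != 0.00 ]]   # some I/O must have failed, or the injection never surfaced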
00:19:55.964 [2024-07-12 07:30:29.769335] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:19:55.964 0 00:19:55.964 07:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 140741 00:19:55.964 07:30:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 140741 ']' 00:19:55.964 07:30:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 140741 00:19:55.964 07:30:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:19:55.964 07:30:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:55.964 07:30:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 140741 00:19:55.964 07:30:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:55.964 07:30:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:55.964 07:30:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 140741' 00:19:55.964 killing process with pid 140741 00:19:55.964 07:30:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 140741 00:19:55.964 [2024-07-12 07:30:29.822884] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:55.964 07:30:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 140741 00:19:56.224 [2024-07-12 07:30:29.871394] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:56.481 07:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.MhoFmOUSsZ 00:19:56.481 07:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:56.481 07:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:56.481 07:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:19:56.481 07:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:19:56.481 07:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:56.481 07:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:56.481 07:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:19:56.481 00:19:56.481 real 0m6.937s 00:19:56.481 user 0m10.666s 00:19:56.481 sys 0m1.198s 00:19:56.481 07:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:56.481 07:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.481 ************************************ 00:19:56.481 END TEST raid_read_error_test 00:19:56.481 ************************************ 00:19:56.481 07:30:30 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:19:56.481 07:30:30 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:19:56.481 07:30:30 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:56.481 07:30:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:56.739 ************************************ 00:19:56.739 START TEST raid_write_error_test 00:19:56.739 ************************************ 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 3 write 
00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ulhncU72Ux 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=140934 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 140934 /var/tmp/spdk-raid.sock 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 140934 ']' 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 
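The bdevperf invocation echoed above decodes as: -r points at the private RPC socket, -t 60 caps the run at sixty seconds, -w randrw with -M 50 requests a 50/50 read/write mix, -o 128k and -q 1 set I/O size and queue depth, -z starts the app quiesced until a perform_tests RPC arrives, and -L bdev_raid enables the DEBUG traces seen throughout this log. (-T and -f I read as scoping the job to raid_bdev1 and continuing past I/O failures, but I have not verified those two against this SPDK revision.) Because of -z, the I/O phase has to be kicked off out of band once the raid is assembled:

# start the actual I/O phase; the app idles until this because of -z
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests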
00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:56.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:56.739 07:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.739 [2024-07-12 07:30:30.452844] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:19:56.739 [2024-07-12 07:30:30.453278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140934 ] 00:19:56.739 [2024-07-12 07:30:30.597697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.997 [2024-07-12 07:30:30.686775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.997 [2024-07-12 07:30:30.766972] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:57.564 07:30:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:57.564 07:30:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:19:57.564 07:30:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:57.564 07:30:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:57.823 BaseBdev1_malloc 00:19:57.823 07:30:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:58.081 true 00:19:58.081 07:30:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:58.340 [2024-07-12 07:30:32.083994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:58.340 [2024-07-12 07:30:32.084283] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:58.340 [2024-07-12 07:30:32.084476] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:19:58.340 [2024-07-12 07:30:32.084659] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.340 [2024-07-12 07:30:32.087975] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.340 [2024-07-12 07:30:32.088161] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:58.340 BaseBdev1 00:19:58.340 07:30:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:58.340 07:30:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:58.598 BaseBdev2_malloc 00:19:58.598 07:30:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:58.856 true 00:19:58.856 07:30:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:59.115 [2024-07-12 07:30:32.760419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:59.115 [2024-07-12 07:30:32.760721] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.115 [2024-07-12 07:30:32.760810] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:59.115 [2024-07-12 07:30:32.760969] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.115 [2024-07-12 07:30:32.763839] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.115 [2024-07-12 07:30:32.764028] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:59.115 BaseBdev2 00:19:59.115 07:30:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:59.115 07:30:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:59.115 BaseBdev3_malloc 00:19:59.372 07:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:59.372 true 00:19:59.372 07:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:59.630 [2024-07-12 07:30:33.390589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:59.630 [2024-07-12 07:30:33.390874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.630 [2024-07-12 07:30:33.390969] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:19:59.630 [2024-07-12 07:30:33.391117] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.630 [2024-07-12 07:30:33.393990] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.630 [2024-07-12 07:30:33.394152] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:59.630 BaseBdev3 00:19:59.630 07:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:19:59.887 [2024-07-12 07:30:33.582710] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:59.887 [2024-07-12 07:30:33.585574] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:59.887 [2024-07-12 07:30:33.585774] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:59.887 [2024-07-12 07:30:33.586185] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:19:59.887 [2024-07-12 07:30:33.586284] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:59.887 [2024-07-12 07:30:33.586492] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:19:59.887 [2024-07-12 07:30:33.587067] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:19:59.887 [2024-07-12 07:30:33.587168] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:19:59.887 [2024-07-12 07:30:33.587473] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:59.887 07:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:59.887 07:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:59.887 07:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:59.887 07:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:59.887 07:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:59.887 07:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:59.887 07:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:59.887 07:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:59.887 07:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:59.887 07:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:59.887 07:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.887 07:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.145 07:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:00.145 "name": "raid_bdev1", 00:20:00.145 "uuid": "c3cf1958-34f2-4dfa-95f3-7a1642db9ead", 00:20:00.145 "strip_size_kb": 64, 00:20:00.145 "state": "online", 00:20:00.145 "raid_level": "concat", 00:20:00.145 "superblock": true, 00:20:00.145 "num_base_bdevs": 3, 00:20:00.145 "num_base_bdevs_discovered": 3, 00:20:00.145 "num_base_bdevs_operational": 3, 00:20:00.145 "base_bdevs_list": [ 00:20:00.145 { 00:20:00.145 "name": "BaseBdev1", 00:20:00.145 "uuid": "23afc628-841b-52fe-ad3c-16598c2da5e5", 00:20:00.145 "is_configured": true, 00:20:00.145 "data_offset": 2048, 00:20:00.145 "data_size": 63488 00:20:00.145 }, 00:20:00.145 { 00:20:00.145 "name": "BaseBdev2", 00:20:00.145 "uuid": "816b4029-cac3-532b-9c5d-b2e948a28fd4", 00:20:00.145 "is_configured": true, 00:20:00.145 "data_offset": 2048, 00:20:00.145 "data_size": 63488 00:20:00.145 }, 00:20:00.145 { 00:20:00.145 "name": "BaseBdev3", 00:20:00.145 "uuid": "b62b582d-80d0-533f-aace-71ca75d26972", 00:20:00.145 "is_configured": true, 00:20:00.145 "data_offset": 2048, 00:20:00.145 "data_size": 63488 00:20:00.145 } 00:20:00.145 ] 00:20:00.145 }' 00:20:00.145 07:30:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:00.145 07:30:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.711 07:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:00.711 07:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:00.711 [2024-07-12 07:30:34.436161] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002460 00:20:01.648 07:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:01.907 07:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:01.907 07:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:20:01.907 07:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:20:01.907 07:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:01.907 07:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:01.907 07:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:01.907 07:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:01.907 07:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:01.907 07:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:01.907 07:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:01.907 07:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:01.907 07:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:01.907 07:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:01.907 07:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.907 07:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.165 07:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:02.165 "name": "raid_bdev1", 00:20:02.165 "uuid": "c3cf1958-34f2-4dfa-95f3-7a1642db9ead", 00:20:02.165 "strip_size_kb": 64, 00:20:02.165 "state": "online", 00:20:02.165 "raid_level": "concat", 00:20:02.165 "superblock": true, 00:20:02.165 "num_base_bdevs": 3, 00:20:02.165 "num_base_bdevs_discovered": 3, 00:20:02.165 "num_base_bdevs_operational": 3, 00:20:02.165 "base_bdevs_list": [ 00:20:02.165 { 00:20:02.165 "name": "BaseBdev1", 00:20:02.165 "uuid": "23afc628-841b-52fe-ad3c-16598c2da5e5", 00:20:02.165 "is_configured": true, 00:20:02.165 "data_offset": 2048, 00:20:02.165 "data_size": 63488 00:20:02.165 }, 00:20:02.165 { 00:20:02.165 "name": "BaseBdev2", 00:20:02.165 "uuid": "816b4029-cac3-532b-9c5d-b2e948a28fd4", 00:20:02.165 "is_configured": true, 00:20:02.165 "data_offset": 2048, 00:20:02.165 "data_size": 63488 00:20:02.165 }, 00:20:02.165 { 00:20:02.165 "name": "BaseBdev3", 00:20:02.165 "uuid": "b62b582d-80d0-533f-aace-71ca75d26972", 00:20:02.165 "is_configured": true, 00:20:02.165 "data_offset": 2048, 00:20:02.165 "data_size": 63488 00:20:02.165 } 00:20:02.165 ] 00:20:02.165 }' 00:20:02.165 07:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:02.165 07:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.732 07:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:02.991 [2024-07-12 07:30:36.645215] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:02.991 [2024-07-12 07:30:36.645485] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:02.991 [2024-07-12 07:30:36.648213] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:02.991 [2024-07-12 07:30:36.648373] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.991 [2024-07-12 07:30:36.648446] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:02.991 [2024-07-12 07:30:36.648577] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:20:02.991 0 00:20:02.991 07:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 140934 00:20:02.991 07:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 140934 ']' 00:20:02.991 07:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 140934 00:20:02.991 07:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:20:02.991 07:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:02.991 07:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 140934 00:20:02.991 07:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:02.991 07:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:02.991 07:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 140934' 00:20:02.991 killing process with pid 140934 00:20:02.991 07:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 140934 00:20:02.991 07:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 140934 00:20:02.991 [2024-07-12 07:30:36.692035] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:02.991 [2024-07-12 07:30:36.739323] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:03.559 07:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:03.559 07:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ulhncU72Ux 00:20:03.559 07:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:03.559 07:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:20:03.559 07:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:20:03.559 07:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:03.559 07:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:03.559 07:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:20:03.559 00:20:03.559 real 0m6.794s 00:20:03.559 user 0m10.356s 00:20:03.559 sys 0m1.180s 00:20:03.559 07:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:03.559 07:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.559 ************************************ 00:20:03.559 END TEST raid_write_error_test 
00:20:03.559 ************************************ 00:20:03.559 07:30:37 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:20:03.559 07:30:37 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:20:03.559 07:30:37 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:20:03.559 07:30:37 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:03.559 07:30:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:03.559 ************************************ 00:20:03.559 START TEST raid_state_function_test 00:20:03.559 ************************************ 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 3 false 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:20:03.559 07:30:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=141127 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 141127' 00:20:03.559 Process raid pid: 141127 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 141127 /var/tmp/spdk-raid.sock 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 141127 ']' 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:03.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:03.559 07:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.559 [2024-07-12 07:30:37.303772] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:20:03.559 [2024-07-12 07:30:37.303993] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.817 [2024-07-12 07:30:37.446331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.817 [2024-07-12 07:30:37.525107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.817 [2024-07-12 07:30:37.605619] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:04.383 07:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:04.383 07:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:20:04.383 07:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:04.641 [2024-07-12 07:30:38.421573] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:04.641 [2024-07-12 07:30:38.421680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:04.641 [2024-07-12 07:30:38.421693] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:04.641 [2024-07-12 07:30:38.421714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:04.641 [2024-07-12 07:30:38.421721] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:04.641 [2024-07-12 07:30:38.421763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:04.641 07:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:04.641 07:30:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:04.641 07:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:04.641 07:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:04.641 07:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:04.641 07:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:04.641 07:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:04.641 07:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:04.641 07:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:04.641 07:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:04.641 07:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.641 07:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.899 07:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:04.899 "name": "Existed_Raid", 00:20:04.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.899 "strip_size_kb": 0, 00:20:04.899 "state": "configuring", 00:20:04.899 "raid_level": "raid1", 00:20:04.899 "superblock": false, 00:20:04.899 "num_base_bdevs": 3, 00:20:04.899 "num_base_bdevs_discovered": 0, 00:20:04.899 "num_base_bdevs_operational": 3, 00:20:04.899 "base_bdevs_list": [ 00:20:04.899 { 00:20:04.899 "name": "BaseBdev1", 00:20:04.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.899 "is_configured": false, 00:20:04.899 "data_offset": 0, 00:20:04.899 "data_size": 0 00:20:04.899 }, 00:20:04.899 { 00:20:04.899 "name": "BaseBdev2", 00:20:04.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.899 "is_configured": false, 00:20:04.899 "data_offset": 0, 00:20:04.899 "data_size": 0 00:20:04.899 }, 00:20:04.899 { 00:20:04.899 "name": "BaseBdev3", 00:20:04.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.899 "is_configured": false, 00:20:04.899 "data_offset": 0, 00:20:04.899 "data_size": 0 00:20:04.899 } 00:20:04.899 ] 00:20:04.899 }' 00:20:04.899 07:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:04.899 07:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.466 07:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:05.725 [2024-07-12 07:30:39.501806] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:05.725 [2024-07-12 07:30:39.501870] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:20:05.725 07:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:05.984 [2024-07-12 07:30:39.697791] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:05.984 
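verify_raid_bdev_state, whose locals are bound above, boils down to one bdev_raid_get_bdevs call filtered with jq plus a handful of field assertions against the dump shown (state configuring, raid_level raid1, strip_size_kb 0, 0 of 3 members discovered). Approximately, with the same socket shorthand as before:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  [ "$(jq -r '.state' <<< "$info")" = configuring ]
  [ "$(jq -r '.raid_level' <<< "$info")" = raid1 ]
  [ "$(jq -r '.strip_size_kb' <<< "$info")" -eq 0 ]
  [ "$(jq -r '.num_base_bdevs_discovered' <<< "$info")" -eq 0 ]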
[2024-07-12 07:30:39.697901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:05.984 [2024-07-12 07:30:39.697913] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:05.984 [2024-07-12 07:30:39.697932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:05.984 [2024-07-12 07:30:39.697939] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:05.984 [2024-07-12 07:30:39.697964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:05.984 07:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:06.243 [2024-07-12 07:30:39.910015] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:06.243 BaseBdev1 00:20:06.243 07:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:06.243 07:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:20:06.243 07:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:06.243 07:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:06.243 07:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:06.243 07:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:06.243 07:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:06.502 07:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:06.502 [ 00:20:06.502 { 00:20:06.502 "name": "BaseBdev1", 00:20:06.502 "aliases": [ 00:20:06.502 "18f85b9a-1433-4a16-ba9b-8e548f469152" 00:20:06.502 ], 00:20:06.502 "product_name": "Malloc disk", 00:20:06.502 "block_size": 512, 00:20:06.502 "num_blocks": 65536, 00:20:06.502 "uuid": "18f85b9a-1433-4a16-ba9b-8e548f469152", 00:20:06.502 "assigned_rate_limits": { 00:20:06.502 "rw_ios_per_sec": 0, 00:20:06.502 "rw_mbytes_per_sec": 0, 00:20:06.502 "r_mbytes_per_sec": 0, 00:20:06.502 "w_mbytes_per_sec": 0 00:20:06.502 }, 00:20:06.502 "claimed": true, 00:20:06.502 "claim_type": "exclusive_write", 00:20:06.502 "zoned": false, 00:20:06.502 "supported_io_types": { 00:20:06.502 "read": true, 00:20:06.502 "write": true, 00:20:06.502 "unmap": true, 00:20:06.502 "write_zeroes": true, 00:20:06.502 "flush": true, 00:20:06.502 "reset": true, 00:20:06.502 "compare": false, 00:20:06.502 "compare_and_write": false, 00:20:06.502 "abort": true, 00:20:06.502 "nvme_admin": false, 00:20:06.502 "nvme_io": false 00:20:06.502 }, 00:20:06.502 "memory_domains": [ 00:20:06.502 { 00:20:06.502 "dma_device_id": "system", 00:20:06.502 "dma_device_type": 1 00:20:06.502 }, 00:20:06.502 { 00:20:06.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.502 "dma_device_type": 2 00:20:06.502 } 00:20:06.502 ], 00:20:06.502 "driver_specific": {} 00:20:06.502 } 00:20:06.502 ] 00:20:06.502 07:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:06.502 07:30:40 
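BaseBdev1 above is a Malloc bdev of 32 MiB in 512-byte blocks, which is where the num_blocks 65536 in its descriptor below comes from, and waitforbdev is the usual settle-then-probe pair: flush the examine callbacks, then fetch the bdev with a 2000 ms timeout. In sketch form:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  rpc bdev_malloc_create 32 512 -b BaseBdev1           # 32 MiB / 512 B = 65536 blocks
  rpc bdev_wait_for_examine                            # let examine callbacks settle
  rpc bdev_get_bdevs -b BaseBdev1 -t 2000 > /dev/null  # errors out if still absent after 2 s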
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:06.502 07:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:06.502 07:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:06.502 07:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:06.502 07:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:06.502 07:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:06.502 07:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:06.502 07:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:06.502 07:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:06.502 07:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:06.502 07:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.502 07:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:06.761 07:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:06.761 "name": "Existed_Raid", 00:20:06.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.761 "strip_size_kb": 0, 00:20:06.761 "state": "configuring", 00:20:06.761 "raid_level": "raid1", 00:20:06.761 "superblock": false, 00:20:06.761 "num_base_bdevs": 3, 00:20:06.761 "num_base_bdevs_discovered": 1, 00:20:06.761 "num_base_bdevs_operational": 3, 00:20:06.761 "base_bdevs_list": [ 00:20:06.761 { 00:20:06.761 "name": "BaseBdev1", 00:20:06.761 "uuid": "18f85b9a-1433-4a16-ba9b-8e548f469152", 00:20:06.761 "is_configured": true, 00:20:06.761 "data_offset": 0, 00:20:06.761 "data_size": 65536 00:20:06.761 }, 00:20:06.761 { 00:20:06.761 "name": "BaseBdev2", 00:20:06.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.761 "is_configured": false, 00:20:06.761 "data_offset": 0, 00:20:06.761 "data_size": 0 00:20:06.761 }, 00:20:06.761 { 00:20:06.761 "name": "BaseBdev3", 00:20:06.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.761 "is_configured": false, 00:20:06.761 "data_offset": 0, 00:20:06.761 "data_size": 0 00:20:06.761 } 00:20:06.761 ] 00:20:06.761 }' 00:20:06.761 07:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:06.761 07:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.330 07:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:07.589 [2024-07-12 07:30:41.338366] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:07.589 [2024-07-12 07:30:41.338458] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:20:07.589 07:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 
00:20:07.849 [2024-07-12 07:30:41.542488] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:07.849 [2024-07-12 07:30:41.544996] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:07.849 [2024-07-12 07:30:41.545073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:07.849 [2024-07-12 07:30:41.545084] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:07.849 [2024-07-12 07:30:41.545111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:07.849 07:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:07.849 07:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:07.849 07:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:07.849 07:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:07.849 07:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:07.849 07:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:07.849 07:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:07.849 07:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:07.849 07:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:07.849 07:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:07.849 07:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:07.849 07:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:07.849 07:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.849 07:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.107 07:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:08.107 "name": "Existed_Raid", 00:20:08.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.107 "strip_size_kb": 0, 00:20:08.107 "state": "configuring", 00:20:08.107 "raid_level": "raid1", 00:20:08.107 "superblock": false, 00:20:08.107 "num_base_bdevs": 3, 00:20:08.107 "num_base_bdevs_discovered": 1, 00:20:08.107 "num_base_bdevs_operational": 3, 00:20:08.107 "base_bdevs_list": [ 00:20:08.107 { 00:20:08.107 "name": "BaseBdev1", 00:20:08.107 "uuid": "18f85b9a-1433-4a16-ba9b-8e548f469152", 00:20:08.107 "is_configured": true, 00:20:08.107 "data_offset": 0, 00:20:08.107 "data_size": 65536 00:20:08.107 }, 00:20:08.107 { 00:20:08.107 "name": "BaseBdev2", 00:20:08.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.107 "is_configured": false, 00:20:08.107 "data_offset": 0, 00:20:08.107 "data_size": 0 00:20:08.107 }, 00:20:08.107 { 00:20:08.107 "name": "BaseBdev3", 00:20:08.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.107 "is_configured": false, 00:20:08.107 "data_offset": 0, 00:20:08.107 "data_size": 0 00:20:08.107 } 00:20:08.107 ] 00:20:08.107 }' 00:20:08.107 07:30:41 
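The re-created raid finds BaseBdev1 this time and claims it on the spot ("bdev BaseBdev1 is claimed" above); the descriptor dumps that follow show the claim as exclusive_write, which is what stops any other module from writing to the member behind the raid's back. One way to spot-check that, using jq's -e flag to turn the boolean into an exit status:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  rpc bdev_get_bdevs -b BaseBdev1 \
      | jq -e '.[0].claimed and .[0].claim_type == "exclusive_write"'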
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:08.107 07:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.675 07:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:08.675 [2024-07-12 07:30:42.547276] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:08.675 BaseBdev2 00:20:08.934 07:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:08.934 07:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:20:08.934 07:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:08.934 07:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:08.934 07:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:08.934 07:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:08.934 07:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:08.934 07:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:09.193 [ 00:20:09.193 { 00:20:09.193 "name": "BaseBdev2", 00:20:09.193 "aliases": [ 00:20:09.193 "aad535dd-cd7d-4db9-aa0c-429148c305f6" 00:20:09.193 ], 00:20:09.193 "product_name": "Malloc disk", 00:20:09.193 "block_size": 512, 00:20:09.193 "num_blocks": 65536, 00:20:09.193 "uuid": "aad535dd-cd7d-4db9-aa0c-429148c305f6", 00:20:09.193 "assigned_rate_limits": { 00:20:09.193 "rw_ios_per_sec": 0, 00:20:09.193 "rw_mbytes_per_sec": 0, 00:20:09.193 "r_mbytes_per_sec": 0, 00:20:09.193 "w_mbytes_per_sec": 0 00:20:09.193 }, 00:20:09.193 "claimed": true, 00:20:09.193 "claim_type": "exclusive_write", 00:20:09.193 "zoned": false, 00:20:09.193 "supported_io_types": { 00:20:09.193 "read": true, 00:20:09.193 "write": true, 00:20:09.193 "unmap": true, 00:20:09.193 "write_zeroes": true, 00:20:09.193 "flush": true, 00:20:09.193 "reset": true, 00:20:09.193 "compare": false, 00:20:09.193 "compare_and_write": false, 00:20:09.193 "abort": true, 00:20:09.193 "nvme_admin": false, 00:20:09.193 "nvme_io": false 00:20:09.193 }, 00:20:09.193 "memory_domains": [ 00:20:09.193 { 00:20:09.193 "dma_device_id": "system", 00:20:09.193 "dma_device_type": 1 00:20:09.193 }, 00:20:09.193 { 00:20:09.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.193 "dma_device_type": 2 00:20:09.193 } 00:20:09.193 ], 00:20:09.193 "driver_specific": {} 00:20:09.193 } 00:20:09.193 ] 00:20:09.193 07:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:09.193 07:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:09.193 07:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:09.193 07:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:09.193 07:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:09.193 07:30:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:09.193 07:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:09.193 07:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:09.193 07:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:09.193 07:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:09.193 07:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:09.193 07:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:09.193 07:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:09.193 07:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.193 07:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.451 07:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:09.451 "name": "Existed_Raid", 00:20:09.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.451 "strip_size_kb": 0, 00:20:09.451 "state": "configuring", 00:20:09.451 "raid_level": "raid1", 00:20:09.451 "superblock": false, 00:20:09.451 "num_base_bdevs": 3, 00:20:09.451 "num_base_bdevs_discovered": 2, 00:20:09.451 "num_base_bdevs_operational": 3, 00:20:09.451 "base_bdevs_list": [ 00:20:09.451 { 00:20:09.451 "name": "BaseBdev1", 00:20:09.451 "uuid": "18f85b9a-1433-4a16-ba9b-8e548f469152", 00:20:09.451 "is_configured": true, 00:20:09.451 "data_offset": 0, 00:20:09.451 "data_size": 65536 00:20:09.451 }, 00:20:09.451 { 00:20:09.451 "name": "BaseBdev2", 00:20:09.451 "uuid": "aad535dd-cd7d-4db9-aa0c-429148c305f6", 00:20:09.451 "is_configured": true, 00:20:09.451 "data_offset": 0, 00:20:09.451 "data_size": 65536 00:20:09.451 }, 00:20:09.451 { 00:20:09.451 "name": "BaseBdev3", 00:20:09.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.451 "is_configured": false, 00:20:09.451 "data_offset": 0, 00:20:09.451 "data_size": 0 00:20:09.451 } 00:20:09.451 ] 00:20:09.451 }' 00:20:09.451 07:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:09.451 07:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.016 07:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:10.275 [2024-07-12 07:30:44.051247] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:10.275 [2024-07-12 07:30:44.051329] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:20:10.275 [2024-07-12 07:30:44.051341] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:10.275 [2024-07-12 07:30:44.051523] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:20:10.275 [2024-07-12 07:30:44.051938] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:20:10.275 [2024-07-12 07:30:44.051959] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
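Adding the third Malloc bdev completes the set: BaseBdev3 is claimed, the raid registers its I/O device, and the volume comes up with blockcnt 65536 and blocklen 512, i.e. a raid1 exposes the capacity of one (here equal-sized) mirror, not the sum of its members. A quick check of the transition, under the same shorthand as the earlier sketches:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  # The raid should now be listed under the "online" category...
  rpc bdev_raid_get_bdevs online | jq -e '.[] | select(.name == "Existed_Raid")'
  # ...and be a top-level bdev whose size matches a single mirror.
  rpc bdev_get_bdevs -b Existed_Raid | jq '.[0] | {num_blocks, block_size}'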
raid_bdev 0x616000006080 00:20:10.275 [2024-07-12 07:30:44.052207] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.275 BaseBdev3 00:20:10.275 07:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:10.275 07:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:20:10.275 07:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:10.275 07:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:10.275 07:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:10.275 07:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:10.275 07:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:10.532 07:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:10.790 [ 00:20:10.790 { 00:20:10.790 "name": "BaseBdev3", 00:20:10.790 "aliases": [ 00:20:10.790 "1e2b2f9f-c383-42a1-becc-0d14343aa4c3" 00:20:10.790 ], 00:20:10.790 "product_name": "Malloc disk", 00:20:10.790 "block_size": 512, 00:20:10.790 "num_blocks": 65536, 00:20:10.790 "uuid": "1e2b2f9f-c383-42a1-becc-0d14343aa4c3", 00:20:10.790 "assigned_rate_limits": { 00:20:10.790 "rw_ios_per_sec": 0, 00:20:10.790 "rw_mbytes_per_sec": 0, 00:20:10.790 "r_mbytes_per_sec": 0, 00:20:10.790 "w_mbytes_per_sec": 0 00:20:10.790 }, 00:20:10.790 "claimed": true, 00:20:10.790 "claim_type": "exclusive_write", 00:20:10.790 "zoned": false, 00:20:10.790 "supported_io_types": { 00:20:10.790 "read": true, 00:20:10.790 "write": true, 00:20:10.790 "unmap": true, 00:20:10.790 "write_zeroes": true, 00:20:10.790 "flush": true, 00:20:10.790 "reset": true, 00:20:10.790 "compare": false, 00:20:10.790 "compare_and_write": false, 00:20:10.790 "abort": true, 00:20:10.790 "nvme_admin": false, 00:20:10.790 "nvme_io": false 00:20:10.790 }, 00:20:10.790 "memory_domains": [ 00:20:10.790 { 00:20:10.790 "dma_device_id": "system", 00:20:10.790 "dma_device_type": 1 00:20:10.790 }, 00:20:10.790 { 00:20:10.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.790 "dma_device_type": 2 00:20:10.790 } 00:20:10.790 ], 00:20:10.790 "driver_specific": {} 00:20:10.790 } 00:20:10.790 ] 00:20:10.790 07:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:10.790 07:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:10.790 07:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:10.790 07:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:10.790 07:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:10.790 07:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:10.790 07:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:10.791 07:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:10.791 07:30:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:10.791 07:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:10.791 07:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:10.791 07:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:10.791 07:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:10.791 07:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.791 07:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.049 07:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:11.049 "name": "Existed_Raid", 00:20:11.049 "uuid": "22000398-e172-4f95-8f09-5ce4966068a5", 00:20:11.049 "strip_size_kb": 0, 00:20:11.049 "state": "online", 00:20:11.049 "raid_level": "raid1", 00:20:11.049 "superblock": false, 00:20:11.049 "num_base_bdevs": 3, 00:20:11.049 "num_base_bdevs_discovered": 3, 00:20:11.049 "num_base_bdevs_operational": 3, 00:20:11.049 "base_bdevs_list": [ 00:20:11.049 { 00:20:11.049 "name": "BaseBdev1", 00:20:11.049 "uuid": "18f85b9a-1433-4a16-ba9b-8e548f469152", 00:20:11.049 "is_configured": true, 00:20:11.049 "data_offset": 0, 00:20:11.049 "data_size": 65536 00:20:11.049 }, 00:20:11.049 { 00:20:11.049 "name": "BaseBdev2", 00:20:11.049 "uuid": "aad535dd-cd7d-4db9-aa0c-429148c305f6", 00:20:11.049 "is_configured": true, 00:20:11.049 "data_offset": 0, 00:20:11.049 "data_size": 65536 00:20:11.049 }, 00:20:11.049 { 00:20:11.049 "name": "BaseBdev3", 00:20:11.049 "uuid": "1e2b2f9f-c383-42a1-becc-0d14343aa4c3", 00:20:11.049 "is_configured": true, 00:20:11.049 "data_offset": 0, 00:20:11.049 "data_size": 65536 00:20:11.049 } 00:20:11.049 ] 00:20:11.049 }' 00:20:11.049 07:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:11.049 07:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.657 07:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:11.657 07:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:11.657 07:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:11.657 07:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:11.657 07:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:11.657 07:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:11.657 07:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:11.657 07:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:11.916 [2024-07-12 07:30:45.635950] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:11.916 07:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:11.916 "name": "Existed_Raid", 00:20:11.916 "aliases": [ 00:20:11.916 "22000398-e172-4f95-8f09-5ce4966068a5" 00:20:11.916 ], 00:20:11.916 
"product_name": "Raid Volume", 00:20:11.916 "block_size": 512, 00:20:11.916 "num_blocks": 65536, 00:20:11.916 "uuid": "22000398-e172-4f95-8f09-5ce4966068a5", 00:20:11.916 "assigned_rate_limits": { 00:20:11.916 "rw_ios_per_sec": 0, 00:20:11.916 "rw_mbytes_per_sec": 0, 00:20:11.916 "r_mbytes_per_sec": 0, 00:20:11.916 "w_mbytes_per_sec": 0 00:20:11.916 }, 00:20:11.916 "claimed": false, 00:20:11.916 "zoned": false, 00:20:11.916 "supported_io_types": { 00:20:11.916 "read": true, 00:20:11.916 "write": true, 00:20:11.916 "unmap": false, 00:20:11.916 "write_zeroes": true, 00:20:11.916 "flush": false, 00:20:11.916 "reset": true, 00:20:11.916 "compare": false, 00:20:11.916 "compare_and_write": false, 00:20:11.916 "abort": false, 00:20:11.916 "nvme_admin": false, 00:20:11.916 "nvme_io": false 00:20:11.916 }, 00:20:11.916 "memory_domains": [ 00:20:11.916 { 00:20:11.916 "dma_device_id": "system", 00:20:11.916 "dma_device_type": 1 00:20:11.916 }, 00:20:11.916 { 00:20:11.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.916 "dma_device_type": 2 00:20:11.916 }, 00:20:11.916 { 00:20:11.916 "dma_device_id": "system", 00:20:11.916 "dma_device_type": 1 00:20:11.916 }, 00:20:11.916 { 00:20:11.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.916 "dma_device_type": 2 00:20:11.916 }, 00:20:11.916 { 00:20:11.916 "dma_device_id": "system", 00:20:11.916 "dma_device_type": 1 00:20:11.916 }, 00:20:11.916 { 00:20:11.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.916 "dma_device_type": 2 00:20:11.916 } 00:20:11.916 ], 00:20:11.916 "driver_specific": { 00:20:11.916 "raid": { 00:20:11.916 "uuid": "22000398-e172-4f95-8f09-5ce4966068a5", 00:20:11.916 "strip_size_kb": 0, 00:20:11.916 "state": "online", 00:20:11.916 "raid_level": "raid1", 00:20:11.916 "superblock": false, 00:20:11.916 "num_base_bdevs": 3, 00:20:11.916 "num_base_bdevs_discovered": 3, 00:20:11.916 "num_base_bdevs_operational": 3, 00:20:11.916 "base_bdevs_list": [ 00:20:11.916 { 00:20:11.916 "name": "BaseBdev1", 00:20:11.916 "uuid": "18f85b9a-1433-4a16-ba9b-8e548f469152", 00:20:11.916 "is_configured": true, 00:20:11.916 "data_offset": 0, 00:20:11.916 "data_size": 65536 00:20:11.916 }, 00:20:11.916 { 00:20:11.916 "name": "BaseBdev2", 00:20:11.916 "uuid": "aad535dd-cd7d-4db9-aa0c-429148c305f6", 00:20:11.916 "is_configured": true, 00:20:11.916 "data_offset": 0, 00:20:11.916 "data_size": 65536 00:20:11.916 }, 00:20:11.916 { 00:20:11.916 "name": "BaseBdev3", 00:20:11.916 "uuid": "1e2b2f9f-c383-42a1-becc-0d14343aa4c3", 00:20:11.916 "is_configured": true, 00:20:11.916 "data_offset": 0, 00:20:11.916 "data_size": 65536 00:20:11.916 } 00:20:11.916 ] 00:20:11.916 } 00:20:11.916 } 00:20:11.916 }' 00:20:11.916 07:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:11.916 07:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:11.916 BaseBdev2 00:20:11.916 BaseBdev3' 00:20:11.916 07:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:11.916 07:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:11.916 07:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:12.174 07:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:12.174 "name": "BaseBdev1", 
00:20:12.174 "aliases": [ 00:20:12.174 "18f85b9a-1433-4a16-ba9b-8e548f469152" 00:20:12.174 ], 00:20:12.174 "product_name": "Malloc disk", 00:20:12.174 "block_size": 512, 00:20:12.174 "num_blocks": 65536, 00:20:12.174 "uuid": "18f85b9a-1433-4a16-ba9b-8e548f469152", 00:20:12.174 "assigned_rate_limits": { 00:20:12.174 "rw_ios_per_sec": 0, 00:20:12.174 "rw_mbytes_per_sec": 0, 00:20:12.174 "r_mbytes_per_sec": 0, 00:20:12.174 "w_mbytes_per_sec": 0 00:20:12.174 }, 00:20:12.174 "claimed": true, 00:20:12.174 "claim_type": "exclusive_write", 00:20:12.174 "zoned": false, 00:20:12.174 "supported_io_types": { 00:20:12.174 "read": true, 00:20:12.174 "write": true, 00:20:12.174 "unmap": true, 00:20:12.174 "write_zeroes": true, 00:20:12.174 "flush": true, 00:20:12.174 "reset": true, 00:20:12.174 "compare": false, 00:20:12.174 "compare_and_write": false, 00:20:12.174 "abort": true, 00:20:12.174 "nvme_admin": false, 00:20:12.174 "nvme_io": false 00:20:12.174 }, 00:20:12.174 "memory_domains": [ 00:20:12.174 { 00:20:12.174 "dma_device_id": "system", 00:20:12.174 "dma_device_type": 1 00:20:12.174 }, 00:20:12.174 { 00:20:12.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.174 "dma_device_type": 2 00:20:12.174 } 00:20:12.174 ], 00:20:12.174 "driver_specific": {} 00:20:12.174 }' 00:20:12.174 07:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:12.174 07:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:12.174 07:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:12.174 07:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:12.174 07:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:12.174 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:12.174 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:12.432 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:12.432 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:12.432 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:12.432 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:12.432 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:12.432 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:12.432 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:12.432 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:12.690 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:12.690 "name": "BaseBdev2", 00:20:12.690 "aliases": [ 00:20:12.690 "aad535dd-cd7d-4db9-aa0c-429148c305f6" 00:20:12.690 ], 00:20:12.690 "product_name": "Malloc disk", 00:20:12.690 "block_size": 512, 00:20:12.690 "num_blocks": 65536, 00:20:12.690 "uuid": "aad535dd-cd7d-4db9-aa0c-429148c305f6", 00:20:12.690 "assigned_rate_limits": { 00:20:12.690 "rw_ios_per_sec": 0, 00:20:12.690 "rw_mbytes_per_sec": 0, 00:20:12.690 "r_mbytes_per_sec": 0, 00:20:12.690 "w_mbytes_per_sec": 0 00:20:12.690 }, 00:20:12.690 "claimed": true, 
00:20:12.690 "claim_type": "exclusive_write", 00:20:12.690 "zoned": false, 00:20:12.690 "supported_io_types": { 00:20:12.690 "read": true, 00:20:12.690 "write": true, 00:20:12.690 "unmap": true, 00:20:12.690 "write_zeroes": true, 00:20:12.690 "flush": true, 00:20:12.690 "reset": true, 00:20:12.690 "compare": false, 00:20:12.690 "compare_and_write": false, 00:20:12.690 "abort": true, 00:20:12.690 "nvme_admin": false, 00:20:12.690 "nvme_io": false 00:20:12.690 }, 00:20:12.690 "memory_domains": [ 00:20:12.690 { 00:20:12.690 "dma_device_id": "system", 00:20:12.690 "dma_device_type": 1 00:20:12.690 }, 00:20:12.690 { 00:20:12.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.690 "dma_device_type": 2 00:20:12.690 } 00:20:12.690 ], 00:20:12.690 "driver_specific": {} 00:20:12.690 }' 00:20:12.690 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:12.690 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:12.690 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:12.690 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:12.690 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:12.949 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:12.949 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:12.949 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:12.949 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:12.949 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:12.949 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:12.949 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:12.949 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:12.949 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:12.949 07:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:13.207 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:13.207 "name": "BaseBdev3", 00:20:13.207 "aliases": [ 00:20:13.207 "1e2b2f9f-c383-42a1-becc-0d14343aa4c3" 00:20:13.207 ], 00:20:13.207 "product_name": "Malloc disk", 00:20:13.207 "block_size": 512, 00:20:13.207 "num_blocks": 65536, 00:20:13.207 "uuid": "1e2b2f9f-c383-42a1-becc-0d14343aa4c3", 00:20:13.207 "assigned_rate_limits": { 00:20:13.207 "rw_ios_per_sec": 0, 00:20:13.207 "rw_mbytes_per_sec": 0, 00:20:13.207 "r_mbytes_per_sec": 0, 00:20:13.207 "w_mbytes_per_sec": 0 00:20:13.207 }, 00:20:13.207 "claimed": true, 00:20:13.207 "claim_type": "exclusive_write", 00:20:13.207 "zoned": false, 00:20:13.207 "supported_io_types": { 00:20:13.207 "read": true, 00:20:13.207 "write": true, 00:20:13.207 "unmap": true, 00:20:13.207 "write_zeroes": true, 00:20:13.207 "flush": true, 00:20:13.207 "reset": true, 00:20:13.207 "compare": false, 00:20:13.207 "compare_and_write": false, 00:20:13.207 "abort": true, 00:20:13.207 "nvme_admin": false, 00:20:13.207 "nvme_io": false 00:20:13.207 }, 00:20:13.207 "memory_domains": [ 
00:20:13.207 { 00:20:13.207 "dma_device_id": "system", 00:20:13.207 "dma_device_type": 1 00:20:13.207 }, 00:20:13.207 { 00:20:13.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.207 "dma_device_type": 2 00:20:13.207 } 00:20:13.207 ], 00:20:13.207 "driver_specific": {} 00:20:13.207 }' 00:20:13.207 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:13.207 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:13.464 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:13.464 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:13.464 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:13.464 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:13.465 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:13.465 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:13.465 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:13.465 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:13.465 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:13.722 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:13.722 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:13.980 [2024-07-12 07:30:47.628117] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:13.980 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:13.980 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:13.980 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:13.980 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:13.980 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:13.980 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:13.980 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:13.980 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:13.980 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:13.980 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:13.980 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:13.980 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:13.980 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:13.980 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:13.980 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:13.980 07:30:47 bdev_raid.raid_state_function_test -- 
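The bdev_malloc_delete above exercises the redundancy branch: has_redundancy succeeds for raid1, so removing one mirror must leave the array online, merely degraded, with the expected operational count dropped to 2. That is what the following dump shows, the lost slot keeping its position in base_bdevs_list with a null name. In sketch form:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  rpc bdev_malloc_delete BaseBdev1
  info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  [ "$(jq -r '.state' <<< "$info")" = online ]                   # degraded, not dead
  [ "$(jq -r '.num_base_bdevs_operational' <<< "$info")" -eq 2 ]
  jq -e '.base_bdevs_list[0].name == null' <<< "$info"           # the slot stays, the name goes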
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.980 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.239 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:14.239 "name": "Existed_Raid", 00:20:14.239 "uuid": "22000398-e172-4f95-8f09-5ce4966068a5", 00:20:14.239 "strip_size_kb": 0, 00:20:14.239 "state": "online", 00:20:14.239 "raid_level": "raid1", 00:20:14.239 "superblock": false, 00:20:14.239 "num_base_bdevs": 3, 00:20:14.239 "num_base_bdevs_discovered": 2, 00:20:14.239 "num_base_bdevs_operational": 2, 00:20:14.239 "base_bdevs_list": [ 00:20:14.239 { 00:20:14.239 "name": null, 00:20:14.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.239 "is_configured": false, 00:20:14.239 "data_offset": 0, 00:20:14.239 "data_size": 65536 00:20:14.239 }, 00:20:14.239 { 00:20:14.239 "name": "BaseBdev2", 00:20:14.239 "uuid": "aad535dd-cd7d-4db9-aa0c-429148c305f6", 00:20:14.239 "is_configured": true, 00:20:14.239 "data_offset": 0, 00:20:14.239 "data_size": 65536 00:20:14.239 }, 00:20:14.239 { 00:20:14.239 "name": "BaseBdev3", 00:20:14.239 "uuid": "1e2b2f9f-c383-42a1-becc-0d14343aa4c3", 00:20:14.239 "is_configured": true, 00:20:14.239 "data_offset": 0, 00:20:14.239 "data_size": 65536 00:20:14.239 } 00:20:14.239 ] 00:20:14.239 }' 00:20:14.239 07:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:14.239 07:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.805 07:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:14.805 07:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:14.805 07:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.805 07:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:15.064 07:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:15.064 07:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:15.064 07:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:15.322 [2024-07-12 07:30:48.977518] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:15.322 07:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:15.322 07:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:15.322 07:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.322 07:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:15.581 07:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:15.581 07:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:15.581 07:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:15.839 [2024-07-12 07:30:49.481857] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:15.839 [2024-07-12 07:30:49.482003] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:15.839 [2024-07-12 07:30:49.503559] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:15.839 [2024-07-12 07:30:49.503616] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:15.839 [2024-07-12 07:30:49.503644] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:20:15.839 07:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:15.839 07:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:15.839 07:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.839 07:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:15.839 07:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:15.839 07:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:15.839 07:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:15.839 07:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:15.839 07:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:16.098 07:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:16.098 BaseBdev2 00:20:16.098 07:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:16.098 07:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:20:16.098 07:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:16.098 07:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:16.098 07:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:16.098 07:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:16.098 07:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:16.374 07:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:16.633 [ 00:20:16.633 { 00:20:16.633 "name": "BaseBdev2", 00:20:16.633 "aliases": [ 00:20:16.633 "2a4148b9-3e86-4720-823a-6ac1e33664eb" 00:20:16.633 ], 00:20:16.633 "product_name": "Malloc disk", 00:20:16.633 "block_size": 512, 00:20:16.633 "num_blocks": 65536, 00:20:16.633 "uuid": "2a4148b9-3e86-4720-823a-6ac1e33664eb", 00:20:16.633 "assigned_rate_limits": { 00:20:16.633 "rw_ios_per_sec": 0, 00:20:16.633 "rw_mbytes_per_sec": 0, 00:20:16.633 "r_mbytes_per_sec": 0, 00:20:16.633 "w_mbytes_per_sec": 0 00:20:16.633 }, 
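Deleting the last member tips the raid from online to offline, after which the bdev is destructed and freed, so the follow-up query in the trace comes back empty; the harness then starts rebuilding the member set (note that BaseBdev2 reappears with a fresh UUID, 2a4148b9... instead of the earlier aad535dd..., so it is a brand-new Malloc bdev, not the old one returning). The emptiness check is the same jq select trick used throughout:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  raid_bdev=$(rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
  [ -z "$raid_bdev" ]   # no raid bdev survives the loss of its last member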
00:20:16.633 "claimed": false, 00:20:16.633 "zoned": false, 00:20:16.633 "supported_io_types": { 00:20:16.633 "read": true, 00:20:16.633 "write": true, 00:20:16.633 "unmap": true, 00:20:16.633 "write_zeroes": true, 00:20:16.633 "flush": true, 00:20:16.633 "reset": true, 00:20:16.633 "compare": false, 00:20:16.633 "compare_and_write": false, 00:20:16.633 "abort": true, 00:20:16.633 "nvme_admin": false, 00:20:16.633 "nvme_io": false 00:20:16.633 }, 00:20:16.633 "memory_domains": [ 00:20:16.633 { 00:20:16.633 "dma_device_id": "system", 00:20:16.633 "dma_device_type": 1 00:20:16.633 }, 00:20:16.633 { 00:20:16.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.633 "dma_device_type": 2 00:20:16.633 } 00:20:16.633 ], 00:20:16.633 "driver_specific": {} 00:20:16.633 } 00:20:16.633 ] 00:20:16.633 07:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:16.633 07:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:16.633 07:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:16.633 07:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:16.633 BaseBdev3 00:20:16.893 07:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:16.893 07:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:20:16.893 07:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:16.893 07:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:16.893 07:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:16.893 07:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:16.893 07:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:16.893 07:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:17.152 [ 00:20:17.152 { 00:20:17.152 "name": "BaseBdev3", 00:20:17.152 "aliases": [ 00:20:17.152 "516f3b5c-a68f-4965-a64d-ad9c752f590e" 00:20:17.152 ], 00:20:17.152 "product_name": "Malloc disk", 00:20:17.152 "block_size": 512, 00:20:17.153 "num_blocks": 65536, 00:20:17.153 "uuid": "516f3b5c-a68f-4965-a64d-ad9c752f590e", 00:20:17.153 "assigned_rate_limits": { 00:20:17.153 "rw_ios_per_sec": 0, 00:20:17.153 "rw_mbytes_per_sec": 0, 00:20:17.153 "r_mbytes_per_sec": 0, 00:20:17.153 "w_mbytes_per_sec": 0 00:20:17.153 }, 00:20:17.153 "claimed": false, 00:20:17.153 "zoned": false, 00:20:17.153 "supported_io_types": { 00:20:17.153 "read": true, 00:20:17.153 "write": true, 00:20:17.153 "unmap": true, 00:20:17.153 "write_zeroes": true, 00:20:17.153 "flush": true, 00:20:17.153 "reset": true, 00:20:17.153 "compare": false, 00:20:17.153 "compare_and_write": false, 00:20:17.153 "abort": true, 00:20:17.153 "nvme_admin": false, 00:20:17.153 "nvme_io": false 00:20:17.153 }, 00:20:17.153 "memory_domains": [ 00:20:17.153 { 00:20:17.153 "dma_device_id": "system", 00:20:17.153 "dma_device_type": 1 00:20:17.153 }, 00:20:17.153 { 00:20:17.153 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:17.153 "dma_device_type": 2 00:20:17.153 } 00:20:17.153 ], 00:20:17.153 "driver_specific": {} 00:20:17.153 } 00:20:17.153 ] 00:20:17.153 07:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:17.153 07:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:17.153 07:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:17.153 07:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:17.411 [2024-07-12 07:30:51.077745] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:17.411 [2024-07-12 07:30:51.077881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:17.411 [2024-07-12 07:30:51.077917] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:17.411 [2024-07-12 07:30:51.080391] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:17.411 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:17.411 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:17.411 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:17.411 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:17.411 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:17.411 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:17.411 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:17.411 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:17.411 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:17.411 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:17.411 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.411 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.670 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:17.670 "name": "Existed_Raid", 00:20:17.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.670 "strip_size_kb": 0, 00:20:17.670 "state": "configuring", 00:20:17.670 "raid_level": "raid1", 00:20:17.670 "superblock": false, 00:20:17.670 "num_base_bdevs": 3, 00:20:17.670 "num_base_bdevs_discovered": 2, 00:20:17.670 "num_base_bdevs_operational": 3, 00:20:17.670 "base_bdevs_list": [ 00:20:17.670 { 00:20:17.670 "name": "BaseBdev1", 00:20:17.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.670 "is_configured": false, 00:20:17.670 "data_offset": 0, 00:20:17.670 "data_size": 0 00:20:17.670 }, 00:20:17.670 { 00:20:17.670 "name": "BaseBdev2", 00:20:17.670 "uuid": "2a4148b9-3e86-4720-823a-6ac1e33664eb", 00:20:17.670 "is_configured": 
true, 00:20:17.670 "data_offset": 0, 00:20:17.670 "data_size": 65536 00:20:17.670 }, 00:20:17.670 { 00:20:17.670 "name": "BaseBdev3", 00:20:17.670 "uuid": "516f3b5c-a68f-4965-a64d-ad9c752f590e", 00:20:17.670 "is_configured": true, 00:20:17.670 "data_offset": 0, 00:20:17.670 "data_size": 65536 00:20:17.670 } 00:20:17.670 ] 00:20:17.670 }' 00:20:17.670 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:17.670 07:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.929 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:18.188 [2024-07-12 07:30:51.945734] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:18.188 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:18.188 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:18.188 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:18.188 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:18.188 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:18.188 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:18.188 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:18.188 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:18.188 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:18.188 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:18.188 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.188 07:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.446 07:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:18.446 "name": "Existed_Raid", 00:20:18.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.446 "strip_size_kb": 0, 00:20:18.446 "state": "configuring", 00:20:18.446 "raid_level": "raid1", 00:20:18.446 "superblock": false, 00:20:18.446 "num_base_bdevs": 3, 00:20:18.446 "num_base_bdevs_discovered": 1, 00:20:18.446 "num_base_bdevs_operational": 3, 00:20:18.446 "base_bdevs_list": [ 00:20:18.446 { 00:20:18.446 "name": "BaseBdev1", 00:20:18.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.446 "is_configured": false, 00:20:18.446 "data_offset": 0, 00:20:18.446 "data_size": 0 00:20:18.446 }, 00:20:18.446 { 00:20:18.446 "name": null, 00:20:18.446 "uuid": "2a4148b9-3e86-4720-823a-6ac1e33664eb", 00:20:18.446 "is_configured": false, 00:20:18.446 "data_offset": 0, 00:20:18.446 "data_size": 65536 00:20:18.446 }, 00:20:18.446 { 00:20:18.446 "name": "BaseBdev3", 00:20:18.446 "uuid": "516f3b5c-a68f-4965-a64d-ad9c752f590e", 00:20:18.446 "is_configured": true, 00:20:18.446 "data_offset": 0, 00:20:18.446 "data_size": 65536 00:20:18.446 } 00:20:18.446 ] 00:20:18.446 }' 00:20:18.446 07:30:52 
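This last pass exercises two edge cases back to back. First, bdev_raid_create succeeded above with only BaseBdev2 and BaseBdev3 present, claiming both and parking the raid in configuring with 2 of 3 discovered. Then bdev_raid_remove_base_bdev detached BaseBdev2 from that still-configuring raid: as the dump above confirms, the slot keeps its position and UUID in base_bdevs_list but its name goes null and is_configured drops to false, which is exactly what the jq probe below asserts before the harness finally creates BaseBdev1. Reconstructed:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  # Create with a missing member: legal, the raid stays "configuring".
  rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  rpc bdev_raid_get_bdevs configuring | jq -e '.[] | select(.name == "Existed_Raid")'
  # Detach an already-claimed member; its slot flips to unconfigured.
  rpc bdev_raid_remove_base_bdev BaseBdev2
  rpc bdev_raid_get_bdevs all | jq -e '.[0].base_bdevs_list[1].is_configured == false'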
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:18.446 07:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.013 07:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.013 07:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:19.271 07:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:19.271 07:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:19.528 [2024-07-12 07:30:53.331461] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:19.528 BaseBdev1 00:20:19.528 07:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:19.528 07:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:20:19.528 07:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:19.528 07:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:19.528 07:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:19.528 07:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:19.528 07:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:19.784 07:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:20.041 [ 00:20:20.041 { 00:20:20.041 "name": "BaseBdev1", 00:20:20.041 "aliases": [ 00:20:20.041 "f4604414-857b-4b99-9698-f2cd43b161cf" 00:20:20.041 ], 00:20:20.041 "product_name": "Malloc disk", 00:20:20.041 "block_size": 512, 00:20:20.041 "num_blocks": 65536, 00:20:20.041 "uuid": "f4604414-857b-4b99-9698-f2cd43b161cf", 00:20:20.041 "assigned_rate_limits": { 00:20:20.041 "rw_ios_per_sec": 0, 00:20:20.041 "rw_mbytes_per_sec": 0, 00:20:20.041 "r_mbytes_per_sec": 0, 00:20:20.041 "w_mbytes_per_sec": 0 00:20:20.041 }, 00:20:20.041 "claimed": true, 00:20:20.041 "claim_type": "exclusive_write", 00:20:20.041 "zoned": false, 00:20:20.041 "supported_io_types": { 00:20:20.041 "read": true, 00:20:20.041 "write": true, 00:20:20.041 "unmap": true, 00:20:20.041 "write_zeroes": true, 00:20:20.041 "flush": true, 00:20:20.041 "reset": true, 00:20:20.041 "compare": false, 00:20:20.041 "compare_and_write": false, 00:20:20.041 "abort": true, 00:20:20.041 "nvme_admin": false, 00:20:20.041 "nvme_io": false 00:20:20.041 }, 00:20:20.041 "memory_domains": [ 00:20:20.041 { 00:20:20.041 "dma_device_id": "system", 00:20:20.041 "dma_device_type": 1 00:20:20.041 }, 00:20:20.041 { 00:20:20.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.041 "dma_device_type": 2 00:20:20.041 } 00:20:20.041 ], 00:20:20.041 "driver_specific": {} 00:20:20.041 } 00:20:20.041 ] 00:20:20.041 07:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:20.041 07:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 
-- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:20.041 07:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:20.041 07:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:20.041 07:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:20.041 07:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:20.041 07:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:20.041 07:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:20.041 07:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:20.041 07:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:20.041 07:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:20.041 07:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.041 07:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.298 07:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:20.298 "name": "Existed_Raid", 00:20:20.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.298 "strip_size_kb": 0, 00:20:20.298 "state": "configuring", 00:20:20.298 "raid_level": "raid1", 00:20:20.298 "superblock": false, 00:20:20.298 "num_base_bdevs": 3, 00:20:20.298 "num_base_bdevs_discovered": 2, 00:20:20.298 "num_base_bdevs_operational": 3, 00:20:20.298 "base_bdevs_list": [ 00:20:20.298 { 00:20:20.298 "name": "BaseBdev1", 00:20:20.298 "uuid": "f4604414-857b-4b99-9698-f2cd43b161cf", 00:20:20.298 "is_configured": true, 00:20:20.298 "data_offset": 0, 00:20:20.298 "data_size": 65536 00:20:20.298 }, 00:20:20.298 { 00:20:20.298 "name": null, 00:20:20.298 "uuid": "2a4148b9-3e86-4720-823a-6ac1e33664eb", 00:20:20.298 "is_configured": false, 00:20:20.298 "data_offset": 0, 00:20:20.298 "data_size": 65536 00:20:20.298 }, 00:20:20.298 { 00:20:20.298 "name": "BaseBdev3", 00:20:20.298 "uuid": "516f3b5c-a68f-4965-a64d-ad9c752f590e", 00:20:20.298 "is_configured": true, 00:20:20.298 "data_offset": 0, 00:20:20.298 "data_size": 65536 00:20:20.298 } 00:20:20.298 ] 00:20:20.298 }' 00:20:20.298 07:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:20.298 07:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.865 07:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.865 07:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:21.123 07:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:21.123 07:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:21.381 [2024-07-12 07:30:55.143905] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:21.381 07:30:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:21.381 07:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:21.381 07:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:21.381 07:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:21.381 07:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:21.381 07:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:21.381 07:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:21.381 07:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:21.381 07:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:21.381 07:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:21.381 07:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.381 07:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:21.640 07:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:21.640 "name": "Existed_Raid", 00:20:21.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.640 "strip_size_kb": 0, 00:20:21.640 "state": "configuring", 00:20:21.640 "raid_level": "raid1", 00:20:21.640 "superblock": false, 00:20:21.640 "num_base_bdevs": 3, 00:20:21.640 "num_base_bdevs_discovered": 1, 00:20:21.640 "num_base_bdevs_operational": 3, 00:20:21.640 "base_bdevs_list": [ 00:20:21.640 { 00:20:21.640 "name": "BaseBdev1", 00:20:21.640 "uuid": "f4604414-857b-4b99-9698-f2cd43b161cf", 00:20:21.640 "is_configured": true, 00:20:21.640 "data_offset": 0, 00:20:21.640 "data_size": 65536 00:20:21.640 }, 00:20:21.640 { 00:20:21.640 "name": null, 00:20:21.640 "uuid": "2a4148b9-3e86-4720-823a-6ac1e33664eb", 00:20:21.640 "is_configured": false, 00:20:21.640 "data_offset": 0, 00:20:21.640 "data_size": 65536 00:20:21.640 }, 00:20:21.640 { 00:20:21.640 "name": null, 00:20:21.640 "uuid": "516f3b5c-a68f-4965-a64d-ad9c752f590e", 00:20:21.640 "is_configured": false, 00:20:21.640 "data_offset": 0, 00:20:21.640 "data_size": 65536 00:20:21.640 } 00:20:21.640 ] 00:20:21.640 }' 00:20:21.640 07:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:21.640 07:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.217 07:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.217 07:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:22.490 07:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:22.490 07:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:22.748 [2024-07-12 07:30:56.460275] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:22.748 07:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:22.748 07:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:22.748 07:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:22.748 07:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:22.748 07:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:22.748 07:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:22.748 07:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:22.748 07:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:22.748 07:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:22.748 07:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:22.748 07:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.748 07:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.007 07:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:23.007 "name": "Existed_Raid", 00:20:23.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.007 "strip_size_kb": 0, 00:20:23.007 "state": "configuring", 00:20:23.007 "raid_level": "raid1", 00:20:23.007 "superblock": false, 00:20:23.007 "num_base_bdevs": 3, 00:20:23.007 "num_base_bdevs_discovered": 2, 00:20:23.007 "num_base_bdevs_operational": 3, 00:20:23.007 "base_bdevs_list": [ 00:20:23.007 { 00:20:23.007 "name": "BaseBdev1", 00:20:23.007 "uuid": "f4604414-857b-4b99-9698-f2cd43b161cf", 00:20:23.007 "is_configured": true, 00:20:23.007 "data_offset": 0, 00:20:23.007 "data_size": 65536 00:20:23.007 }, 00:20:23.007 { 00:20:23.007 "name": null, 00:20:23.007 "uuid": "2a4148b9-3e86-4720-823a-6ac1e33664eb", 00:20:23.007 "is_configured": false, 00:20:23.007 "data_offset": 0, 00:20:23.007 "data_size": 65536 00:20:23.007 }, 00:20:23.007 { 00:20:23.007 "name": "BaseBdev3", 00:20:23.007 "uuid": "516f3b5c-a68f-4965-a64d-ad9c752f590e", 00:20:23.007 "is_configured": true, 00:20:23.007 "data_offset": 0, 00:20:23.007 "data_size": 65536 00:20:23.007 } 00:20:23.007 ] 00:20:23.007 }' 00:20:23.007 07:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:23.007 07:30:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.574 07:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.574 07:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:23.574 07:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:23.574 07:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:20:23.833 [2024-07-12 07:30:57.672533] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:23.833 07:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:23.833 07:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:23.833 07:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:23.833 07:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:23.833 07:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:23.833 07:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:23.833 07:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:23.833 07:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:23.833 07:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:23.833 07:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:24.091 07:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.091 07:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.091 07:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:24.091 "name": "Existed_Raid", 00:20:24.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.091 "strip_size_kb": 0, 00:20:24.091 "state": "configuring", 00:20:24.091 "raid_level": "raid1", 00:20:24.091 "superblock": false, 00:20:24.091 "num_base_bdevs": 3, 00:20:24.091 "num_base_bdevs_discovered": 1, 00:20:24.091 "num_base_bdevs_operational": 3, 00:20:24.091 "base_bdevs_list": [ 00:20:24.091 { 00:20:24.091 "name": null, 00:20:24.091 "uuid": "f4604414-857b-4b99-9698-f2cd43b161cf", 00:20:24.091 "is_configured": false, 00:20:24.091 "data_offset": 0, 00:20:24.091 "data_size": 65536 00:20:24.091 }, 00:20:24.091 { 00:20:24.091 "name": null, 00:20:24.091 "uuid": "2a4148b9-3e86-4720-823a-6ac1e33664eb", 00:20:24.091 "is_configured": false, 00:20:24.091 "data_offset": 0, 00:20:24.091 "data_size": 65536 00:20:24.091 }, 00:20:24.091 { 00:20:24.091 "name": "BaseBdev3", 00:20:24.091 "uuid": "516f3b5c-a68f-4965-a64d-ad9c752f590e", 00:20:24.091 "is_configured": true, 00:20:24.091 "data_offset": 0, 00:20:24.091 "data_size": 65536 00:20:24.091 } 00:20:24.091 ] 00:20:24.091 }' 00:20:24.091 07:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:24.091 07:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.658 07:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.658 07:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:24.917 07:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:24.917 07:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:25.176 [2024-07-12 07:30:58.904096] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:25.176 07:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:25.176 07:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:25.176 07:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:25.176 07:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:25.176 07:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:25.176 07:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:25.176 07:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:25.176 07:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:25.176 07:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:25.176 07:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:25.176 07:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.176 07:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:25.436 07:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:25.436 "name": "Existed_Raid", 00:20:25.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.436 "strip_size_kb": 0, 00:20:25.436 "state": "configuring", 00:20:25.436 "raid_level": "raid1", 00:20:25.436 "superblock": false, 00:20:25.436 "num_base_bdevs": 3, 00:20:25.436 "num_base_bdevs_discovered": 2, 00:20:25.436 "num_base_bdevs_operational": 3, 00:20:25.436 "base_bdevs_list": [ 00:20:25.436 { 00:20:25.436 "name": null, 00:20:25.436 "uuid": "f4604414-857b-4b99-9698-f2cd43b161cf", 00:20:25.436 "is_configured": false, 00:20:25.436 "data_offset": 0, 00:20:25.436 "data_size": 65536 00:20:25.436 }, 00:20:25.436 { 00:20:25.436 "name": "BaseBdev2", 00:20:25.436 "uuid": "2a4148b9-3e86-4720-823a-6ac1e33664eb", 00:20:25.436 "is_configured": true, 00:20:25.436 "data_offset": 0, 00:20:25.436 "data_size": 65536 00:20:25.436 }, 00:20:25.436 { 00:20:25.436 "name": "BaseBdev3", 00:20:25.436 "uuid": "516f3b5c-a68f-4965-a64d-ad9c752f590e", 00:20:25.436 "is_configured": true, 00:20:25.436 "data_offset": 0, 00:20:25.436 "data_size": 65536 00:20:25.436 } 00:20:25.436 ] 00:20:25.436 }' 00:20:25.436 07:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:25.436 07:30:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.005 07:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:26.005 07:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.264 07:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:26.264 
07:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:26.264 07:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.523 07:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f4604414-857b-4b99-9698-f2cd43b161cf 00:20:26.782 [2024-07-12 07:31:00.573900] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:26.782 [2024-07-12 07:31:00.573966] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:20:26.782 [2024-07-12 07:31:00.573974] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:26.782 [2024-07-12 07:31:00.574062] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:20:26.782 [2024-07-12 07:31:00.574405] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:20:26.782 [2024-07-12 07:31:00.574424] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:20:26.782 [2024-07-12 07:31:00.574625] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:26.782 NewBaseBdev 00:20:26.782 07:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:26.782 07:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:20:26.782 07:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:26.782 07:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:20:26.782 07:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:26.782 07:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:26.782 07:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:27.041 07:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:27.300 [ 00:20:27.300 { 00:20:27.300 "name": "NewBaseBdev", 00:20:27.300 "aliases": [ 00:20:27.300 "f4604414-857b-4b99-9698-f2cd43b161cf" 00:20:27.300 ], 00:20:27.300 "product_name": "Malloc disk", 00:20:27.300 "block_size": 512, 00:20:27.300 "num_blocks": 65536, 00:20:27.300 "uuid": "f4604414-857b-4b99-9698-f2cd43b161cf", 00:20:27.300 "assigned_rate_limits": { 00:20:27.300 "rw_ios_per_sec": 0, 00:20:27.300 "rw_mbytes_per_sec": 0, 00:20:27.300 "r_mbytes_per_sec": 0, 00:20:27.300 "w_mbytes_per_sec": 0 00:20:27.300 }, 00:20:27.300 "claimed": true, 00:20:27.300 "claim_type": "exclusive_write", 00:20:27.301 "zoned": false, 00:20:27.301 "supported_io_types": { 00:20:27.301 "read": true, 00:20:27.301 "write": true, 00:20:27.301 "unmap": true, 00:20:27.301 "write_zeroes": true, 00:20:27.301 "flush": true, 00:20:27.301 "reset": true, 00:20:27.301 "compare": false, 00:20:27.301 "compare_and_write": false, 00:20:27.301 "abort": true, 00:20:27.301 "nvme_admin": false, 00:20:27.301 "nvme_io": false 00:20:27.301 }, 
00:20:27.301 "memory_domains": [ 00:20:27.301 { 00:20:27.301 "dma_device_id": "system", 00:20:27.301 "dma_device_type": 1 00:20:27.301 }, 00:20:27.301 { 00:20:27.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.301 "dma_device_type": 2 00:20:27.301 } 00:20:27.301 ], 00:20:27.301 "driver_specific": {} 00:20:27.301 } 00:20:27.301 ] 00:20:27.301 07:31:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:20:27.301 07:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:27.301 07:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:27.301 07:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:27.301 07:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:27.301 07:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:27.301 07:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:27.301 07:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:27.301 07:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:27.301 07:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:27.301 07:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:27.301 07:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.301 07:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:27.560 07:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:27.560 "name": "Existed_Raid", 00:20:27.560 "uuid": "16f06f4f-144c-495e-ba63-6d9b069523cd", 00:20:27.560 "strip_size_kb": 0, 00:20:27.560 "state": "online", 00:20:27.560 "raid_level": "raid1", 00:20:27.560 "superblock": false, 00:20:27.560 "num_base_bdevs": 3, 00:20:27.560 "num_base_bdevs_discovered": 3, 00:20:27.560 "num_base_bdevs_operational": 3, 00:20:27.560 "base_bdevs_list": [ 00:20:27.560 { 00:20:27.560 "name": "NewBaseBdev", 00:20:27.560 "uuid": "f4604414-857b-4b99-9698-f2cd43b161cf", 00:20:27.560 "is_configured": true, 00:20:27.560 "data_offset": 0, 00:20:27.560 "data_size": 65536 00:20:27.560 }, 00:20:27.560 { 00:20:27.560 "name": "BaseBdev2", 00:20:27.560 "uuid": "2a4148b9-3e86-4720-823a-6ac1e33664eb", 00:20:27.560 "is_configured": true, 00:20:27.560 "data_offset": 0, 00:20:27.560 "data_size": 65536 00:20:27.560 }, 00:20:27.560 { 00:20:27.560 "name": "BaseBdev3", 00:20:27.560 "uuid": "516f3b5c-a68f-4965-a64d-ad9c752f590e", 00:20:27.560 "is_configured": true, 00:20:27.560 "data_offset": 0, 00:20:27.560 "data_size": 65536 00:20:27.560 } 00:20:27.560 ] 00:20:27.560 }' 00:20:27.560 07:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:27.560 07:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.128 07:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:28.128 07:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=Existed_Raid 00:20:28.128 07:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:28.128 07:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:28.128 07:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:28.128 07:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:28.128 07:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:28.128 07:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:28.128 [2024-07-12 07:31:02.002480] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:28.387 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:28.387 "name": "Existed_Raid", 00:20:28.387 "aliases": [ 00:20:28.387 "16f06f4f-144c-495e-ba63-6d9b069523cd" 00:20:28.387 ], 00:20:28.387 "product_name": "Raid Volume", 00:20:28.387 "block_size": 512, 00:20:28.387 "num_blocks": 65536, 00:20:28.387 "uuid": "16f06f4f-144c-495e-ba63-6d9b069523cd", 00:20:28.387 "assigned_rate_limits": { 00:20:28.387 "rw_ios_per_sec": 0, 00:20:28.387 "rw_mbytes_per_sec": 0, 00:20:28.387 "r_mbytes_per_sec": 0, 00:20:28.387 "w_mbytes_per_sec": 0 00:20:28.387 }, 00:20:28.387 "claimed": false, 00:20:28.387 "zoned": false, 00:20:28.387 "supported_io_types": { 00:20:28.387 "read": true, 00:20:28.387 "write": true, 00:20:28.387 "unmap": false, 00:20:28.387 "write_zeroes": true, 00:20:28.387 "flush": false, 00:20:28.387 "reset": true, 00:20:28.387 "compare": false, 00:20:28.387 "compare_and_write": false, 00:20:28.387 "abort": false, 00:20:28.387 "nvme_admin": false, 00:20:28.387 "nvme_io": false 00:20:28.387 }, 00:20:28.387 "memory_domains": [ 00:20:28.387 { 00:20:28.387 "dma_device_id": "system", 00:20:28.387 "dma_device_type": 1 00:20:28.387 }, 00:20:28.387 { 00:20:28.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:28.387 "dma_device_type": 2 00:20:28.387 }, 00:20:28.387 { 00:20:28.387 "dma_device_id": "system", 00:20:28.387 "dma_device_type": 1 00:20:28.387 }, 00:20:28.387 { 00:20:28.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:28.387 "dma_device_type": 2 00:20:28.387 }, 00:20:28.387 { 00:20:28.387 "dma_device_id": "system", 00:20:28.387 "dma_device_type": 1 00:20:28.387 }, 00:20:28.387 { 00:20:28.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:28.387 "dma_device_type": 2 00:20:28.387 } 00:20:28.387 ], 00:20:28.387 "driver_specific": { 00:20:28.387 "raid": { 00:20:28.387 "uuid": "16f06f4f-144c-495e-ba63-6d9b069523cd", 00:20:28.387 "strip_size_kb": 0, 00:20:28.387 "state": "online", 00:20:28.387 "raid_level": "raid1", 00:20:28.387 "superblock": false, 00:20:28.387 "num_base_bdevs": 3, 00:20:28.387 "num_base_bdevs_discovered": 3, 00:20:28.387 "num_base_bdevs_operational": 3, 00:20:28.387 "base_bdevs_list": [ 00:20:28.387 { 00:20:28.387 "name": "NewBaseBdev", 00:20:28.387 "uuid": "f4604414-857b-4b99-9698-f2cd43b161cf", 00:20:28.387 "is_configured": true, 00:20:28.387 "data_offset": 0, 00:20:28.387 "data_size": 65536 00:20:28.387 }, 00:20:28.387 { 00:20:28.387 "name": "BaseBdev2", 00:20:28.387 "uuid": "2a4148b9-3e86-4720-823a-6ac1e33664eb", 00:20:28.387 "is_configured": true, 00:20:28.387 "data_offset": 0, 00:20:28.387 "data_size": 65536 00:20:28.387 }, 00:20:28.387 { 00:20:28.387 "name": "BaseBdev3", 
00:20:28.387 "uuid": "516f3b5c-a68f-4965-a64d-ad9c752f590e", 00:20:28.387 "is_configured": true, 00:20:28.387 "data_offset": 0, 00:20:28.387 "data_size": 65536 00:20:28.387 } 00:20:28.387 ] 00:20:28.387 } 00:20:28.387 } 00:20:28.387 }' 00:20:28.387 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:28.387 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:28.387 BaseBdev2 00:20:28.387 BaseBdev3' 00:20:28.387 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:28.387 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:28.387 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:28.387 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:28.387 "name": "NewBaseBdev", 00:20:28.387 "aliases": [ 00:20:28.387 "f4604414-857b-4b99-9698-f2cd43b161cf" 00:20:28.387 ], 00:20:28.387 "product_name": "Malloc disk", 00:20:28.387 "block_size": 512, 00:20:28.387 "num_blocks": 65536, 00:20:28.387 "uuid": "f4604414-857b-4b99-9698-f2cd43b161cf", 00:20:28.387 "assigned_rate_limits": { 00:20:28.387 "rw_ios_per_sec": 0, 00:20:28.387 "rw_mbytes_per_sec": 0, 00:20:28.387 "r_mbytes_per_sec": 0, 00:20:28.387 "w_mbytes_per_sec": 0 00:20:28.387 }, 00:20:28.387 "claimed": true, 00:20:28.387 "claim_type": "exclusive_write", 00:20:28.387 "zoned": false, 00:20:28.387 "supported_io_types": { 00:20:28.387 "read": true, 00:20:28.387 "write": true, 00:20:28.387 "unmap": true, 00:20:28.387 "write_zeroes": true, 00:20:28.387 "flush": true, 00:20:28.387 "reset": true, 00:20:28.387 "compare": false, 00:20:28.387 "compare_and_write": false, 00:20:28.387 "abort": true, 00:20:28.387 "nvme_admin": false, 00:20:28.387 "nvme_io": false 00:20:28.387 }, 00:20:28.387 "memory_domains": [ 00:20:28.387 { 00:20:28.387 "dma_device_id": "system", 00:20:28.387 "dma_device_type": 1 00:20:28.387 }, 00:20:28.387 { 00:20:28.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:28.387 "dma_device_type": 2 00:20:28.387 } 00:20:28.387 ], 00:20:28.387 "driver_specific": {} 00:20:28.387 }' 00:20:28.387 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:28.646 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:28.646 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:28.646 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:28.646 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:28.646 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:28.646 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:28.646 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:28.646 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:28.646 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:28.646 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:28.646 07:31:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:28.646 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:28.905 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:28.905 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:28.905 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:28.905 "name": "BaseBdev2", 00:20:28.905 "aliases": [ 00:20:28.905 "2a4148b9-3e86-4720-823a-6ac1e33664eb" 00:20:28.905 ], 00:20:28.905 "product_name": "Malloc disk", 00:20:28.905 "block_size": 512, 00:20:28.905 "num_blocks": 65536, 00:20:28.905 "uuid": "2a4148b9-3e86-4720-823a-6ac1e33664eb", 00:20:28.905 "assigned_rate_limits": { 00:20:28.905 "rw_ios_per_sec": 0, 00:20:28.905 "rw_mbytes_per_sec": 0, 00:20:28.905 "r_mbytes_per_sec": 0, 00:20:28.905 "w_mbytes_per_sec": 0 00:20:28.905 }, 00:20:28.905 "claimed": true, 00:20:28.905 "claim_type": "exclusive_write", 00:20:28.905 "zoned": false, 00:20:28.905 "supported_io_types": { 00:20:28.905 "read": true, 00:20:28.905 "write": true, 00:20:28.905 "unmap": true, 00:20:28.905 "write_zeroes": true, 00:20:28.905 "flush": true, 00:20:28.905 "reset": true, 00:20:28.905 "compare": false, 00:20:28.905 "compare_and_write": false, 00:20:28.905 "abort": true, 00:20:28.905 "nvme_admin": false, 00:20:28.905 "nvme_io": false 00:20:28.905 }, 00:20:28.905 "memory_domains": [ 00:20:28.905 { 00:20:28.905 "dma_device_id": "system", 00:20:28.905 "dma_device_type": 1 00:20:28.905 }, 00:20:28.905 { 00:20:28.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:28.905 "dma_device_type": 2 00:20:28.905 } 00:20:28.905 ], 00:20:28.905 "driver_specific": {} 00:20:28.905 }' 00:20:28.905 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:28.905 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:29.164 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:29.164 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:29.164 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:29.164 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:29.164 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:29.164 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:29.164 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:29.164 07:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:29.164 07:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:29.423 07:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:29.423 07:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:29.423 07:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:29.423 07:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:29.423 
07:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:29.423 "name": "BaseBdev3", 00:20:29.423 "aliases": [ 00:20:29.423 "516f3b5c-a68f-4965-a64d-ad9c752f590e" 00:20:29.423 ], 00:20:29.423 "product_name": "Malloc disk", 00:20:29.423 "block_size": 512, 00:20:29.423 "num_blocks": 65536, 00:20:29.423 "uuid": "516f3b5c-a68f-4965-a64d-ad9c752f590e", 00:20:29.423 "assigned_rate_limits": { 00:20:29.423 "rw_ios_per_sec": 0, 00:20:29.423 "rw_mbytes_per_sec": 0, 00:20:29.423 "r_mbytes_per_sec": 0, 00:20:29.423 "w_mbytes_per_sec": 0 00:20:29.423 }, 00:20:29.423 "claimed": true, 00:20:29.423 "claim_type": "exclusive_write", 00:20:29.423 "zoned": false, 00:20:29.423 "supported_io_types": { 00:20:29.423 "read": true, 00:20:29.423 "write": true, 00:20:29.423 "unmap": true, 00:20:29.423 "write_zeroes": true, 00:20:29.423 "flush": true, 00:20:29.423 "reset": true, 00:20:29.423 "compare": false, 00:20:29.423 "compare_and_write": false, 00:20:29.423 "abort": true, 00:20:29.423 "nvme_admin": false, 00:20:29.423 "nvme_io": false 00:20:29.423 }, 00:20:29.423 "memory_domains": [ 00:20:29.423 { 00:20:29.423 "dma_device_id": "system", 00:20:29.423 "dma_device_type": 1 00:20:29.423 }, 00:20:29.423 { 00:20:29.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.423 "dma_device_type": 2 00:20:29.423 } 00:20:29.423 ], 00:20:29.423 "driver_specific": {} 00:20:29.423 }' 00:20:29.423 07:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:29.681 07:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:29.681 07:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:29.681 07:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:29.681 07:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:29.681 07:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:29.681 07:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:29.681 07:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:29.681 07:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:29.681 07:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:29.681 07:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:29.939 07:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:29.939 07:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:29.939 [2024-07-12 07:31:03.762576] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:29.939 [2024-07-12 07:31:03.762843] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:29.939 [2024-07-12 07:31:03.763070] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:29.939 [2024-07-12 07:31:03.763375] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:29.939 [2024-07-12 07:31:03.763463] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:20:29.939 07:31:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 141127 00:20:29.939 07:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 141127 ']' 00:20:29.939 07:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 141127 00:20:29.939 07:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:20:29.939 07:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:29.939 07:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 141127 00:20:29.939 07:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:29.939 07:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:29.939 07:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 141127' 00:20:29.939 killing process with pid 141127 00:20:29.939 07:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 141127 00:20:29.939 [2024-07-12 07:31:03.814524] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:29.939 07:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 141127 00:20:30.197 [2024-07-12 07:31:03.845570] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:20:30.455 00:20:30.455 real 0m26.873s 00:20:30.455 user 0m49.572s 00:20:30.455 sys 0m4.520s 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.455 ************************************ 00:20:30.455 END TEST raid_state_function_test 00:20:30.455 ************************************ 00:20:30.455 07:31:04 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:20:30.455 07:31:04 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:20:30.455 07:31:04 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:30.455 07:31:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:30.455 ************************************ 00:20:30.455 START TEST raid_state_function_test_sb 00:20:30.455 ************************************ 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 3 true 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=142075 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 142075' 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:30.455 Process raid pid: 142075 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 142075 /var/tmp/spdk-raid.sock 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 142075 ']' 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:30.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
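The xtrace records above and below replay the same short RPC conversation that every one of these state-function tests drives against a bare bdev_svc app. Condensed into a standalone sketch (the binary path, socket name, and each RPC call are copied verbatim from the trace; the wrapper script itself is illustrative and the sleep stands in for the harness's waitforlisten polling):

#!/usr/bin/env bash
set -euo pipefail
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-raid.sock

# Start the minimal bdev service the tests talk to over a UNIX-socket RPC.
"$SPDK"/test/app/bdev_svc/bdev_svc -r "$SOCK" -i 0 -L bdev_raid &
svc_pid=$!
sleep 1   # the real harness polls with waitforlisten until $SOCK accepts RPCs

# Create three 32 MiB, 512 B-block malloc bdevs (65536 blocks each) as base devices.
for b in BaseBdev1 BaseBdev2 BaseBdev3; do
    "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_malloc_create 32 512 -b "$b"
done

# Assemble them into a raid1 bdev; -s requests an on-disk superblock, the
# flag that distinguishes this _sb variant from the plain test above.
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_raid_create -s -r raid1 \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

# Read the raid state back, along the lines of verify_raid_bdev_state.
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state'

kill "$svc_pid"

From there the tests mutate the array one base bdev at a time (bdev_raid_remove_base_bdev, bdev_raid_add_base_bdev, bdev_malloc_delete) and after each step re-run the bdev_raid_get_bdevs/jq query to assert the expected state, raid_level, and per-base-bdev is_configured flags, exactly as the trace shows.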
00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:30.455 07:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.455 [2024-07-12 07:31:04.272912] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:20:30.455 [2024-07-12 07:31:04.273490] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.714 [2024-07-12 07:31:04.434970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.714 [2024-07-12 07:31:04.526407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.972 [2024-07-12 07:31:04.612992] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:31.540 07:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:31.540 07:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:20:31.540 07:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:31.799 [2024-07-12 07:31:05.506058] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:31.799 [2024-07-12 07:31:05.506351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:31.799 [2024-07-12 07:31:05.506493] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:31.799 [2024-07-12 07:31:05.506550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:31.799 [2024-07-12 07:31:05.506631] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:31.799 [2024-07-12 07:31:05.506708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:31.799 07:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:31.799 07:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:31.799 07:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:31.799 07:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:31.799 07:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:31.799 07:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:31.799 07:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:31.799 07:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:31.799 07:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:31.799 07:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:31.799 07:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:20:31.799 07:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:32.059 07:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:32.059 "name": "Existed_Raid", 00:20:32.059 "uuid": "1481c5fe-0248-4da7-a155-7e028093af8a", 00:20:32.059 "strip_size_kb": 0, 00:20:32.059 "state": "configuring", 00:20:32.059 "raid_level": "raid1", 00:20:32.059 "superblock": true, 00:20:32.059 "num_base_bdevs": 3, 00:20:32.059 "num_base_bdevs_discovered": 0, 00:20:32.059 "num_base_bdevs_operational": 3, 00:20:32.059 "base_bdevs_list": [ 00:20:32.059 { 00:20:32.059 "name": "BaseBdev1", 00:20:32.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.059 "is_configured": false, 00:20:32.059 "data_offset": 0, 00:20:32.059 "data_size": 0 00:20:32.059 }, 00:20:32.059 { 00:20:32.059 "name": "BaseBdev2", 00:20:32.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.059 "is_configured": false, 00:20:32.059 "data_offset": 0, 00:20:32.059 "data_size": 0 00:20:32.059 }, 00:20:32.059 { 00:20:32.059 "name": "BaseBdev3", 00:20:32.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.059 "is_configured": false, 00:20:32.059 "data_offset": 0, 00:20:32.059 "data_size": 0 00:20:32.059 } 00:20:32.059 ] 00:20:32.059 }' 00:20:32.059 07:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:32.059 07:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.626 07:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:32.986 [2024-07-12 07:31:06.638047] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:32.986 [2024-07-12 07:31:06.638280] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:20:32.986 07:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:32.986 [2024-07-12 07:31:06.826141] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:32.986 [2024-07-12 07:31:06.826493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:32.986 [2024-07-12 07:31:06.826625] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:32.986 [2024-07-12 07:31:06.826683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:32.986 [2024-07-12 07:31:06.826763] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:32.986 [2024-07-12 07:31:06.826816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:32.986 07:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:33.275 [2024-07-12 07:31:07.090680] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:33.275 BaseBdev1 00:20:33.275 07:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:33.275 07:31:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:20:33.275 07:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:33.275 07:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:33.275 07:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:33.275 07:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:33.275 07:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:33.533 07:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:33.792 [ 00:20:33.792 { 00:20:33.792 "name": "BaseBdev1", 00:20:33.792 "aliases": [ 00:20:33.792 "7bd33bb3-0cfb-44dd-a3b0-e6133e8bced9" 00:20:33.792 ], 00:20:33.792 "product_name": "Malloc disk", 00:20:33.792 "block_size": 512, 00:20:33.792 "num_blocks": 65536, 00:20:33.792 "uuid": "7bd33bb3-0cfb-44dd-a3b0-e6133e8bced9", 00:20:33.792 "assigned_rate_limits": { 00:20:33.792 "rw_ios_per_sec": 0, 00:20:33.792 "rw_mbytes_per_sec": 0, 00:20:33.792 "r_mbytes_per_sec": 0, 00:20:33.792 "w_mbytes_per_sec": 0 00:20:33.792 }, 00:20:33.792 "claimed": true, 00:20:33.792 "claim_type": "exclusive_write", 00:20:33.792 "zoned": false, 00:20:33.792 "supported_io_types": { 00:20:33.792 "read": true, 00:20:33.792 "write": true, 00:20:33.792 "unmap": true, 00:20:33.792 "write_zeroes": true, 00:20:33.792 "flush": true, 00:20:33.792 "reset": true, 00:20:33.792 "compare": false, 00:20:33.792 "compare_and_write": false, 00:20:33.792 "abort": true, 00:20:33.792 "nvme_admin": false, 00:20:33.792 "nvme_io": false 00:20:33.792 }, 00:20:33.792 "memory_domains": [ 00:20:33.792 { 00:20:33.792 "dma_device_id": "system", 00:20:33.792 "dma_device_type": 1 00:20:33.792 }, 00:20:33.792 { 00:20:33.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.792 "dma_device_type": 2 00:20:33.792 } 00:20:33.792 ], 00:20:33.792 "driver_specific": {} 00:20:33.792 } 00:20:33.792 ] 00:20:33.792 07:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:33.792 07:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:33.792 07:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:33.792 07:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:33.792 07:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:33.792 07:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:33.792 07:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:33.792 07:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:33.792 07:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:33.792 07:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:33.792 07:31:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:33.792 07:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.792 07:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:34.050 07:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:34.050 "name": "Existed_Raid", 00:20:34.050 "uuid": "484e92d6-face-4436-8a5f-2f54d3087415", 00:20:34.050 "strip_size_kb": 0, 00:20:34.050 "state": "configuring", 00:20:34.050 "raid_level": "raid1", 00:20:34.050 "superblock": true, 00:20:34.050 "num_base_bdevs": 3, 00:20:34.050 "num_base_bdevs_discovered": 1, 00:20:34.050 "num_base_bdevs_operational": 3, 00:20:34.050 "base_bdevs_list": [ 00:20:34.050 { 00:20:34.050 "name": "BaseBdev1", 00:20:34.050 "uuid": "7bd33bb3-0cfb-44dd-a3b0-e6133e8bced9", 00:20:34.050 "is_configured": true, 00:20:34.050 "data_offset": 2048, 00:20:34.050 "data_size": 63488 00:20:34.050 }, 00:20:34.050 { 00:20:34.050 "name": "BaseBdev2", 00:20:34.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.050 "is_configured": false, 00:20:34.050 "data_offset": 0, 00:20:34.050 "data_size": 0 00:20:34.050 }, 00:20:34.050 { 00:20:34.050 "name": "BaseBdev3", 00:20:34.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.050 "is_configured": false, 00:20:34.050 "data_offset": 0, 00:20:34.050 "data_size": 0 00:20:34.050 } 00:20:34.050 ] 00:20:34.050 }' 00:20:34.050 07:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:34.050 07:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.617 07:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:34.874 [2024-07-12 07:31:08.639091] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:34.874 [2024-07-12 07:31:08.639475] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:20:34.874 07:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:35.132 [2024-07-12 07:31:08.915256] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:35.132 [2024-07-12 07:31:08.918073] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:35.132 [2024-07-12 07:31:08.918299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:35.132 [2024-07-12 07:31:08.918414] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:35.132 [2024-07-12 07:31:08.918516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:35.132 07:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:35.132 07:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:35.132 07:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:35.132 
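verify_raid_bdev_state, invoked after every mutation in this test, is essentially a jq comparison against bdev_raid_get_bdevs output. A reduced sketch of the check, with expected values mirroring the "configuring raid1 0 3" call above (variable names are illustrative):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    state=$(jq -r .state <<< "$info")
    level=$(jq -r .raid_level <<< "$info")
    operational=$(jq -r .num_base_bdevs_operational <<< "$info")
    # A half-assembled raid1 should sit in "configuring" with all three slots operational
    [[ $state == configuring && $level == raid1 && $operational -eq 3 ]] \
        || echo "unexpected raid state: $state/$level/$operational"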
07:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:35.132 07:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:35.132 07:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:35.132 07:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:35.132 07:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:35.132 07:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:35.132 07:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:35.132 07:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:35.132 07:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:35.132 07:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.132 07:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.390 07:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:35.390 "name": "Existed_Raid", 00:20:35.390 "uuid": "74733efe-4d6c-488c-bbee-969b9b945ea5", 00:20:35.390 "strip_size_kb": 0, 00:20:35.390 "state": "configuring", 00:20:35.390 "raid_level": "raid1", 00:20:35.390 "superblock": true, 00:20:35.390 "num_base_bdevs": 3, 00:20:35.390 "num_base_bdevs_discovered": 1, 00:20:35.390 "num_base_bdevs_operational": 3, 00:20:35.390 "base_bdevs_list": [ 00:20:35.390 { 00:20:35.390 "name": "BaseBdev1", 00:20:35.390 "uuid": "7bd33bb3-0cfb-44dd-a3b0-e6133e8bced9", 00:20:35.390 "is_configured": true, 00:20:35.390 "data_offset": 2048, 00:20:35.390 "data_size": 63488 00:20:35.390 }, 00:20:35.390 { 00:20:35.390 "name": "BaseBdev2", 00:20:35.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.390 "is_configured": false, 00:20:35.390 "data_offset": 0, 00:20:35.390 "data_size": 0 00:20:35.390 }, 00:20:35.390 { 00:20:35.390 "name": "BaseBdev3", 00:20:35.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.390 "is_configured": false, 00:20:35.390 "data_offset": 0, 00:20:35.390 "data_size": 0 00:20:35.390 } 00:20:35.390 ] 00:20:35.390 }' 00:20:35.390 07:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:35.390 07:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.955 07:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:36.213 [2024-07-12 07:31:10.051314] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:36.213 BaseBdev2 00:20:36.213 07:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:36.213 07:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:20:36.213 07:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:36.213 07:31:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@897 -- # local i 00:20:36.213 07:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:36.213 07:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:36.213 07:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:36.471 07:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:36.729 [ 00:20:36.729 { 00:20:36.729 "name": "BaseBdev2", 00:20:36.729 "aliases": [ 00:20:36.729 "503eba5d-4f25-4ea5-99ef-726a06cdb84e" 00:20:36.729 ], 00:20:36.729 "product_name": "Malloc disk", 00:20:36.729 "block_size": 512, 00:20:36.729 "num_blocks": 65536, 00:20:36.729 "uuid": "503eba5d-4f25-4ea5-99ef-726a06cdb84e", 00:20:36.729 "assigned_rate_limits": { 00:20:36.729 "rw_ios_per_sec": 0, 00:20:36.729 "rw_mbytes_per_sec": 0, 00:20:36.729 "r_mbytes_per_sec": 0, 00:20:36.729 "w_mbytes_per_sec": 0 00:20:36.729 }, 00:20:36.729 "claimed": true, 00:20:36.729 "claim_type": "exclusive_write", 00:20:36.729 "zoned": false, 00:20:36.729 "supported_io_types": { 00:20:36.729 "read": true, 00:20:36.729 "write": true, 00:20:36.729 "unmap": true, 00:20:36.729 "write_zeroes": true, 00:20:36.729 "flush": true, 00:20:36.729 "reset": true, 00:20:36.729 "compare": false, 00:20:36.729 "compare_and_write": false, 00:20:36.729 "abort": true, 00:20:36.729 "nvme_admin": false, 00:20:36.729 "nvme_io": false 00:20:36.729 }, 00:20:36.729 "memory_domains": [ 00:20:36.729 { 00:20:36.729 "dma_device_id": "system", 00:20:36.729 "dma_device_type": 1 00:20:36.729 }, 00:20:36.729 { 00:20:36.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.729 "dma_device_type": 2 00:20:36.729 } 00:20:36.729 ], 00:20:36.729 "driver_specific": {} 00:20:36.729 } 00:20:36.729 ] 00:20:36.986 07:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:36.986 07:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:36.986 07:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:36.986 07:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:36.986 07:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:36.986 07:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:36.986 07:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:36.986 07:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:36.986 07:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:36.986 07:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:36.986 07:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:36.987 07:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:36.987 07:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:36.987 07:31:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.987 07:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.244 07:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:37.244 "name": "Existed_Raid", 00:20:37.244 "uuid": "74733efe-4d6c-488c-bbee-969b9b945ea5", 00:20:37.244 "strip_size_kb": 0, 00:20:37.244 "state": "configuring", 00:20:37.244 "raid_level": "raid1", 00:20:37.244 "superblock": true, 00:20:37.244 "num_base_bdevs": 3, 00:20:37.244 "num_base_bdevs_discovered": 2, 00:20:37.244 "num_base_bdevs_operational": 3, 00:20:37.244 "base_bdevs_list": [ 00:20:37.244 { 00:20:37.244 "name": "BaseBdev1", 00:20:37.244 "uuid": "7bd33bb3-0cfb-44dd-a3b0-e6133e8bced9", 00:20:37.244 "is_configured": true, 00:20:37.244 "data_offset": 2048, 00:20:37.244 "data_size": 63488 00:20:37.244 }, 00:20:37.244 { 00:20:37.244 "name": "BaseBdev2", 00:20:37.244 "uuid": "503eba5d-4f25-4ea5-99ef-726a06cdb84e", 00:20:37.244 "is_configured": true, 00:20:37.244 "data_offset": 2048, 00:20:37.244 "data_size": 63488 00:20:37.244 }, 00:20:37.244 { 00:20:37.244 "name": "BaseBdev3", 00:20:37.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.244 "is_configured": false, 00:20:37.244 "data_offset": 0, 00:20:37.244 "data_size": 0 00:20:37.244 } 00:20:37.244 ] 00:20:37.244 }' 00:20:37.244 07:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:37.244 07:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.812 07:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:37.812 [2024-07-12 07:31:11.681480] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:37.812 [2024-07-12 07:31:11.682017] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:20:37.812 [2024-07-12 07:31:11.682132] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:37.812 [2024-07-12 07:31:11.682336] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:20:37.812 [2024-07-12 07:31:11.682888] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:20:37.812 [2024-07-12 07:31:11.682999] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:20:37.812 [2024-07-12 07:31:11.683245] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.812 BaseBdev3 00:20:38.071 07:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:38.071 07:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:20:38.071 07:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:38.071 07:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:38.071 07:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:38.071 07:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:38.071 07:31:11 
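As the DEBUG lines above show, claiming the last base bdev is what flips the raid from configuring to online: raid_bdev_configure_cont registers the io device the moment all three members are present. Counting configured slots is a one-liner; a sketch on the same socket:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Prints 3 once BaseBdev1..BaseBdev3 are all claimed
    $RPC bdev_raid_get_bdevs all \
        | jq '[.[0].base_bdevs_list[] | select(.is_configured == true)] | length'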
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:38.071 07:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:38.330 [ 00:20:38.330 { 00:20:38.330 "name": "BaseBdev3", 00:20:38.330 "aliases": [ 00:20:38.330 "3e4d3650-fedc-4172-b2c0-389d5dd0ba3f" 00:20:38.330 ], 00:20:38.330 "product_name": "Malloc disk", 00:20:38.330 "block_size": 512, 00:20:38.330 "num_blocks": 65536, 00:20:38.330 "uuid": "3e4d3650-fedc-4172-b2c0-389d5dd0ba3f", 00:20:38.330 "assigned_rate_limits": { 00:20:38.330 "rw_ios_per_sec": 0, 00:20:38.330 "rw_mbytes_per_sec": 0, 00:20:38.330 "r_mbytes_per_sec": 0, 00:20:38.330 "w_mbytes_per_sec": 0 00:20:38.330 }, 00:20:38.330 "claimed": true, 00:20:38.330 "claim_type": "exclusive_write", 00:20:38.330 "zoned": false, 00:20:38.330 "supported_io_types": { 00:20:38.330 "read": true, 00:20:38.330 "write": true, 00:20:38.330 "unmap": true, 00:20:38.330 "write_zeroes": true, 00:20:38.330 "flush": true, 00:20:38.330 "reset": true, 00:20:38.330 "compare": false, 00:20:38.330 "compare_and_write": false, 00:20:38.330 "abort": true, 00:20:38.330 "nvme_admin": false, 00:20:38.330 "nvme_io": false 00:20:38.330 }, 00:20:38.330 "memory_domains": [ 00:20:38.330 { 00:20:38.330 "dma_device_id": "system", 00:20:38.330 "dma_device_type": 1 00:20:38.330 }, 00:20:38.330 { 00:20:38.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.330 "dma_device_type": 2 00:20:38.330 } 00:20:38.330 ], 00:20:38.330 "driver_specific": {} 00:20:38.330 } 00:20:38.330 ] 00:20:38.330 07:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:38.330 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:38.330 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:38.330 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:38.330 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:38.330 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:38.330 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:38.330 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:38.330 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:38.330 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:38.330 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:38.330 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:38.330 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:38.330 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:38.330 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:20:38.589 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:38.589 "name": "Existed_Raid", 00:20:38.589 "uuid": "74733efe-4d6c-488c-bbee-969b9b945ea5", 00:20:38.589 "strip_size_kb": 0, 00:20:38.589 "state": "online", 00:20:38.589 "raid_level": "raid1", 00:20:38.589 "superblock": true, 00:20:38.589 "num_base_bdevs": 3, 00:20:38.589 "num_base_bdevs_discovered": 3, 00:20:38.589 "num_base_bdevs_operational": 3, 00:20:38.589 "base_bdevs_list": [ 00:20:38.589 { 00:20:38.589 "name": "BaseBdev1", 00:20:38.589 "uuid": "7bd33bb3-0cfb-44dd-a3b0-e6133e8bced9", 00:20:38.589 "is_configured": true, 00:20:38.589 "data_offset": 2048, 00:20:38.589 "data_size": 63488 00:20:38.589 }, 00:20:38.589 { 00:20:38.589 "name": "BaseBdev2", 00:20:38.589 "uuid": "503eba5d-4f25-4ea5-99ef-726a06cdb84e", 00:20:38.589 "is_configured": true, 00:20:38.589 "data_offset": 2048, 00:20:38.589 "data_size": 63488 00:20:38.589 }, 00:20:38.589 { 00:20:38.589 "name": "BaseBdev3", 00:20:38.589 "uuid": "3e4d3650-fedc-4172-b2c0-389d5dd0ba3f", 00:20:38.589 "is_configured": true, 00:20:38.589 "data_offset": 2048, 00:20:38.589 "data_size": 63488 00:20:38.589 } 00:20:38.589 ] 00:20:38.589 }' 00:20:38.589 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:38.589 07:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.156 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:39.156 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:39.156 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:39.156 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:39.156 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:39.156 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:39.156 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:39.156 07:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:39.156 [2024-07-12 07:31:13.002057] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:39.156 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:39.156 "name": "Existed_Raid", 00:20:39.156 "aliases": [ 00:20:39.156 "74733efe-4d6c-488c-bbee-969b9b945ea5" 00:20:39.156 ], 00:20:39.156 "product_name": "Raid Volume", 00:20:39.156 "block_size": 512, 00:20:39.156 "num_blocks": 63488, 00:20:39.156 "uuid": "74733efe-4d6c-488c-bbee-969b9b945ea5", 00:20:39.156 "assigned_rate_limits": { 00:20:39.156 "rw_ios_per_sec": 0, 00:20:39.156 "rw_mbytes_per_sec": 0, 00:20:39.156 "r_mbytes_per_sec": 0, 00:20:39.156 "w_mbytes_per_sec": 0 00:20:39.156 }, 00:20:39.156 "claimed": false, 00:20:39.156 "zoned": false, 00:20:39.156 "supported_io_types": { 00:20:39.156 "read": true, 00:20:39.156 "write": true, 00:20:39.156 "unmap": false, 00:20:39.156 "write_zeroes": true, 00:20:39.156 "flush": false, 00:20:39.156 "reset": true, 00:20:39.156 "compare": false, 00:20:39.156 "compare_and_write": false, 00:20:39.156 "abort": false, 00:20:39.156 "nvme_admin": false, 00:20:39.156 
"nvme_io": false 00:20:39.156 }, 00:20:39.156 "memory_domains": [ 00:20:39.156 { 00:20:39.156 "dma_device_id": "system", 00:20:39.156 "dma_device_type": 1 00:20:39.156 }, 00:20:39.156 { 00:20:39.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.156 "dma_device_type": 2 00:20:39.156 }, 00:20:39.156 { 00:20:39.156 "dma_device_id": "system", 00:20:39.156 "dma_device_type": 1 00:20:39.156 }, 00:20:39.156 { 00:20:39.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.156 "dma_device_type": 2 00:20:39.156 }, 00:20:39.156 { 00:20:39.156 "dma_device_id": "system", 00:20:39.156 "dma_device_type": 1 00:20:39.156 }, 00:20:39.156 { 00:20:39.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.156 "dma_device_type": 2 00:20:39.156 } 00:20:39.156 ], 00:20:39.156 "driver_specific": { 00:20:39.156 "raid": { 00:20:39.156 "uuid": "74733efe-4d6c-488c-bbee-969b9b945ea5", 00:20:39.156 "strip_size_kb": 0, 00:20:39.156 "state": "online", 00:20:39.156 "raid_level": "raid1", 00:20:39.156 "superblock": true, 00:20:39.156 "num_base_bdevs": 3, 00:20:39.156 "num_base_bdevs_discovered": 3, 00:20:39.156 "num_base_bdevs_operational": 3, 00:20:39.156 "base_bdevs_list": [ 00:20:39.156 { 00:20:39.156 "name": "BaseBdev1", 00:20:39.156 "uuid": "7bd33bb3-0cfb-44dd-a3b0-e6133e8bced9", 00:20:39.156 "is_configured": true, 00:20:39.156 "data_offset": 2048, 00:20:39.156 "data_size": 63488 00:20:39.156 }, 00:20:39.156 { 00:20:39.156 "name": "BaseBdev2", 00:20:39.156 "uuid": "503eba5d-4f25-4ea5-99ef-726a06cdb84e", 00:20:39.156 "is_configured": true, 00:20:39.157 "data_offset": 2048, 00:20:39.157 "data_size": 63488 00:20:39.157 }, 00:20:39.157 { 00:20:39.157 "name": "BaseBdev3", 00:20:39.157 "uuid": "3e4d3650-fedc-4172-b2c0-389d5dd0ba3f", 00:20:39.157 "is_configured": true, 00:20:39.157 "data_offset": 2048, 00:20:39.157 "data_size": 63488 00:20:39.157 } 00:20:39.157 ] 00:20:39.157 } 00:20:39.157 } 00:20:39.157 }' 00:20:39.157 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:39.414 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:39.414 BaseBdev2 00:20:39.414 BaseBdev3' 00:20:39.414 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:39.414 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:39.414 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:39.672 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:39.672 "name": "BaseBdev1", 00:20:39.672 "aliases": [ 00:20:39.672 "7bd33bb3-0cfb-44dd-a3b0-e6133e8bced9" 00:20:39.672 ], 00:20:39.672 "product_name": "Malloc disk", 00:20:39.672 "block_size": 512, 00:20:39.672 "num_blocks": 65536, 00:20:39.672 "uuid": "7bd33bb3-0cfb-44dd-a3b0-e6133e8bced9", 00:20:39.672 "assigned_rate_limits": { 00:20:39.672 "rw_ios_per_sec": 0, 00:20:39.672 "rw_mbytes_per_sec": 0, 00:20:39.672 "r_mbytes_per_sec": 0, 00:20:39.672 "w_mbytes_per_sec": 0 00:20:39.672 }, 00:20:39.672 "claimed": true, 00:20:39.672 "claim_type": "exclusive_write", 00:20:39.672 "zoned": false, 00:20:39.672 "supported_io_types": { 00:20:39.672 "read": true, 00:20:39.672 "write": true, 00:20:39.672 "unmap": true, 00:20:39.672 "write_zeroes": true, 00:20:39.672 "flush": true, 
00:20:39.672 "reset": true, 00:20:39.672 "compare": false, 00:20:39.672 "compare_and_write": false, 00:20:39.672 "abort": true, 00:20:39.672 "nvme_admin": false, 00:20:39.672 "nvme_io": false 00:20:39.672 }, 00:20:39.672 "memory_domains": [ 00:20:39.672 { 00:20:39.672 "dma_device_id": "system", 00:20:39.672 "dma_device_type": 1 00:20:39.672 }, 00:20:39.672 { 00:20:39.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.672 "dma_device_type": 2 00:20:39.672 } 00:20:39.672 ], 00:20:39.672 "driver_specific": {} 00:20:39.672 }' 00:20:39.672 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:39.672 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:39.672 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:39.672 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.672 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.672 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:39.672 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.930 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.930 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:39.930 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:39.930 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:39.930 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:39.930 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:39.930 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:39.930 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:40.188 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:40.188 "name": "BaseBdev2", 00:20:40.188 "aliases": [ 00:20:40.188 "503eba5d-4f25-4ea5-99ef-726a06cdb84e" 00:20:40.188 ], 00:20:40.188 "product_name": "Malloc disk", 00:20:40.188 "block_size": 512, 00:20:40.188 "num_blocks": 65536, 00:20:40.188 "uuid": "503eba5d-4f25-4ea5-99ef-726a06cdb84e", 00:20:40.188 "assigned_rate_limits": { 00:20:40.188 "rw_ios_per_sec": 0, 00:20:40.188 "rw_mbytes_per_sec": 0, 00:20:40.188 "r_mbytes_per_sec": 0, 00:20:40.188 "w_mbytes_per_sec": 0 00:20:40.188 }, 00:20:40.188 "claimed": true, 00:20:40.188 "claim_type": "exclusive_write", 00:20:40.188 "zoned": false, 00:20:40.188 "supported_io_types": { 00:20:40.188 "read": true, 00:20:40.188 "write": true, 00:20:40.188 "unmap": true, 00:20:40.188 "write_zeroes": true, 00:20:40.188 "flush": true, 00:20:40.188 "reset": true, 00:20:40.188 "compare": false, 00:20:40.188 "compare_and_write": false, 00:20:40.188 "abort": true, 00:20:40.188 "nvme_admin": false, 00:20:40.188 "nvme_io": false 00:20:40.188 }, 00:20:40.188 "memory_domains": [ 00:20:40.188 { 00:20:40.188 "dma_device_id": "system", 00:20:40.188 "dma_device_type": 1 00:20:40.189 }, 00:20:40.189 { 00:20:40.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.189 "dma_device_type": 2 
00:20:40.189 } 00:20:40.189 ], 00:20:40.189 "driver_specific": {} 00:20:40.189 }' 00:20:40.189 07:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.189 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.189 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:40.189 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.447 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.447 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:40.447 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:40.447 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:40.447 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:40.447 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:40.447 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:40.447 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:40.447 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:40.447 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:40.447 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:40.705 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:40.706 "name": "BaseBdev3", 00:20:40.706 "aliases": [ 00:20:40.706 "3e4d3650-fedc-4172-b2c0-389d5dd0ba3f" 00:20:40.706 ], 00:20:40.706 "product_name": "Malloc disk", 00:20:40.706 "block_size": 512, 00:20:40.706 "num_blocks": 65536, 00:20:40.706 "uuid": "3e4d3650-fedc-4172-b2c0-389d5dd0ba3f", 00:20:40.706 "assigned_rate_limits": { 00:20:40.706 "rw_ios_per_sec": 0, 00:20:40.706 "rw_mbytes_per_sec": 0, 00:20:40.706 "r_mbytes_per_sec": 0, 00:20:40.706 "w_mbytes_per_sec": 0 00:20:40.706 }, 00:20:40.706 "claimed": true, 00:20:40.706 "claim_type": "exclusive_write", 00:20:40.706 "zoned": false, 00:20:40.706 "supported_io_types": { 00:20:40.706 "read": true, 00:20:40.706 "write": true, 00:20:40.706 "unmap": true, 00:20:40.706 "write_zeroes": true, 00:20:40.706 "flush": true, 00:20:40.706 "reset": true, 00:20:40.706 "compare": false, 00:20:40.706 "compare_and_write": false, 00:20:40.706 "abort": true, 00:20:40.706 "nvme_admin": false, 00:20:40.706 "nvme_io": false 00:20:40.706 }, 00:20:40.706 "memory_domains": [ 00:20:40.706 { 00:20:40.706 "dma_device_id": "system", 00:20:40.706 "dma_device_type": 1 00:20:40.706 }, 00:20:40.706 { 00:20:40.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.706 "dma_device_type": 2 00:20:40.706 } 00:20:40.706 ], 00:20:40.706 "driver_specific": {} 00:20:40.706 }' 00:20:40.706 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.963 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:40.963 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:40.963 07:31:14 
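The run of jq probes here (.block_size, .md_size, .md_interleave, .dif_type, repeated for each member) is verify_raid_bdev_properties confirming that every configured base bdev matches the raid volume's geometry. A compact sketch of the same loop, assuming the socket from the trace:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    names=$($RPC bdev_get_bdevs -b Existed_Raid \
        | jq -r '.[].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
    for name in $names; do
        info=$($RPC bdev_get_bdevs -b "$name" | jq '.[]')
        # Malloc members report 512-byte blocks and no metadata/DIF, as the trace asserts
        [[ $(jq .block_size <<< "$info") == 512 ]]     || echo "$name: bad block_size"
        [[ $(jq .md_size <<< "$info") == null ]]       || echo "$name: unexpected md_size"
        [[ $(jq .md_interleave <<< "$info") == null ]] || echo "$name: unexpected md_interleave"
        [[ $(jq .dif_type <<< "$info") == null ]]      || echo "$name: unexpected dif_type"
    done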
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.963 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:40.963 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:40.964 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:40.964 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:40.964 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:40.964 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:41.222 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:41.222 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:41.222 07:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:41.480 [2024-07-12 07:31:15.210388] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:41.480 07:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:41.480 07:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:41.480 07:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:41.480 07:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:20:41.480 07:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:41.480 07:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:41.480 07:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:41.480 07:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:41.480 07:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:41.480 07:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:41.480 07:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:41.480 07:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:41.480 07:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:41.480 07:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:41.480 07:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:41.480 07:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.480 07:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.739 07:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:41.739 "name": "Existed_Raid", 00:20:41.739 "uuid": "74733efe-4d6c-488c-bbee-969b9b945ea5", 00:20:41.739 "strip_size_kb": 0, 00:20:41.739 "state": "online", 00:20:41.739 
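Because raid1 carries redundancy (has_redundancy returns success for it, so expected_state stays online), deleting a single member must leave the array serving I/O with two of three members, which the "online raid1 0 2" verification above asserts. A sketch of that probe:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_malloc_delete BaseBdev1
    # raid1 survives a single loss: expect "online 2"
    $RPC bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)"'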
"raid_level": "raid1", 00:20:41.739 "superblock": true, 00:20:41.739 "num_base_bdevs": 3, 00:20:41.739 "num_base_bdevs_discovered": 2, 00:20:41.739 "num_base_bdevs_operational": 2, 00:20:41.739 "base_bdevs_list": [ 00:20:41.739 { 00:20:41.739 "name": null, 00:20:41.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.739 "is_configured": false, 00:20:41.739 "data_offset": 2048, 00:20:41.739 "data_size": 63488 00:20:41.739 }, 00:20:41.739 { 00:20:41.739 "name": "BaseBdev2", 00:20:41.739 "uuid": "503eba5d-4f25-4ea5-99ef-726a06cdb84e", 00:20:41.739 "is_configured": true, 00:20:41.739 "data_offset": 2048, 00:20:41.739 "data_size": 63488 00:20:41.739 }, 00:20:41.739 { 00:20:41.739 "name": "BaseBdev3", 00:20:41.739 "uuid": "3e4d3650-fedc-4172-b2c0-389d5dd0ba3f", 00:20:41.739 "is_configured": true, 00:20:41.739 "data_offset": 2048, 00:20:41.739 "data_size": 63488 00:20:41.739 } 00:20:41.739 ] 00:20:41.739 }' 00:20:41.739 07:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:41.739 07:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.307 07:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:42.307 07:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:42.307 07:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.307 07:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:42.567 07:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:42.567 07:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:42.567 07:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:42.832 [2024-07-12 07:31:16.644536] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:42.832 07:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:42.832 07:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:42.832 07:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.832 07:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:43.107 07:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:43.107 07:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:43.107 07:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:43.366 [2024-07-12 07:31:17.158525] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:43.366 [2024-07-12 07:31:17.158842] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:43.366 [2024-07-12 07:31:17.180493] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:43.366 [2024-07-12 07:31:17.180732] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:43.366 [2024-07-12 07:31:17.180869] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:20:43.366 07:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:43.366 07:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:43.366 07:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.366 07:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:43.625 07:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:43.625 07:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:43.625 07:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:43.625 07:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:43.625 07:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:43.625 07:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:43.884 BaseBdev2 00:20:43.884 07:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:43.884 07:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:20:43.884 07:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:43.884 07:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:43.884 07:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:43.884 07:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:43.884 07:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:44.143 07:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:44.401 [ 00:20:44.401 { 00:20:44.401 "name": "BaseBdev2", 00:20:44.401 "aliases": [ 00:20:44.401 "48d35b70-0333-41ab-85af-8532107aaa23" 00:20:44.401 ], 00:20:44.401 "product_name": "Malloc disk", 00:20:44.401 "block_size": 512, 00:20:44.401 "num_blocks": 65536, 00:20:44.401 "uuid": "48d35b70-0333-41ab-85af-8532107aaa23", 00:20:44.401 "assigned_rate_limits": { 00:20:44.401 "rw_ios_per_sec": 0, 00:20:44.401 "rw_mbytes_per_sec": 0, 00:20:44.401 "r_mbytes_per_sec": 0, 00:20:44.401 "w_mbytes_per_sec": 0 00:20:44.401 }, 00:20:44.401 "claimed": false, 00:20:44.401 "zoned": false, 00:20:44.401 "supported_io_types": { 00:20:44.401 "read": true, 00:20:44.401 "write": true, 00:20:44.401 "unmap": true, 00:20:44.401 "write_zeroes": true, 00:20:44.401 "flush": true, 00:20:44.401 "reset": true, 00:20:44.401 "compare": false, 00:20:44.401 "compare_and_write": false, 00:20:44.401 "abort": true, 00:20:44.401 "nvme_admin": false, 
00:20:44.401 "nvme_io": false 00:20:44.401 }, 00:20:44.401 "memory_domains": [ 00:20:44.401 { 00:20:44.401 "dma_device_id": "system", 00:20:44.401 "dma_device_type": 1 00:20:44.401 }, 00:20:44.401 { 00:20:44.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.401 "dma_device_type": 2 00:20:44.401 } 00:20:44.401 ], 00:20:44.401 "driver_specific": {} 00:20:44.401 } 00:20:44.401 ] 00:20:44.401 07:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:44.401 07:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:44.401 07:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:44.401 07:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:44.660 BaseBdev3 00:20:44.660 07:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:44.660 07:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:20:44.660 07:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:44.660 07:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:44.660 07:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:44.660 07:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:44.660 07:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:44.919 07:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:45.177 [ 00:20:45.177 { 00:20:45.177 "name": "BaseBdev3", 00:20:45.177 "aliases": [ 00:20:45.177 "ac6f3630-04f6-4a58-ae10-8ad0ab1a4401" 00:20:45.177 ], 00:20:45.177 "product_name": "Malloc disk", 00:20:45.177 "block_size": 512, 00:20:45.177 "num_blocks": 65536, 00:20:45.177 "uuid": "ac6f3630-04f6-4a58-ae10-8ad0ab1a4401", 00:20:45.177 "assigned_rate_limits": { 00:20:45.177 "rw_ios_per_sec": 0, 00:20:45.177 "rw_mbytes_per_sec": 0, 00:20:45.177 "r_mbytes_per_sec": 0, 00:20:45.177 "w_mbytes_per_sec": 0 00:20:45.177 }, 00:20:45.177 "claimed": false, 00:20:45.177 "zoned": false, 00:20:45.177 "supported_io_types": { 00:20:45.177 "read": true, 00:20:45.177 "write": true, 00:20:45.178 "unmap": true, 00:20:45.178 "write_zeroes": true, 00:20:45.178 "flush": true, 00:20:45.178 "reset": true, 00:20:45.178 "compare": false, 00:20:45.178 "compare_and_write": false, 00:20:45.178 "abort": true, 00:20:45.178 "nvme_admin": false, 00:20:45.178 "nvme_io": false 00:20:45.178 }, 00:20:45.178 "memory_domains": [ 00:20:45.178 { 00:20:45.178 "dma_device_id": "system", 00:20:45.178 "dma_device_type": 1 00:20:45.178 }, 00:20:45.178 { 00:20:45.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.178 "dma_device_type": 2 00:20:45.178 } 00:20:45.178 ], 00:20:45.178 "driver_specific": {} 00:20:45.178 } 00:20:45.178 ] 00:20:45.178 07:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:45.178 07:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:45.178 07:31:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:45.178 07:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:45.436 [2024-07-12 07:31:19.132296] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:45.436 [2024-07-12 07:31:19.132637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:45.436 [2024-07-12 07:31:19.132757] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:45.436 [2024-07-12 07:31:19.135224] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:45.436 07:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:45.436 07:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:45.436 07:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:45.436 07:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:45.436 07:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:45.436 07:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:45.436 07:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:45.436 07:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:45.436 07:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:45.436 07:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:45.436 07:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.436 07:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.719 07:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:45.720 "name": "Existed_Raid", 00:20:45.720 "uuid": "b165b5fc-7027-4f01-8103-0bd55115e13f", 00:20:45.720 "strip_size_kb": 0, 00:20:45.720 "state": "configuring", 00:20:45.720 "raid_level": "raid1", 00:20:45.720 "superblock": true, 00:20:45.720 "num_base_bdevs": 3, 00:20:45.720 "num_base_bdevs_discovered": 2, 00:20:45.720 "num_base_bdevs_operational": 3, 00:20:45.720 "base_bdevs_list": [ 00:20:45.720 { 00:20:45.720 "name": "BaseBdev1", 00:20:45.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.720 "is_configured": false, 00:20:45.720 "data_offset": 0, 00:20:45.720 "data_size": 0 00:20:45.720 }, 00:20:45.720 { 00:20:45.720 "name": "BaseBdev2", 00:20:45.720 "uuid": "48d35b70-0333-41ab-85af-8532107aaa23", 00:20:45.720 "is_configured": true, 00:20:45.720 "data_offset": 2048, 00:20:45.720 "data_size": 63488 00:20:45.720 }, 00:20:45.720 { 00:20:45.720 "name": "BaseBdev3", 00:20:45.720 "uuid": "ac6f3630-04f6-4a58-ae10-8ad0ab1a4401", 00:20:45.720 "is_configured": true, 00:20:45.720 "data_offset": 2048, 00:20:45.720 "data_size": 63488 00:20:45.720 } 00:20:45.720 ] 
00:20:45.720 }' 00:20:45.720 07:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:45.720 07:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.285 07:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:46.543 [2024-07-12 07:31:20.260543] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:46.543 07:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:46.543 07:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:46.543 07:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:46.543 07:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:46.543 07:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:46.543 07:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:46.543 07:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:46.543 07:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:46.543 07:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:46.543 07:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:46.543 07:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.543 07:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.800 07:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:46.800 "name": "Existed_Raid", 00:20:46.800 "uuid": "b165b5fc-7027-4f01-8103-0bd55115e13f", 00:20:46.800 "strip_size_kb": 0, 00:20:46.800 "state": "configuring", 00:20:46.800 "raid_level": "raid1", 00:20:46.800 "superblock": true, 00:20:46.800 "num_base_bdevs": 3, 00:20:46.800 "num_base_bdevs_discovered": 1, 00:20:46.800 "num_base_bdevs_operational": 3, 00:20:46.800 "base_bdevs_list": [ 00:20:46.800 { 00:20:46.800 "name": "BaseBdev1", 00:20:46.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.800 "is_configured": false, 00:20:46.800 "data_offset": 0, 00:20:46.800 "data_size": 0 00:20:46.800 }, 00:20:46.800 { 00:20:46.800 "name": null, 00:20:46.800 "uuid": "48d35b70-0333-41ab-85af-8532107aaa23", 00:20:46.800 "is_configured": false, 00:20:46.800 "data_offset": 2048, 00:20:46.800 "data_size": 63488 00:20:46.800 }, 00:20:46.800 { 00:20:46.800 "name": "BaseBdev3", 00:20:46.800 "uuid": "ac6f3630-04f6-4a58-ae10-8ad0ab1a4401", 00:20:46.800 "is_configured": true, 00:20:46.800 "data_offset": 2048, 00:20:46.800 "data_size": 63488 00:20:46.800 } 00:20:46.800 ] 00:20:46.800 }' 00:20:46.800 07:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:46.800 07:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.363 07:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.363 07:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:47.620 07:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:47.620 07:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:47.876 [2024-07-12 07:31:21.526226] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:47.876 BaseBdev1 00:20:47.876 07:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:47.876 07:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:20:47.876 07:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:47.876 07:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:47.876 07:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:47.876 07:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:47.876 07:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:48.134 07:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:48.134 [ 00:20:48.134 { 00:20:48.134 "name": "BaseBdev1", 00:20:48.134 "aliases": [ 00:20:48.134 "546714d7-c959-461c-87e6-9eadf2fae041" 00:20:48.134 ], 00:20:48.134 "product_name": "Malloc disk", 00:20:48.134 "block_size": 512, 00:20:48.134 "num_blocks": 65536, 00:20:48.134 "uuid": "546714d7-c959-461c-87e6-9eadf2fae041", 00:20:48.134 "assigned_rate_limits": { 00:20:48.134 "rw_ios_per_sec": 0, 00:20:48.134 "rw_mbytes_per_sec": 0, 00:20:48.134 "r_mbytes_per_sec": 0, 00:20:48.134 "w_mbytes_per_sec": 0 00:20:48.134 }, 00:20:48.134 "claimed": true, 00:20:48.134 "claim_type": "exclusive_write", 00:20:48.134 "zoned": false, 00:20:48.134 "supported_io_types": { 00:20:48.134 "read": true, 00:20:48.134 "write": true, 00:20:48.134 "unmap": true, 00:20:48.134 "write_zeroes": true, 00:20:48.134 "flush": true, 00:20:48.134 "reset": true, 00:20:48.134 "compare": false, 00:20:48.134 "compare_and_write": false, 00:20:48.134 "abort": true, 00:20:48.134 "nvme_admin": false, 00:20:48.134 "nvme_io": false 00:20:48.134 }, 00:20:48.134 "memory_domains": [ 00:20:48.134 { 00:20:48.134 "dma_device_id": "system", 00:20:48.134 "dma_device_type": 1 00:20:48.134 }, 00:20:48.134 { 00:20:48.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.134 "dma_device_type": 2 00:20:48.134 } 00:20:48.134 ], 00:20:48.134 "driver_specific": {} 00:20:48.134 } 00:20:48.134 ] 00:20:48.134 07:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:48.134 07:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:48.134 07:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:48.134 07:31:21 
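bdev_raid_remove_base_bdev, applied to BaseBdev2 above, detaches one member from the still-configuring raid without collapsing it: the slot stays in base_bdevs_list with a null name and is_configured false, exactly what the '.base_bdevs_list[1].is_configured' probe checks. A sketch of the same step:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_raid_remove_base_bdev BaseBdev2
    # The vacated slot is kept in place, so index 1 still exists but reads unconfigured
    $RPC bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured'   # expect: false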
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:48.134 07:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:48.134 07:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:48.134 07:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:48.134 07:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:48.134 07:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:48.134 07:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:48.134 07:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:48.134 07:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.134 07:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.392 07:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:48.392 "name": "Existed_Raid", 00:20:48.392 "uuid": "b165b5fc-7027-4f01-8103-0bd55115e13f", 00:20:48.392 "strip_size_kb": 0, 00:20:48.392 "state": "configuring", 00:20:48.392 "raid_level": "raid1", 00:20:48.392 "superblock": true, 00:20:48.392 "num_base_bdevs": 3, 00:20:48.392 "num_base_bdevs_discovered": 2, 00:20:48.392 "num_base_bdevs_operational": 3, 00:20:48.392 "base_bdevs_list": [ 00:20:48.392 { 00:20:48.392 "name": "BaseBdev1", 00:20:48.392 "uuid": "546714d7-c959-461c-87e6-9eadf2fae041", 00:20:48.392 "is_configured": true, 00:20:48.392 "data_offset": 2048, 00:20:48.392 "data_size": 63488 00:20:48.392 }, 00:20:48.392 { 00:20:48.392 "name": null, 00:20:48.392 "uuid": "48d35b70-0333-41ab-85af-8532107aaa23", 00:20:48.392 "is_configured": false, 00:20:48.392 "data_offset": 2048, 00:20:48.392 "data_size": 63488 00:20:48.392 }, 00:20:48.392 { 00:20:48.392 "name": "BaseBdev3", 00:20:48.392 "uuid": "ac6f3630-04f6-4a58-ae10-8ad0ab1a4401", 00:20:48.392 "is_configured": true, 00:20:48.392 "data_offset": 2048, 00:20:48.392 "data_size": 63488 00:20:48.392 } 00:20:48.392 ] 00:20:48.392 }' 00:20:48.392 07:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:48.392 07:31:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.958 07:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.958 07:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:49.216 07:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:49.216 07:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:49.474 [2024-07-12 07:31:23.250716] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:49.474 07:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:49.474 07:31:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:49.474 07:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:49.474 07:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:49.474 07:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:49.474 07:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:49.474 07:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:49.474 07:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:49.474 07:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:49.474 07:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:49.474 07:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.474 07:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:49.733 07:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:49.733 "name": "Existed_Raid", 00:20:49.733 "uuid": "b165b5fc-7027-4f01-8103-0bd55115e13f", 00:20:49.733 "strip_size_kb": 0, 00:20:49.733 "state": "configuring", 00:20:49.733 "raid_level": "raid1", 00:20:49.733 "superblock": true, 00:20:49.733 "num_base_bdevs": 3, 00:20:49.733 "num_base_bdevs_discovered": 1, 00:20:49.733 "num_base_bdevs_operational": 3, 00:20:49.733 "base_bdevs_list": [ 00:20:49.733 { 00:20:49.733 "name": "BaseBdev1", 00:20:49.733 "uuid": "546714d7-c959-461c-87e6-9eadf2fae041", 00:20:49.733 "is_configured": true, 00:20:49.733 "data_offset": 2048, 00:20:49.733 "data_size": 63488 00:20:49.733 }, 00:20:49.733 { 00:20:49.733 "name": null, 00:20:49.733 "uuid": "48d35b70-0333-41ab-85af-8532107aaa23", 00:20:49.733 "is_configured": false, 00:20:49.733 "data_offset": 2048, 00:20:49.733 "data_size": 63488 00:20:49.733 }, 00:20:49.733 { 00:20:49.733 "name": null, 00:20:49.733 "uuid": "ac6f3630-04f6-4a58-ae10-8ad0ab1a4401", 00:20:49.733 "is_configured": false, 00:20:49.733 "data_offset": 2048, 00:20:49.733 "data_size": 63488 00:20:49.733 } 00:20:49.733 ] 00:20:49.733 }' 00:20:49.733 07:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:49.733 07:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.300 07:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.300 07:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:50.558 07:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:50.558 07:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:50.817 [2024-07-12 07:31:24.571090] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:50.817 07:31:24 
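
Note: steps @317 and @321 above pull BaseBdev3 out of Existed_Raid and hand it back, while the verify helper checks via a jq filter over bdev_raid_get_bdevs that the array never leaves the "configuring" state (num_base_bdevs_discovered drops to 1 and climbs back to 2). A hedged sketch of that round trip outside the harness:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Pull a base bdev out of the array, then hand it back.
    $RPC bdev_raid_remove_base_bdev BaseBdev3
    $RPC bdev_raid_add_base_bdev Existed_Raid BaseBdev3

    # Until every slot is backed, the array must keep reporting "configuring".
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
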
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:50.817 07:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:50.817 07:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:50.817 07:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:50.817 07:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:50.817 07:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:50.817 07:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:50.817 07:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:50.817 07:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:50.817 07:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:50.817 07:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.817 07:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:51.082 07:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:51.082 "name": "Existed_Raid", 00:20:51.082 "uuid": "b165b5fc-7027-4f01-8103-0bd55115e13f", 00:20:51.082 "strip_size_kb": 0, 00:20:51.082 "state": "configuring", 00:20:51.082 "raid_level": "raid1", 00:20:51.082 "superblock": true, 00:20:51.082 "num_base_bdevs": 3, 00:20:51.082 "num_base_bdevs_discovered": 2, 00:20:51.082 "num_base_bdevs_operational": 3, 00:20:51.082 "base_bdevs_list": [ 00:20:51.082 { 00:20:51.082 "name": "BaseBdev1", 00:20:51.082 "uuid": "546714d7-c959-461c-87e6-9eadf2fae041", 00:20:51.082 "is_configured": true, 00:20:51.082 "data_offset": 2048, 00:20:51.082 "data_size": 63488 00:20:51.082 }, 00:20:51.082 { 00:20:51.082 "name": null, 00:20:51.082 "uuid": "48d35b70-0333-41ab-85af-8532107aaa23", 00:20:51.082 "is_configured": false, 00:20:51.082 "data_offset": 2048, 00:20:51.082 "data_size": 63488 00:20:51.082 }, 00:20:51.082 { 00:20:51.082 "name": "BaseBdev3", 00:20:51.082 "uuid": "ac6f3630-04f6-4a58-ae10-8ad0ab1a4401", 00:20:51.082 "is_configured": true, 00:20:51.082 "data_offset": 2048, 00:20:51.082 "data_size": 63488 00:20:51.082 } 00:20:51.082 ] 00:20:51.082 }' 00:20:51.082 07:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:51.082 07:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.025 07:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:52.025 07:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.025 07:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:52.025 07:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:52.283 [2024-07-12 
07:31:26.131530] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:52.541 07:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:52.541 07:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:52.541 07:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:52.541 07:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:52.541 07:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:52.541 07:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:52.541 07:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:52.541 07:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:52.541 07:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:52.541 07:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:52.541 07:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.541 07:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:52.798 07:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:52.798 "name": "Existed_Raid", 00:20:52.798 "uuid": "b165b5fc-7027-4f01-8103-0bd55115e13f", 00:20:52.798 "strip_size_kb": 0, 00:20:52.798 "state": "configuring", 00:20:52.798 "raid_level": "raid1", 00:20:52.798 "superblock": true, 00:20:52.798 "num_base_bdevs": 3, 00:20:52.798 "num_base_bdevs_discovered": 1, 00:20:52.798 "num_base_bdevs_operational": 3, 00:20:52.798 "base_bdevs_list": [ 00:20:52.798 { 00:20:52.798 "name": null, 00:20:52.798 "uuid": "546714d7-c959-461c-87e6-9eadf2fae041", 00:20:52.798 "is_configured": false, 00:20:52.798 "data_offset": 2048, 00:20:52.798 "data_size": 63488 00:20:52.798 }, 00:20:52.798 { 00:20:52.798 "name": null, 00:20:52.798 "uuid": "48d35b70-0333-41ab-85af-8532107aaa23", 00:20:52.798 "is_configured": false, 00:20:52.798 "data_offset": 2048, 00:20:52.798 "data_size": 63488 00:20:52.798 }, 00:20:52.798 { 00:20:52.798 "name": "BaseBdev3", 00:20:52.798 "uuid": "ac6f3630-04f6-4a58-ae10-8ad0ab1a4401", 00:20:52.798 "is_configured": true, 00:20:52.798 "data_offset": 2048, 00:20:52.798 "data_size": 63488 00:20:52.798 } 00:20:52.798 ] 00:20:52.798 }' 00:20:52.798 07:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:52.798 07:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.362 07:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:53.362 07:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.618 07:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:53.618 07:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:53.874 [2024-07-12 07:31:27.737124] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:54.130 07:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:54.131 07:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:54.131 07:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:54.131 07:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:54.131 07:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:54.131 07:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:54.131 07:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:54.131 07:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:54.131 07:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:54.131 07:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:54.131 07:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.131 07:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.388 07:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:54.388 "name": "Existed_Raid", 00:20:54.388 "uuid": "b165b5fc-7027-4f01-8103-0bd55115e13f", 00:20:54.388 "strip_size_kb": 0, 00:20:54.388 "state": "configuring", 00:20:54.388 "raid_level": "raid1", 00:20:54.388 "superblock": true, 00:20:54.388 "num_base_bdevs": 3, 00:20:54.388 "num_base_bdevs_discovered": 2, 00:20:54.388 "num_base_bdevs_operational": 3, 00:20:54.388 "base_bdevs_list": [ 00:20:54.388 { 00:20:54.388 "name": null, 00:20:54.388 "uuid": "546714d7-c959-461c-87e6-9eadf2fae041", 00:20:54.388 "is_configured": false, 00:20:54.388 "data_offset": 2048, 00:20:54.388 "data_size": 63488 00:20:54.388 }, 00:20:54.388 { 00:20:54.388 "name": "BaseBdev2", 00:20:54.388 "uuid": "48d35b70-0333-41ab-85af-8532107aaa23", 00:20:54.388 "is_configured": true, 00:20:54.388 "data_offset": 2048, 00:20:54.388 "data_size": 63488 00:20:54.388 }, 00:20:54.388 { 00:20:54.388 "name": "BaseBdev3", 00:20:54.388 "uuid": "ac6f3630-04f6-4a58-ae10-8ad0ab1a4401", 00:20:54.388 "is_configured": true, 00:20:54.388 "data_offset": 2048, 00:20:54.388 "data_size": 63488 00:20:54.388 } 00:20:54.388 ] 00:20:54.388 }' 00:20:54.388 07:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:54.388 07:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.952 07:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.952 07:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:55.209 07:31:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:55.209 07:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:55.209 07:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.467 07:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 546714d7-c959-461c-87e6-9eadf2fae041 00:20:55.725 [2024-07-12 07:31:29.587469] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:55.725 [2024-07-12 07:31:29.588063] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:20:55.725 [2024-07-12 07:31:29.588214] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:55.725 [2024-07-12 07:31:29.588332] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:20:55.725 [2024-07-12 07:31:29.588884] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:20:55.725 [2024-07-12 07:31:29.588995] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:20:55.725 [2024-07-12 07:31:29.589183] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.725 NewBaseBdev 00:20:55.983 07:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:55.983 07:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:20:55.983 07:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:55.983 07:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:55.983 07:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:55.983 07:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:55.983 07:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:56.242 07:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:56.242 [ 00:20:56.242 { 00:20:56.242 "name": "NewBaseBdev", 00:20:56.242 "aliases": [ 00:20:56.242 "546714d7-c959-461c-87e6-9eadf2fae041" 00:20:56.242 ], 00:20:56.242 "product_name": "Malloc disk", 00:20:56.242 "block_size": 512, 00:20:56.242 "num_blocks": 65536, 00:20:56.242 "uuid": "546714d7-c959-461c-87e6-9eadf2fae041", 00:20:56.242 "assigned_rate_limits": { 00:20:56.242 "rw_ios_per_sec": 0, 00:20:56.242 "rw_mbytes_per_sec": 0, 00:20:56.242 "r_mbytes_per_sec": 0, 00:20:56.242 "w_mbytes_per_sec": 0 00:20:56.242 }, 00:20:56.242 "claimed": true, 00:20:56.242 "claim_type": "exclusive_write", 00:20:56.242 "zoned": false, 00:20:56.242 "supported_io_types": { 00:20:56.242 "read": true, 00:20:56.242 "write": true, 00:20:56.242 "unmap": true, 00:20:56.242 "write_zeroes": true, 00:20:56.242 "flush": true, 00:20:56.242 "reset": true, 00:20:56.242 "compare": false, 00:20:56.242 "compare_and_write": false, 00:20:56.242 
"abort": true, 00:20:56.242 "nvme_admin": false, 00:20:56.242 "nvme_io": false 00:20:56.242 }, 00:20:56.242 "memory_domains": [ 00:20:56.242 { 00:20:56.242 "dma_device_id": "system", 00:20:56.242 "dma_device_type": 1 00:20:56.242 }, 00:20:56.242 { 00:20:56.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.242 "dma_device_type": 2 00:20:56.242 } 00:20:56.242 ], 00:20:56.242 "driver_specific": {} 00:20:56.242 } 00:20:56.242 ] 00:20:56.242 07:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:56.242 07:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:56.242 07:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:56.242 07:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:56.242 07:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:56.242 07:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:56.242 07:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:56.242 07:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:56.242 07:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:56.242 07:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:56.242 07:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:56.242 07:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.242 07:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:56.809 07:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:56.809 "name": "Existed_Raid", 00:20:56.809 "uuid": "b165b5fc-7027-4f01-8103-0bd55115e13f", 00:20:56.809 "strip_size_kb": 0, 00:20:56.809 "state": "online", 00:20:56.809 "raid_level": "raid1", 00:20:56.809 "superblock": true, 00:20:56.809 "num_base_bdevs": 3, 00:20:56.809 "num_base_bdevs_discovered": 3, 00:20:56.809 "num_base_bdevs_operational": 3, 00:20:56.809 "base_bdevs_list": [ 00:20:56.809 { 00:20:56.809 "name": "NewBaseBdev", 00:20:56.809 "uuid": "546714d7-c959-461c-87e6-9eadf2fae041", 00:20:56.809 "is_configured": true, 00:20:56.809 "data_offset": 2048, 00:20:56.809 "data_size": 63488 00:20:56.809 }, 00:20:56.809 { 00:20:56.809 "name": "BaseBdev2", 00:20:56.809 "uuid": "48d35b70-0333-41ab-85af-8532107aaa23", 00:20:56.809 "is_configured": true, 00:20:56.809 "data_offset": 2048, 00:20:56.809 "data_size": 63488 00:20:56.809 }, 00:20:56.809 { 00:20:56.809 "name": "BaseBdev3", 00:20:56.809 "uuid": "ac6f3630-04f6-4a58-ae10-8ad0ab1a4401", 00:20:56.809 "is_configured": true, 00:20:56.809 "data_offset": 2048, 00:20:56.809 "data_size": 63488 00:20:56.809 } 00:20:56.809 ] 00:20:56.809 }' 00:20:56.809 07:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:56.809 07:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.376 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # 
verify_raid_bdev_properties Existed_Raid 00:20:57.376 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:57.376 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:57.376 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:57.376 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:57.376 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:57.376 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:57.376 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:57.635 [2024-07-12 07:31:31.300346] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:57.635 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:57.635 "name": "Existed_Raid", 00:20:57.635 "aliases": [ 00:20:57.635 "b165b5fc-7027-4f01-8103-0bd55115e13f" 00:20:57.635 ], 00:20:57.635 "product_name": "Raid Volume", 00:20:57.635 "block_size": 512, 00:20:57.635 "num_blocks": 63488, 00:20:57.635 "uuid": "b165b5fc-7027-4f01-8103-0bd55115e13f", 00:20:57.635 "assigned_rate_limits": { 00:20:57.635 "rw_ios_per_sec": 0, 00:20:57.635 "rw_mbytes_per_sec": 0, 00:20:57.635 "r_mbytes_per_sec": 0, 00:20:57.635 "w_mbytes_per_sec": 0 00:20:57.635 }, 00:20:57.635 "claimed": false, 00:20:57.635 "zoned": false, 00:20:57.635 "supported_io_types": { 00:20:57.635 "read": true, 00:20:57.635 "write": true, 00:20:57.635 "unmap": false, 00:20:57.635 "write_zeroes": true, 00:20:57.635 "flush": false, 00:20:57.635 "reset": true, 00:20:57.635 "compare": false, 00:20:57.635 "compare_and_write": false, 00:20:57.635 "abort": false, 00:20:57.635 "nvme_admin": false, 00:20:57.635 "nvme_io": false 00:20:57.635 }, 00:20:57.635 "memory_domains": [ 00:20:57.635 { 00:20:57.635 "dma_device_id": "system", 00:20:57.635 "dma_device_type": 1 00:20:57.635 }, 00:20:57.635 { 00:20:57.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.635 "dma_device_type": 2 00:20:57.635 }, 00:20:57.635 { 00:20:57.635 "dma_device_id": "system", 00:20:57.635 "dma_device_type": 1 00:20:57.635 }, 00:20:57.635 { 00:20:57.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.635 "dma_device_type": 2 00:20:57.635 }, 00:20:57.635 { 00:20:57.635 "dma_device_id": "system", 00:20:57.635 "dma_device_type": 1 00:20:57.635 }, 00:20:57.635 { 00:20:57.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.635 "dma_device_type": 2 00:20:57.635 } 00:20:57.635 ], 00:20:57.635 "driver_specific": { 00:20:57.635 "raid": { 00:20:57.635 "uuid": "b165b5fc-7027-4f01-8103-0bd55115e13f", 00:20:57.635 "strip_size_kb": 0, 00:20:57.635 "state": "online", 00:20:57.635 "raid_level": "raid1", 00:20:57.635 "superblock": true, 00:20:57.635 "num_base_bdevs": 3, 00:20:57.635 "num_base_bdevs_discovered": 3, 00:20:57.635 "num_base_bdevs_operational": 3, 00:20:57.635 "base_bdevs_list": [ 00:20:57.635 { 00:20:57.635 "name": "NewBaseBdev", 00:20:57.635 "uuid": "546714d7-c959-461c-87e6-9eadf2fae041", 00:20:57.635 "is_configured": true, 00:20:57.635 "data_offset": 2048, 00:20:57.635 "data_size": 63488 00:20:57.635 }, 00:20:57.635 { 00:20:57.635 "name": "BaseBdev2", 00:20:57.635 "uuid": "48d35b70-0333-41ab-85af-8532107aaa23", 
00:20:57.635 "is_configured": true, 00:20:57.635 "data_offset": 2048, 00:20:57.635 "data_size": 63488 00:20:57.635 }, 00:20:57.635 { 00:20:57.635 "name": "BaseBdev3", 00:20:57.635 "uuid": "ac6f3630-04f6-4a58-ae10-8ad0ab1a4401", 00:20:57.635 "is_configured": true, 00:20:57.635 "data_offset": 2048, 00:20:57.635 "data_size": 63488 00:20:57.635 } 00:20:57.635 ] 00:20:57.635 } 00:20:57.635 } 00:20:57.635 }' 00:20:57.635 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:57.635 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:57.635 BaseBdev2 00:20:57.635 BaseBdev3' 00:20:57.635 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:57.635 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:57.635 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:57.894 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:57.894 "name": "NewBaseBdev", 00:20:57.894 "aliases": [ 00:20:57.894 "546714d7-c959-461c-87e6-9eadf2fae041" 00:20:57.894 ], 00:20:57.894 "product_name": "Malloc disk", 00:20:57.894 "block_size": 512, 00:20:57.894 "num_blocks": 65536, 00:20:57.894 "uuid": "546714d7-c959-461c-87e6-9eadf2fae041", 00:20:57.894 "assigned_rate_limits": { 00:20:57.894 "rw_ios_per_sec": 0, 00:20:57.894 "rw_mbytes_per_sec": 0, 00:20:57.894 "r_mbytes_per_sec": 0, 00:20:57.894 "w_mbytes_per_sec": 0 00:20:57.894 }, 00:20:57.894 "claimed": true, 00:20:57.894 "claim_type": "exclusive_write", 00:20:57.894 "zoned": false, 00:20:57.894 "supported_io_types": { 00:20:57.894 "read": true, 00:20:57.894 "write": true, 00:20:57.894 "unmap": true, 00:20:57.894 "write_zeroes": true, 00:20:57.894 "flush": true, 00:20:57.894 "reset": true, 00:20:57.894 "compare": false, 00:20:57.894 "compare_and_write": false, 00:20:57.894 "abort": true, 00:20:57.894 "nvme_admin": false, 00:20:57.894 "nvme_io": false 00:20:57.894 }, 00:20:57.894 "memory_domains": [ 00:20:57.894 { 00:20:57.894 "dma_device_id": "system", 00:20:57.894 "dma_device_type": 1 00:20:57.894 }, 00:20:57.894 { 00:20:57.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.894 "dma_device_type": 2 00:20:57.894 } 00:20:57.894 ], 00:20:57.894 "driver_specific": {} 00:20:57.894 }' 00:20:57.894 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:57.894 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:57.894 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:57.894 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.152 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.152 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:58.152 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.152 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.152 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:58.152 
07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.152 07:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.152 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:58.152 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:58.411 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:58.411 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:58.411 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:58.411 "name": "BaseBdev2", 00:20:58.411 "aliases": [ 00:20:58.411 "48d35b70-0333-41ab-85af-8532107aaa23" 00:20:58.411 ], 00:20:58.411 "product_name": "Malloc disk", 00:20:58.411 "block_size": 512, 00:20:58.411 "num_blocks": 65536, 00:20:58.411 "uuid": "48d35b70-0333-41ab-85af-8532107aaa23", 00:20:58.411 "assigned_rate_limits": { 00:20:58.411 "rw_ios_per_sec": 0, 00:20:58.411 "rw_mbytes_per_sec": 0, 00:20:58.411 "r_mbytes_per_sec": 0, 00:20:58.411 "w_mbytes_per_sec": 0 00:20:58.411 }, 00:20:58.411 "claimed": true, 00:20:58.411 "claim_type": "exclusive_write", 00:20:58.411 "zoned": false, 00:20:58.411 "supported_io_types": { 00:20:58.411 "read": true, 00:20:58.411 "write": true, 00:20:58.411 "unmap": true, 00:20:58.411 "write_zeroes": true, 00:20:58.411 "flush": true, 00:20:58.411 "reset": true, 00:20:58.411 "compare": false, 00:20:58.411 "compare_and_write": false, 00:20:58.411 "abort": true, 00:20:58.411 "nvme_admin": false, 00:20:58.411 "nvme_io": false 00:20:58.411 }, 00:20:58.411 "memory_domains": [ 00:20:58.411 { 00:20:58.411 "dma_device_id": "system", 00:20:58.411 "dma_device_type": 1 00:20:58.411 }, 00:20:58.411 { 00:20:58.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.411 "dma_device_type": 2 00:20:58.411 } 00:20:58.411 ], 00:20:58.411 "driver_specific": {} 00:20:58.411 }' 00:20:58.411 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.670 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:58.670 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:58.670 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.670 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:58.670 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:58.670 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.670 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:58.929 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:58.929 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.929 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:58.929 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:58.929 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:58.929 07:31:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:58.929 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:59.188 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:59.188 "name": "BaseBdev3", 00:20:59.188 "aliases": [ 00:20:59.188 "ac6f3630-04f6-4a58-ae10-8ad0ab1a4401" 00:20:59.188 ], 00:20:59.188 "product_name": "Malloc disk", 00:20:59.188 "block_size": 512, 00:20:59.188 "num_blocks": 65536, 00:20:59.188 "uuid": "ac6f3630-04f6-4a58-ae10-8ad0ab1a4401", 00:20:59.188 "assigned_rate_limits": { 00:20:59.188 "rw_ios_per_sec": 0, 00:20:59.188 "rw_mbytes_per_sec": 0, 00:20:59.188 "r_mbytes_per_sec": 0, 00:20:59.188 "w_mbytes_per_sec": 0 00:20:59.188 }, 00:20:59.188 "claimed": true, 00:20:59.188 "claim_type": "exclusive_write", 00:20:59.188 "zoned": false, 00:20:59.188 "supported_io_types": { 00:20:59.188 "read": true, 00:20:59.188 "write": true, 00:20:59.188 "unmap": true, 00:20:59.188 "write_zeroes": true, 00:20:59.188 "flush": true, 00:20:59.188 "reset": true, 00:20:59.188 "compare": false, 00:20:59.188 "compare_and_write": false, 00:20:59.188 "abort": true, 00:20:59.188 "nvme_admin": false, 00:20:59.188 "nvme_io": false 00:20:59.188 }, 00:20:59.188 "memory_domains": [ 00:20:59.188 { 00:20:59.189 "dma_device_id": "system", 00:20:59.189 "dma_device_type": 1 00:20:59.189 }, 00:20:59.189 { 00:20:59.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.189 "dma_device_type": 2 00:20:59.189 } 00:20:59.189 ], 00:20:59.189 "driver_specific": {} 00:20:59.189 }' 00:20:59.189 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:59.189 07:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:59.189 07:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:59.189 07:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:59.446 07:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:59.446 07:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:59.446 07:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:59.446 07:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:59.446 07:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:59.446 07:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:59.446 07:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:59.703 07:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:59.703 07:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:59.961 [2024-07-12 07:31:33.596483] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:59.961 [2024-07-12 07:31:33.596771] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:59.961 [2024-07-12 07:31:33.596944] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:59.961 [2024-07-12 
07:31:33.597333] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:59.961 [2024-07-12 07:31:33.597436] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:20:59.961 07:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 142075 00:20:59.961 07:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 142075 ']' 00:20:59.961 07:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 142075 00:20:59.961 07:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:20:59.961 07:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:59.961 07:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 142075 00:20:59.961 killing process with pid 142075 00:20:59.961 07:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:59.961 07:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:59.961 07:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 142075' 00:20:59.961 07:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 142075 00:20:59.961 07:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 142075 00:20:59.961 [2024-07-12 07:31:33.654156] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:59.961 [2024-07-12 07:31:33.714766] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:00.525 07:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:21:00.525 ************************************ 00:21:00.525 END TEST raid_state_function_test_sb 00:21:00.525 ************************************ 00:21:00.525 00:21:00.525 real 0m29.944s 00:21:00.525 user 0m55.091s 00:21:00.525 sys 0m5.114s 00:21:00.525 07:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:00.525 07:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.525 07:31:34 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:21:00.525 07:31:34 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:21:00.525 07:31:34 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:00.525 07:31:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:00.525 ************************************ 00:21:00.525 START TEST raid_superblock_test 00:21:00.525 ************************************ 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 3 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # 
base_bdevs_pt=() 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=143054 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 143054 /var/tmp/spdk-raid.sock 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 143054 ']' 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:00.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:00.525 07:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.525 [2024-07-12 07:31:34.288714] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
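
Note: raid_superblock_test starts by launching a fresh bdev_svc app on the same RPC socket and blocking in waitforlisten until the target answers. Roughly, with waitforlisten's retry and timeout logic reduced to a bare loop:

    sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -L bdev_raid &
    raid_pid=$!

    # Poll until the target accepts RPCs; waitforlisten does the same with
    # bounded retries rather than an unbounded loop.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
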
00:21:00.525 [2024-07-12 07:31:34.289304] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143054 ] 00:21:00.781 [2024-07-12 07:31:34.450840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.781 [2024-07-12 07:31:34.550793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.781 [2024-07-12 07:31:34.638977] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:01.713 07:31:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:01.713 07:31:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:21:01.713 07:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:21:01.713 07:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:01.713 07:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:21:01.713 07:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:21:01.713 07:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:01.713 07:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:01.713 07:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:01.713 07:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:01.713 07:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:01.971 malloc1 00:21:01.971 07:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:02.231 [2024-07-12 07:31:35.929281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:02.231 [2024-07-12 07:31:35.930884] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.231 [2024-07-12 07:31:35.931012] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:21:02.231 [2024-07-12 07:31:35.931180] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.231 [2024-07-12 07:31:35.934392] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.231 [2024-07-12 07:31:35.934600] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:02.231 pt1 00:21:02.231 07:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:02.231 07:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:02.231 07:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:21:02.231 07:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:21:02.231 07:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:02.231 07:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:21:02.231 07:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:02.231 07:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:02.231 07:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:02.489 malloc2 00:21:02.489 07:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:02.775 [2024-07-12 07:31:36.531842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:02.775 [2024-07-12 07:31:36.532145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.775 [2024-07-12 07:31:36.532237] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:21:02.775 [2024-07-12 07:31:36.532414] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.775 [2024-07-12 07:31:36.535698] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.775 [2024-07-12 07:31:36.535883] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:02.775 pt2 00:21:02.775 07:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:02.775 07:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:02.775 07:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:21:02.775 07:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:21:02.775 07:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:02.775 07:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:02.775 07:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:02.775 07:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:02.775 07:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:03.045 malloc3 00:21:03.045 07:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:03.304 [2024-07-12 07:31:37.146276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:03.304 [2024-07-12 07:31:37.146666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.304 [2024-07-12 07:31:37.146757] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:03.304 [2024-07-12 07:31:37.147028] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.304 [2024-07-12 07:31:37.150022] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.304 [2024-07-12 07:31:37.150231] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:03.304 pt3 00:21:03.304 07:31:37 
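
Note: each base device for the superblock test is a malloc bdev wrapped in a passthru bdev with a fixed UUID (pt1 through pt3 above), which gives the superblock stable identities to record; the array itself is then created with -s, as seen just below. A condensed sketch of that setup loop:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    for i in 1 2 3; do
        $RPC bdev_malloc_create 32 512 -b "malloc$i"
        # Wrap each malloc in a passthru bdev with a deterministic UUID so the
        # superblock records a stable identity for the slot.
        $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done

    # -r raid1 selects mirroring; -s asks for a superblock on every base bdev.
    $RPC bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
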
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:03.304 07:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:03.304 07:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:21:03.562 [2024-07-12 07:31:37.442752] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:03.562 [2024-07-12 07:31:37.445746] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:03.562 [2024-07-12 07:31:37.445953] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:03.820 [2024-07-12 07:31:37.446335] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:21:03.820 [2024-07-12 07:31:37.446450] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:03.820 [2024-07-12 07:31:37.446729] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:21:03.820 [2024-07-12 07:31:37.447357] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:21:03.820 [2024-07-12 07:31:37.447481] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:21:03.820 [2024-07-12 07:31:37.447819] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.820 07:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:03.820 07:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:03.820 07:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:03.820 07:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:03.820 07:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:03.820 07:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:03.820 07:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:03.820 07:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:03.820 07:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:03.820 07:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:03.820 07:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.820 07:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.820 07:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:03.820 "name": "raid_bdev1", 00:21:03.820 "uuid": "e79f3089-d96a-47c8-a3de-44214b942b01", 00:21:03.820 "strip_size_kb": 0, 00:21:03.820 "state": "online", 00:21:03.820 "raid_level": "raid1", 00:21:03.820 "superblock": true, 00:21:03.820 "num_base_bdevs": 3, 00:21:03.820 "num_base_bdevs_discovered": 3, 00:21:03.820 "num_base_bdevs_operational": 3, 00:21:03.820 "base_bdevs_list": [ 00:21:03.820 { 00:21:03.820 "name": "pt1", 00:21:03.820 "uuid": "1c648eb3-ee7e-54c3-8110-6c6c0d805874", 00:21:03.820 
"is_configured": true, 00:21:03.820 "data_offset": 2048, 00:21:03.820 "data_size": 63488 00:21:03.820 }, 00:21:03.820 { 00:21:03.820 "name": "pt2", 00:21:03.820 "uuid": "0dfc9b1f-809b-57ee-a852-2ec20b0ed9dd", 00:21:03.820 "is_configured": true, 00:21:03.820 "data_offset": 2048, 00:21:03.820 "data_size": 63488 00:21:03.820 }, 00:21:03.820 { 00:21:03.820 "name": "pt3", 00:21:03.820 "uuid": "137f8263-d459-53ef-8ecc-979611f25832", 00:21:03.820 "is_configured": true, 00:21:03.820 "data_offset": 2048, 00:21:03.820 "data_size": 63488 00:21:03.820 } 00:21:03.820 ] 00:21:03.820 }' 00:21:03.820 07:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:03.820 07:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.756 07:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:21:04.756 07:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:04.756 07:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:04.756 07:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:04.756 07:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:04.756 07:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:04.756 07:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:04.756 07:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:04.756 [2024-07-12 07:31:38.572274] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:04.756 07:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:04.756 "name": "raid_bdev1", 00:21:04.756 "aliases": [ 00:21:04.756 "e79f3089-d96a-47c8-a3de-44214b942b01" 00:21:04.756 ], 00:21:04.756 "product_name": "Raid Volume", 00:21:04.756 "block_size": 512, 00:21:04.756 "num_blocks": 63488, 00:21:04.756 "uuid": "e79f3089-d96a-47c8-a3de-44214b942b01", 00:21:04.756 "assigned_rate_limits": { 00:21:04.756 "rw_ios_per_sec": 0, 00:21:04.756 "rw_mbytes_per_sec": 0, 00:21:04.756 "r_mbytes_per_sec": 0, 00:21:04.756 "w_mbytes_per_sec": 0 00:21:04.756 }, 00:21:04.756 "claimed": false, 00:21:04.756 "zoned": false, 00:21:04.756 "supported_io_types": { 00:21:04.756 "read": true, 00:21:04.756 "write": true, 00:21:04.756 "unmap": false, 00:21:04.756 "write_zeroes": true, 00:21:04.756 "flush": false, 00:21:04.756 "reset": true, 00:21:04.756 "compare": false, 00:21:04.756 "compare_and_write": false, 00:21:04.756 "abort": false, 00:21:04.756 "nvme_admin": false, 00:21:04.756 "nvme_io": false 00:21:04.756 }, 00:21:04.756 "memory_domains": [ 00:21:04.756 { 00:21:04.756 "dma_device_id": "system", 00:21:04.756 "dma_device_type": 1 00:21:04.756 }, 00:21:04.756 { 00:21:04.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.756 "dma_device_type": 2 00:21:04.756 }, 00:21:04.756 { 00:21:04.756 "dma_device_id": "system", 00:21:04.756 "dma_device_type": 1 00:21:04.756 }, 00:21:04.756 { 00:21:04.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.756 "dma_device_type": 2 00:21:04.756 }, 00:21:04.756 { 00:21:04.756 "dma_device_id": "system", 00:21:04.756 "dma_device_type": 1 00:21:04.756 }, 00:21:04.756 { 00:21:04.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.756 
"dma_device_type": 2 00:21:04.756 } 00:21:04.756 ], 00:21:04.756 "driver_specific": { 00:21:04.756 "raid": { 00:21:04.756 "uuid": "e79f3089-d96a-47c8-a3de-44214b942b01", 00:21:04.756 "strip_size_kb": 0, 00:21:04.756 "state": "online", 00:21:04.756 "raid_level": "raid1", 00:21:04.756 "superblock": true, 00:21:04.756 "num_base_bdevs": 3, 00:21:04.756 "num_base_bdevs_discovered": 3, 00:21:04.756 "num_base_bdevs_operational": 3, 00:21:04.756 "base_bdevs_list": [ 00:21:04.756 { 00:21:04.756 "name": "pt1", 00:21:04.756 "uuid": "1c648eb3-ee7e-54c3-8110-6c6c0d805874", 00:21:04.756 "is_configured": true, 00:21:04.756 "data_offset": 2048, 00:21:04.756 "data_size": 63488 00:21:04.756 }, 00:21:04.756 { 00:21:04.756 "name": "pt2", 00:21:04.756 "uuid": "0dfc9b1f-809b-57ee-a852-2ec20b0ed9dd", 00:21:04.756 "is_configured": true, 00:21:04.756 "data_offset": 2048, 00:21:04.756 "data_size": 63488 00:21:04.756 }, 00:21:04.756 { 00:21:04.756 "name": "pt3", 00:21:04.756 "uuid": "137f8263-d459-53ef-8ecc-979611f25832", 00:21:04.756 "is_configured": true, 00:21:04.756 "data_offset": 2048, 00:21:04.756 "data_size": 63488 00:21:04.756 } 00:21:04.756 ] 00:21:04.756 } 00:21:04.756 } 00:21:04.756 }' 00:21:04.756 07:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:04.756 07:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:04.756 pt2 00:21:04.756 pt3' 00:21:04.756 07:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:04.756 07:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:04.756 07:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:05.323 07:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:05.323 "name": "pt1", 00:21:05.323 "aliases": [ 00:21:05.323 "1c648eb3-ee7e-54c3-8110-6c6c0d805874" 00:21:05.323 ], 00:21:05.323 "product_name": "passthru", 00:21:05.323 "block_size": 512, 00:21:05.323 "num_blocks": 65536, 00:21:05.323 "uuid": "1c648eb3-ee7e-54c3-8110-6c6c0d805874", 00:21:05.323 "assigned_rate_limits": { 00:21:05.323 "rw_ios_per_sec": 0, 00:21:05.323 "rw_mbytes_per_sec": 0, 00:21:05.323 "r_mbytes_per_sec": 0, 00:21:05.323 "w_mbytes_per_sec": 0 00:21:05.323 }, 00:21:05.323 "claimed": true, 00:21:05.323 "claim_type": "exclusive_write", 00:21:05.323 "zoned": false, 00:21:05.323 "supported_io_types": { 00:21:05.323 "read": true, 00:21:05.323 "write": true, 00:21:05.323 "unmap": true, 00:21:05.323 "write_zeroes": true, 00:21:05.323 "flush": true, 00:21:05.323 "reset": true, 00:21:05.323 "compare": false, 00:21:05.323 "compare_and_write": false, 00:21:05.323 "abort": true, 00:21:05.323 "nvme_admin": false, 00:21:05.323 "nvme_io": false 00:21:05.323 }, 00:21:05.323 "memory_domains": [ 00:21:05.323 { 00:21:05.323 "dma_device_id": "system", 00:21:05.323 "dma_device_type": 1 00:21:05.323 }, 00:21:05.323 { 00:21:05.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.323 "dma_device_type": 2 00:21:05.323 } 00:21:05.323 ], 00:21:05.323 "driver_specific": { 00:21:05.323 "passthru": { 00:21:05.323 "name": "pt1", 00:21:05.323 "base_bdev_name": "malloc1" 00:21:05.323 } 00:21:05.323 } 00:21:05.323 }' 00:21:05.323 07:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:05.323 07:31:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:05.323 07:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:05.323 07:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:05.323 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:05.323 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:05.323 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:05.323 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:05.323 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:05.323 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:05.323 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:05.581 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:05.581 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:05.581 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:05.581 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:05.581 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:05.581 "name": "pt2", 00:21:05.581 "aliases": [ 00:21:05.581 "0dfc9b1f-809b-57ee-a852-2ec20b0ed9dd" 00:21:05.581 ], 00:21:05.581 "product_name": "passthru", 00:21:05.581 "block_size": 512, 00:21:05.581 "num_blocks": 65536, 00:21:05.581 "uuid": "0dfc9b1f-809b-57ee-a852-2ec20b0ed9dd", 00:21:05.581 "assigned_rate_limits": { 00:21:05.581 "rw_ios_per_sec": 0, 00:21:05.581 "rw_mbytes_per_sec": 0, 00:21:05.581 "r_mbytes_per_sec": 0, 00:21:05.581 "w_mbytes_per_sec": 0 00:21:05.581 }, 00:21:05.581 "claimed": true, 00:21:05.581 "claim_type": "exclusive_write", 00:21:05.581 "zoned": false, 00:21:05.581 "supported_io_types": { 00:21:05.581 "read": true, 00:21:05.581 "write": true, 00:21:05.581 "unmap": true, 00:21:05.581 "write_zeroes": true, 00:21:05.581 "flush": true, 00:21:05.581 "reset": true, 00:21:05.581 "compare": false, 00:21:05.581 "compare_and_write": false, 00:21:05.581 "abort": true, 00:21:05.581 "nvme_admin": false, 00:21:05.581 "nvme_io": false 00:21:05.581 }, 00:21:05.581 "memory_domains": [ 00:21:05.581 { 00:21:05.581 "dma_device_id": "system", 00:21:05.581 "dma_device_type": 1 00:21:05.581 }, 00:21:05.581 { 00:21:05.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.581 "dma_device_type": 2 00:21:05.581 } 00:21:05.581 ], 00:21:05.581 "driver_specific": { 00:21:05.581 "passthru": { 00:21:05.581 "name": "pt2", 00:21:05.581 "base_bdev_name": "malloc2" 00:21:05.581 } 00:21:05.581 } 00:21:05.581 }' 00:21:05.581 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:05.838 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:05.838 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:05.838 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:05.838 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:05.838 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 
null == null ]] 00:21:05.838 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:05.838 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:06.095 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:06.095 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:06.095 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:06.095 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:06.095 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:06.095 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:06.095 07:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:06.359 07:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:06.359 "name": "pt3", 00:21:06.359 "aliases": [ 00:21:06.359 "137f8263-d459-53ef-8ecc-979611f25832" 00:21:06.359 ], 00:21:06.359 "product_name": "passthru", 00:21:06.359 "block_size": 512, 00:21:06.359 "num_blocks": 65536, 00:21:06.359 "uuid": "137f8263-d459-53ef-8ecc-979611f25832", 00:21:06.359 "assigned_rate_limits": { 00:21:06.359 "rw_ios_per_sec": 0, 00:21:06.359 "rw_mbytes_per_sec": 0, 00:21:06.359 "r_mbytes_per_sec": 0, 00:21:06.359 "w_mbytes_per_sec": 0 00:21:06.359 }, 00:21:06.359 "claimed": true, 00:21:06.359 "claim_type": "exclusive_write", 00:21:06.359 "zoned": false, 00:21:06.359 "supported_io_types": { 00:21:06.359 "read": true, 00:21:06.359 "write": true, 00:21:06.359 "unmap": true, 00:21:06.359 "write_zeroes": true, 00:21:06.359 "flush": true, 00:21:06.359 "reset": true, 00:21:06.359 "compare": false, 00:21:06.359 "compare_and_write": false, 00:21:06.359 "abort": true, 00:21:06.359 "nvme_admin": false, 00:21:06.359 "nvme_io": false 00:21:06.359 }, 00:21:06.360 "memory_domains": [ 00:21:06.360 { 00:21:06.360 "dma_device_id": "system", 00:21:06.360 "dma_device_type": 1 00:21:06.360 }, 00:21:06.360 { 00:21:06.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.360 "dma_device_type": 2 00:21:06.360 } 00:21:06.360 ], 00:21:06.360 "driver_specific": { 00:21:06.360 "passthru": { 00:21:06.360 "name": "pt3", 00:21:06.360 "base_bdev_name": "malloc3" 00:21:06.360 } 00:21:06.360 } 00:21:06.360 }' 00:21:06.360 07:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:06.360 07:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:06.360 07:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:06.360 07:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:06.360 07:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:06.360 07:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:06.360 07:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:06.618 07:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:06.618 07:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:06.618 07:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:06.618 07:31:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:06.618 07:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:06.618 07:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:21:06.618 07:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:06.877 [2024-07-12 07:31:40.624704] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:06.877 07:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=e79f3089-d96a-47c8-a3de-44214b942b01 00:21:06.877 07:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z e79f3089-d96a-47c8-a3de-44214b942b01 ']' 00:21:06.877 07:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:07.135 [2024-07-12 07:31:40.892609] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:07.136 [2024-07-12 07:31:40.892888] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:07.136 [2024-07-12 07:31:40.893155] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:07.136 [2024-07-12 07:31:40.893365] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:07.136 [2024-07-12 07:31:40.893454] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:21:07.136 07:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.136 07:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:21:07.394 07:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:21:07.394 07:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:21:07.394 07:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:07.394 07:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:07.654 07:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:07.654 07:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:07.654 07:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:07.654 07:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:07.913 07:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:07.913 07:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:08.172 07:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:21:08.172 07:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:08.172 07:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:21:08.172 07:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:08.172 07:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:08.172 07:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:08.172 07:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:08.172 07:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:08.172 07:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:08.172 07:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:08.172 07:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:08.172 07:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:08.172 07:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:08.432 [2024-07-12 07:31:42.172819] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:08.432 [2024-07-12 07:31:42.175597] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:08.432 [2024-07-12 07:31:42.175786] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:08.432 [2024-07-12 07:31:42.175873] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:08.432 [2024-07-12 07:31:42.176079] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:08.432 [2024-07-12 07:31:42.176146] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:08.432 [2024-07-12 07:31:42.176333] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:08.432 [2024-07-12 07:31:42.176411] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:21:08.432 request: 00:21:08.432 { 00:21:08.432 "name": "raid_bdev1", 00:21:08.432 "raid_level": "raid1", 00:21:08.432 "base_bdevs": [ 00:21:08.432 "malloc1", 00:21:08.432 "malloc2", 00:21:08.432 "malloc3" 00:21:08.432 ], 00:21:08.432 "superblock": false, 00:21:08.432 "method": "bdev_raid_create", 00:21:08.432 "req_id": 1 00:21:08.432 } 00:21:08.432 Got JSON-RPC error response 00:21:08.432 response: 00:21:08.432 { 00:21:08.432 "code": -17, 00:21:08.432 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:08.432 } 00:21:08.432 07:31:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@651 -- # es=1 00:21:08.432 07:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:08.432 07:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:08.432 07:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:08.432 07:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.432 07:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:21:08.691 07:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:21:08.691 07:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:21:08.691 07:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:08.951 [2024-07-12 07:31:42.576907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:08.951 [2024-07-12 07:31:42.577171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.951 [2024-07-12 07:31:42.577321] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:08.951 [2024-07-12 07:31:42.577418] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.951 [2024-07-12 07:31:42.580517] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.951 [2024-07-12 07:31:42.580683] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:08.951 [2024-07-12 07:31:42.580908] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:08.951 [2024-07-12 07:31:42.581054] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:08.951 pt1 00:21:08.951 07:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:08.951 07:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:08.951 07:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:08.951 07:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:08.951 07:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:08.951 07:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:08.951 07:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:08.951 07:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:08.951 07:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:08.951 07:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:08.951 07:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.951 07:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.210 07:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:21:09.210 "name": "raid_bdev1", 00:21:09.211 "uuid": "e79f3089-d96a-47c8-a3de-44214b942b01", 00:21:09.211 "strip_size_kb": 0, 00:21:09.211 "state": "configuring", 00:21:09.211 "raid_level": "raid1", 00:21:09.211 "superblock": true, 00:21:09.211 "num_base_bdevs": 3, 00:21:09.211 "num_base_bdevs_discovered": 1, 00:21:09.211 "num_base_bdevs_operational": 3, 00:21:09.211 "base_bdevs_list": [ 00:21:09.211 { 00:21:09.211 "name": "pt1", 00:21:09.211 "uuid": "1c648eb3-ee7e-54c3-8110-6c6c0d805874", 00:21:09.211 "is_configured": true, 00:21:09.211 "data_offset": 2048, 00:21:09.211 "data_size": 63488 00:21:09.211 }, 00:21:09.211 { 00:21:09.211 "name": null, 00:21:09.211 "uuid": "0dfc9b1f-809b-57ee-a852-2ec20b0ed9dd", 00:21:09.211 "is_configured": false, 00:21:09.211 "data_offset": 2048, 00:21:09.211 "data_size": 63488 00:21:09.211 }, 00:21:09.211 { 00:21:09.211 "name": null, 00:21:09.211 "uuid": "137f8263-d459-53ef-8ecc-979611f25832", 00:21:09.211 "is_configured": false, 00:21:09.211 "data_offset": 2048, 00:21:09.211 "data_size": 63488 00:21:09.211 } 00:21:09.211 ] 00:21:09.211 }' 00:21:09.211 07:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:09.211 07:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.780 07:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:21:09.780 07:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:10.038 [2024-07-12 07:31:43.693239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:10.038 [2024-07-12 07:31:43.693514] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:10.038 [2024-07-12 07:31:43.693667] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:21:10.038 [2024-07-12 07:31:43.693781] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:10.038 [2024-07-12 07:31:43.694303] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:10.038 [2024-07-12 07:31:43.694446] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:10.038 [2024-07-12 07:31:43.694680] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:10.038 [2024-07-12 07:31:43.694804] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:10.038 pt2 00:21:10.038 07:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:10.038 [2024-07-12 07:31:43.897321] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:10.038 07:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:10.038 07:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:10.038 07:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:10.038 07:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:10.038 07:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:10.038 07:31:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:10.038 07:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:10.038 07:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:10.038 07:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:10.038 07:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:10.038 07:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.297 07:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.556 07:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:10.556 "name": "raid_bdev1", 00:21:10.556 "uuid": "e79f3089-d96a-47c8-a3de-44214b942b01", 00:21:10.556 "strip_size_kb": 0, 00:21:10.556 "state": "configuring", 00:21:10.556 "raid_level": "raid1", 00:21:10.556 "superblock": true, 00:21:10.556 "num_base_bdevs": 3, 00:21:10.556 "num_base_bdevs_discovered": 1, 00:21:10.556 "num_base_bdevs_operational": 3, 00:21:10.556 "base_bdevs_list": [ 00:21:10.556 { 00:21:10.556 "name": "pt1", 00:21:10.556 "uuid": "1c648eb3-ee7e-54c3-8110-6c6c0d805874", 00:21:10.556 "is_configured": true, 00:21:10.556 "data_offset": 2048, 00:21:10.556 "data_size": 63488 00:21:10.556 }, 00:21:10.556 { 00:21:10.556 "name": null, 00:21:10.556 "uuid": "0dfc9b1f-809b-57ee-a852-2ec20b0ed9dd", 00:21:10.556 "is_configured": false, 00:21:10.556 "data_offset": 2048, 00:21:10.556 "data_size": 63488 00:21:10.556 }, 00:21:10.556 { 00:21:10.556 "name": null, 00:21:10.556 "uuid": "137f8263-d459-53ef-8ecc-979611f25832", 00:21:10.556 "is_configured": false, 00:21:10.556 "data_offset": 2048, 00:21:10.556 "data_size": 63488 00:21:10.556 } 00:21:10.556 ] 00:21:10.556 }' 00:21:10.556 07:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:10.556 07:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.122 07:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:21:11.122 07:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:11.122 07:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:11.122 [2024-07-12 07:31:44.965468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:11.122 [2024-07-12 07:31:44.966048] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.122 [2024-07-12 07:31:44.966131] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:11.122 [2024-07-12 07:31:44.966261] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.122 [2024-07-12 07:31:44.966797] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.122 [2024-07-12 07:31:44.966948] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:11.122 [2024-07-12 07:31:44.967174] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:11.122 [2024-07-12 07:31:44.967291] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt2 is claimed 00:21:11.122 pt2 00:21:11.123 07:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:11.123 07:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:11.123 07:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:11.381 [2024-07-12 07:31:45.213534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:11.381 [2024-07-12 07:31:45.213870] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.381 [2024-07-12 07:31:45.213956] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:11.381 [2024-07-12 07:31:45.214079] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.381 [2024-07-12 07:31:45.214623] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.381 [2024-07-12 07:31:45.214784] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:11.381 [2024-07-12 07:31:45.215003] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:11.381 [2024-07-12 07:31:45.215159] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:11.381 [2024-07-12 07:31:45.215366] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:21:11.381 [2024-07-12 07:31:45.215462] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:11.381 [2024-07-12 07:31:45.215578] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:21:11.381 [2024-07-12 07:31:45.216073] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:21:11.381 [2024-07-12 07:31:45.216189] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:21:11.381 [2024-07-12 07:31:45.216392] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:11.381 pt3 00:21:11.381 07:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:11.381 07:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:11.381 07:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:11.381 07:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:11.381 07:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:11.381 07:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:11.381 07:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:11.382 07:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:11.382 07:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:11.382 07:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:11.382 07:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:11.382 07:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:11.382 
07:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.382 07:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.641 07:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:11.641 "name": "raid_bdev1", 00:21:11.641 "uuid": "e79f3089-d96a-47c8-a3de-44214b942b01", 00:21:11.641 "strip_size_kb": 0, 00:21:11.641 "state": "online", 00:21:11.641 "raid_level": "raid1", 00:21:11.641 "superblock": true, 00:21:11.641 "num_base_bdevs": 3, 00:21:11.641 "num_base_bdevs_discovered": 3, 00:21:11.641 "num_base_bdevs_operational": 3, 00:21:11.641 "base_bdevs_list": [ 00:21:11.641 { 00:21:11.641 "name": "pt1", 00:21:11.641 "uuid": "1c648eb3-ee7e-54c3-8110-6c6c0d805874", 00:21:11.641 "is_configured": true, 00:21:11.641 "data_offset": 2048, 00:21:11.641 "data_size": 63488 00:21:11.641 }, 00:21:11.641 { 00:21:11.641 "name": "pt2", 00:21:11.641 "uuid": "0dfc9b1f-809b-57ee-a852-2ec20b0ed9dd", 00:21:11.641 "is_configured": true, 00:21:11.641 "data_offset": 2048, 00:21:11.641 "data_size": 63488 00:21:11.641 }, 00:21:11.641 { 00:21:11.641 "name": "pt3", 00:21:11.641 "uuid": "137f8263-d459-53ef-8ecc-979611f25832", 00:21:11.641 "is_configured": true, 00:21:11.641 "data_offset": 2048, 00:21:11.641 "data_size": 63488 00:21:11.641 } 00:21:11.641 ] 00:21:11.641 }' 00:21:11.641 07:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:11.641 07:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.208 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:21:12.208 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:12.208 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:12.208 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:12.208 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:12.208 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:12.208 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:12.208 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:12.466 [2024-07-12 07:31:46.241995] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:12.466 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:12.466 "name": "raid_bdev1", 00:21:12.466 "aliases": [ 00:21:12.466 "e79f3089-d96a-47c8-a3de-44214b942b01" 00:21:12.466 ], 00:21:12.466 "product_name": "Raid Volume", 00:21:12.466 "block_size": 512, 00:21:12.466 "num_blocks": 63488, 00:21:12.466 "uuid": "e79f3089-d96a-47c8-a3de-44214b942b01", 00:21:12.466 "assigned_rate_limits": { 00:21:12.466 "rw_ios_per_sec": 0, 00:21:12.466 "rw_mbytes_per_sec": 0, 00:21:12.466 "r_mbytes_per_sec": 0, 00:21:12.466 "w_mbytes_per_sec": 0 00:21:12.466 }, 00:21:12.466 "claimed": false, 00:21:12.466 "zoned": false, 00:21:12.466 "supported_io_types": { 00:21:12.466 "read": true, 00:21:12.466 "write": true, 00:21:12.466 "unmap": false, 00:21:12.466 "write_zeroes": true, 00:21:12.466 "flush": 
false, 00:21:12.466 "reset": true, 00:21:12.466 "compare": false, 00:21:12.466 "compare_and_write": false, 00:21:12.466 "abort": false, 00:21:12.466 "nvme_admin": false, 00:21:12.466 "nvme_io": false 00:21:12.466 }, 00:21:12.466 "memory_domains": [ 00:21:12.466 { 00:21:12.466 "dma_device_id": "system", 00:21:12.466 "dma_device_type": 1 00:21:12.466 }, 00:21:12.466 { 00:21:12.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.466 "dma_device_type": 2 00:21:12.466 }, 00:21:12.466 { 00:21:12.466 "dma_device_id": "system", 00:21:12.466 "dma_device_type": 1 00:21:12.466 }, 00:21:12.466 { 00:21:12.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.466 "dma_device_type": 2 00:21:12.466 }, 00:21:12.466 { 00:21:12.466 "dma_device_id": "system", 00:21:12.466 "dma_device_type": 1 00:21:12.466 }, 00:21:12.466 { 00:21:12.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.466 "dma_device_type": 2 00:21:12.466 } 00:21:12.466 ], 00:21:12.466 "driver_specific": { 00:21:12.466 "raid": { 00:21:12.466 "uuid": "e79f3089-d96a-47c8-a3de-44214b942b01", 00:21:12.466 "strip_size_kb": 0, 00:21:12.466 "state": "online", 00:21:12.466 "raid_level": "raid1", 00:21:12.466 "superblock": true, 00:21:12.466 "num_base_bdevs": 3, 00:21:12.466 "num_base_bdevs_discovered": 3, 00:21:12.466 "num_base_bdevs_operational": 3, 00:21:12.466 "base_bdevs_list": [ 00:21:12.466 { 00:21:12.466 "name": "pt1", 00:21:12.466 "uuid": "1c648eb3-ee7e-54c3-8110-6c6c0d805874", 00:21:12.466 "is_configured": true, 00:21:12.466 "data_offset": 2048, 00:21:12.466 "data_size": 63488 00:21:12.466 }, 00:21:12.466 { 00:21:12.466 "name": "pt2", 00:21:12.466 "uuid": "0dfc9b1f-809b-57ee-a852-2ec20b0ed9dd", 00:21:12.466 "is_configured": true, 00:21:12.466 "data_offset": 2048, 00:21:12.466 "data_size": 63488 00:21:12.466 }, 00:21:12.466 { 00:21:12.466 "name": "pt3", 00:21:12.466 "uuid": "137f8263-d459-53ef-8ecc-979611f25832", 00:21:12.466 "is_configured": true, 00:21:12.466 "data_offset": 2048, 00:21:12.466 "data_size": 63488 00:21:12.466 } 00:21:12.466 ] 00:21:12.467 } 00:21:12.467 } 00:21:12.467 }' 00:21:12.467 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:12.467 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:12.467 pt2 00:21:12.467 pt3' 00:21:12.467 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:12.467 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:12.467 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:12.725 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:12.725 "name": "pt1", 00:21:12.725 "aliases": [ 00:21:12.725 "1c648eb3-ee7e-54c3-8110-6c6c0d805874" 00:21:12.725 ], 00:21:12.725 "product_name": "passthru", 00:21:12.725 "block_size": 512, 00:21:12.725 "num_blocks": 65536, 00:21:12.725 "uuid": "1c648eb3-ee7e-54c3-8110-6c6c0d805874", 00:21:12.725 "assigned_rate_limits": { 00:21:12.725 "rw_ios_per_sec": 0, 00:21:12.725 "rw_mbytes_per_sec": 0, 00:21:12.725 "r_mbytes_per_sec": 0, 00:21:12.725 "w_mbytes_per_sec": 0 00:21:12.725 }, 00:21:12.725 "claimed": true, 00:21:12.725 "claim_type": "exclusive_write", 00:21:12.725 "zoned": false, 00:21:12.725 "supported_io_types": { 00:21:12.725 "read": true, 00:21:12.725 "write": true, 
00:21:12.725 "unmap": true, 00:21:12.725 "write_zeroes": true, 00:21:12.725 "flush": true, 00:21:12.725 "reset": true, 00:21:12.725 "compare": false, 00:21:12.725 "compare_and_write": false, 00:21:12.725 "abort": true, 00:21:12.725 "nvme_admin": false, 00:21:12.725 "nvme_io": false 00:21:12.725 }, 00:21:12.725 "memory_domains": [ 00:21:12.725 { 00:21:12.725 "dma_device_id": "system", 00:21:12.725 "dma_device_type": 1 00:21:12.725 }, 00:21:12.725 { 00:21:12.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.725 "dma_device_type": 2 00:21:12.725 } 00:21:12.725 ], 00:21:12.725 "driver_specific": { 00:21:12.725 "passthru": { 00:21:12.725 "name": "pt1", 00:21:12.725 "base_bdev_name": "malloc1" 00:21:12.725 } 00:21:12.725 } 00:21:12.725 }' 00:21:12.725 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.725 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.725 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:12.725 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.984 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.985 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:12.985 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.985 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.985 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:12.985 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.985 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.985 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:12.985 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:12.985 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:12.985 07:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:13.244 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:13.244 "name": "pt2", 00:21:13.244 "aliases": [ 00:21:13.244 "0dfc9b1f-809b-57ee-a852-2ec20b0ed9dd" 00:21:13.244 ], 00:21:13.244 "product_name": "passthru", 00:21:13.244 "block_size": 512, 00:21:13.244 "num_blocks": 65536, 00:21:13.244 "uuid": "0dfc9b1f-809b-57ee-a852-2ec20b0ed9dd", 00:21:13.244 "assigned_rate_limits": { 00:21:13.244 "rw_ios_per_sec": 0, 00:21:13.244 "rw_mbytes_per_sec": 0, 00:21:13.244 "r_mbytes_per_sec": 0, 00:21:13.244 "w_mbytes_per_sec": 0 00:21:13.244 }, 00:21:13.244 "claimed": true, 00:21:13.244 "claim_type": "exclusive_write", 00:21:13.244 "zoned": false, 00:21:13.244 "supported_io_types": { 00:21:13.244 "read": true, 00:21:13.244 "write": true, 00:21:13.244 "unmap": true, 00:21:13.244 "write_zeroes": true, 00:21:13.244 "flush": true, 00:21:13.244 "reset": true, 00:21:13.244 "compare": false, 00:21:13.244 "compare_and_write": false, 00:21:13.244 "abort": true, 00:21:13.244 "nvme_admin": false, 00:21:13.244 "nvme_io": false 00:21:13.244 }, 00:21:13.244 "memory_domains": [ 00:21:13.244 { 00:21:13.244 "dma_device_id": "system", 00:21:13.244 "dma_device_type": 1 00:21:13.244 }, 00:21:13.244 { 
00:21:13.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.244 "dma_device_type": 2 00:21:13.244 } 00:21:13.244 ], 00:21:13.244 "driver_specific": { 00:21:13.244 "passthru": { 00:21:13.244 "name": "pt2", 00:21:13.244 "base_bdev_name": "malloc2" 00:21:13.244 } 00:21:13.244 } 00:21:13.244 }' 00:21:13.244 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:13.244 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:13.244 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:13.244 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:13.503 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:13.503 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:13.503 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:13.503 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:13.503 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:13.503 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.503 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.761 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:13.762 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:13.762 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:13.762 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:13.762 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:13.762 "name": "pt3", 00:21:13.762 "aliases": [ 00:21:13.762 "137f8263-d459-53ef-8ecc-979611f25832" 00:21:13.762 ], 00:21:13.762 "product_name": "passthru", 00:21:13.762 "block_size": 512, 00:21:13.762 "num_blocks": 65536, 00:21:13.762 "uuid": "137f8263-d459-53ef-8ecc-979611f25832", 00:21:13.762 "assigned_rate_limits": { 00:21:13.762 "rw_ios_per_sec": 0, 00:21:13.762 "rw_mbytes_per_sec": 0, 00:21:13.762 "r_mbytes_per_sec": 0, 00:21:13.762 "w_mbytes_per_sec": 0 00:21:13.762 }, 00:21:13.762 "claimed": true, 00:21:13.762 "claim_type": "exclusive_write", 00:21:13.762 "zoned": false, 00:21:13.762 "supported_io_types": { 00:21:13.762 "read": true, 00:21:13.762 "write": true, 00:21:13.762 "unmap": true, 00:21:13.762 "write_zeroes": true, 00:21:13.762 "flush": true, 00:21:13.762 "reset": true, 00:21:13.762 "compare": false, 00:21:13.762 "compare_and_write": false, 00:21:13.762 "abort": true, 00:21:13.762 "nvme_admin": false, 00:21:13.762 "nvme_io": false 00:21:13.762 }, 00:21:13.762 "memory_domains": [ 00:21:13.762 { 00:21:13.762 "dma_device_id": "system", 00:21:13.762 "dma_device_type": 1 00:21:13.762 }, 00:21:13.762 { 00:21:13.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.762 "dma_device_type": 2 00:21:13.762 } 00:21:13.762 ], 00:21:13.762 "driver_specific": { 00:21:13.762 "passthru": { 00:21:13.762 "name": "pt3", 00:21:13.762 "base_bdev_name": "malloc3" 00:21:13.762 } 00:21:13.762 } 00:21:13.762 }' 00:21:13.762 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:13.762 07:31:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:14.019 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:14.019 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:14.019 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:14.019 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:14.019 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:14.019 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:14.019 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:14.019 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:14.277 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:14.277 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:14.277 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:14.277 07:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:21:14.535 [2024-07-12 07:31:48.246367] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:14.535 07:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' e79f3089-d96a-47c8-a3de-44214b942b01 '!=' e79f3089-d96a-47c8-a3de-44214b942b01 ']' 00:21:14.535 07:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:21:14.535 07:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:14.535 07:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:14.535 07:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:14.794 [2024-07-12 07:31:48.450220] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:14.794 07:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:14.794 07:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:14.794 07:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:14.794 07:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:14.794 07:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:14.794 07:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:14.794 07:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:14.794 07:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:14.794 07:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:14.794 07:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:14.794 07:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.794 07:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.052 07:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:15.052 "name": "raid_bdev1", 00:21:15.052 "uuid": "e79f3089-d96a-47c8-a3de-44214b942b01", 00:21:15.052 "strip_size_kb": 0, 00:21:15.052 "state": "online", 00:21:15.052 "raid_level": "raid1", 00:21:15.052 "superblock": true, 00:21:15.052 "num_base_bdevs": 3, 00:21:15.052 "num_base_bdevs_discovered": 2, 00:21:15.052 "num_base_bdevs_operational": 2, 00:21:15.052 "base_bdevs_list": [ 00:21:15.052 { 00:21:15.052 "name": null, 00:21:15.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.052 "is_configured": false, 00:21:15.052 "data_offset": 2048, 00:21:15.052 "data_size": 63488 00:21:15.052 }, 00:21:15.052 { 00:21:15.052 "name": "pt2", 00:21:15.052 "uuid": "0dfc9b1f-809b-57ee-a852-2ec20b0ed9dd", 00:21:15.052 "is_configured": true, 00:21:15.052 "data_offset": 2048, 00:21:15.052 "data_size": 63488 00:21:15.052 }, 00:21:15.052 { 00:21:15.052 "name": "pt3", 00:21:15.052 "uuid": "137f8263-d459-53ef-8ecc-979611f25832", 00:21:15.052 "is_configured": true, 00:21:15.052 "data_offset": 2048, 00:21:15.052 "data_size": 63488 00:21:15.052 } 00:21:15.052 ] 00:21:15.052 }' 00:21:15.052 07:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:15.052 07:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.620 07:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:15.620 [2024-07-12 07:31:49.490392] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:15.620 [2024-07-12 07:31:49.490632] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:15.620 [2024-07-12 07:31:49.490839] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:15.620 [2024-07-12 07:31:49.490950] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:15.620 [2024-07-12 07:31:49.491235] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:21:15.878 07:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.878 07:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:21:15.878 07:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:21:15.878 07:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:21:15.878 07:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:21:15.878 07:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:15.878 07:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:16.137 07:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:21:16.137 07:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:16.137 07:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:16.395 07:31:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:21:16.395 07:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:16.395 07:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:21:16.395 07:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:21:16.395 07:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:16.654 [2024-07-12 07:31:50.382545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:16.654 [2024-07-12 07:31:50.382939] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.654 [2024-07-12 07:31:50.383017] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:16.654 [2024-07-12 07:31:50.383118] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.654 [2024-07-12 07:31:50.385952] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.654 [2024-07-12 07:31:50.386126] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:16.654 [2024-07-12 07:31:50.386338] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:16.654 [2024-07-12 07:31:50.386476] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:16.654 pt2 00:21:16.654 07:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:16.654 07:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:16.654 07:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:16.654 07:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:16.654 07:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:16.654 07:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:16.654 07:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:16.654 07:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:16.654 07:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:16.654 07:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:16.654 07:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.654 07:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.912 07:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:16.912 "name": "raid_bdev1", 00:21:16.912 "uuid": "e79f3089-d96a-47c8-a3de-44214b942b01", 00:21:16.912 "strip_size_kb": 0, 00:21:16.912 "state": "configuring", 00:21:16.912 "raid_level": "raid1", 00:21:16.912 "superblock": true, 00:21:16.912 "num_base_bdevs": 3, 00:21:16.912 "num_base_bdevs_discovered": 1, 00:21:16.912 "num_base_bdevs_operational": 2, 00:21:16.912 "base_bdevs_list": [ 00:21:16.912 { 00:21:16.912 "name": 
null, 00:21:16.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.912 "is_configured": false, 00:21:16.912 "data_offset": 2048, 00:21:16.912 "data_size": 63488 00:21:16.912 }, 00:21:16.912 { 00:21:16.912 "name": "pt2", 00:21:16.912 "uuid": "0dfc9b1f-809b-57ee-a852-2ec20b0ed9dd", 00:21:16.912 "is_configured": true, 00:21:16.912 "data_offset": 2048, 00:21:16.912 "data_size": 63488 00:21:16.912 }, 00:21:16.912 { 00:21:16.912 "name": null, 00:21:16.912 "uuid": "137f8263-d459-53ef-8ecc-979611f25832", 00:21:16.912 "is_configured": false, 00:21:16.912 "data_offset": 2048, 00:21:16.912 "data_size": 63488 00:21:16.912 } 00:21:16.912 ] 00:21:16.912 }' 00:21:16.912 07:31:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:16.912 07:31:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.477 07:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:21:17.477 07:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:21:17.477 07:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:21:17.477 07:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:17.735 [2024-07-12 07:31:51.390918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:17.735 [2024-07-12 07:31:51.391282] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.735 [2024-07-12 07:31:51.391367] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:17.735 [2024-07-12 07:31:51.391479] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.735 [2024-07-12 07:31:51.392025] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.735 [2024-07-12 07:31:51.392173] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:17.735 [2024-07-12 07:31:51.392386] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:17.735 [2024-07-12 07:31:51.392516] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:17.735 [2024-07-12 07:31:51.392680] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:21:17.735 [2024-07-12 07:31:51.392769] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:17.735 [2024-07-12 07:31:51.392888] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:21:17.735 [2024-07-12 07:31:51.393413] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:21:17.735 [2024-07-12 07:31:51.393526] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:21:17.735 [2024-07-12 07:31:51.393727] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:17.735 pt3 00:21:17.735 07:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:17.735 07:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:17.735 07:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:17.735 07:31:51 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:17.735 07:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:17.735 07:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:17.735 07:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:17.735 07:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:17.735 07:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:17.735 07:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:17.735 07:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.735 07:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.735 07:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:17.735 "name": "raid_bdev1", 00:21:17.735 "uuid": "e79f3089-d96a-47c8-a3de-44214b942b01", 00:21:17.735 "strip_size_kb": 0, 00:21:17.735 "state": "online", 00:21:17.735 "raid_level": "raid1", 00:21:17.735 "superblock": true, 00:21:17.735 "num_base_bdevs": 3, 00:21:17.735 "num_base_bdevs_discovered": 2, 00:21:17.735 "num_base_bdevs_operational": 2, 00:21:17.735 "base_bdevs_list": [ 00:21:17.735 { 00:21:17.735 "name": null, 00:21:17.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.735 "is_configured": false, 00:21:17.735 "data_offset": 2048, 00:21:17.735 "data_size": 63488 00:21:17.735 }, 00:21:17.735 { 00:21:17.735 "name": "pt2", 00:21:17.735 "uuid": "0dfc9b1f-809b-57ee-a852-2ec20b0ed9dd", 00:21:17.735 "is_configured": true, 00:21:17.735 "data_offset": 2048, 00:21:17.735 "data_size": 63488 00:21:17.735 }, 00:21:17.735 { 00:21:17.735 "name": "pt3", 00:21:17.735 "uuid": "137f8263-d459-53ef-8ecc-979611f25832", 00:21:17.735 "is_configured": true, 00:21:17.735 "data_offset": 2048, 00:21:17.735 "data_size": 63488 00:21:17.735 } 00:21:17.735 ] 00:21:17.735 }' 00:21:17.735 07:31:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:17.735 07:31:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.301 07:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:18.559 [2024-07-12 07:31:52.435049] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:18.559 [2024-07-12 07:31:52.435327] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:18.559 [2024-07-12 07:31:52.435499] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:18.559 [2024-07-12 07:31:52.435642] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:18.559 [2024-07-12 07:31:52.435734] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:21:18.816 07:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.816 07:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:21:19.074 07:31:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:21:19.074 07:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:21:19.074 07:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:21:19.074 07:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:21:19.075 07:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:19.075 07:31:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:19.351 [2024-07-12 07:31:53.159229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:19.351 [2024-07-12 07:31:53.159529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:19.351 [2024-07-12 07:31:53.159673] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:19.351 [2024-07-12 07:31:53.159776] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:19.351 [2024-07-12 07:31:53.162894] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:19.351 [2024-07-12 07:31:53.163068] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:19.351 [2024-07-12 07:31:53.163355] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:19.351 [2024-07-12 07:31:53.163495] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:19.351 [2024-07-12 07:31:53.163821] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:19.351 [2024-07-12 07:31:53.163943] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:19.351 [2024-07-12 07:31:53.164012] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:21:19.351 [2024-07-12 07:31:53.164206] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:19.351 pt1 00:21:19.351 07:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:21:19.351 07:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:19.351 07:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:19.351 07:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:19.351 07:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:19.351 07:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:19.351 07:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:19.351 07:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:19.351 07:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:19.351 07:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:19.351 07:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:19.351 
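The sequence above tears the array down, removes pt3, and re-registers pt1. Because the array was created with on-disk superblocks, the examine path reassembles raid_bdev1 from whichever members reappear, and a member whose superblock carries an older sequence number (pt1 here, seq 2 vs pt2's seq 4) is superseded and left unconfigured. A sketch of that RPC sequence, names and UUIDs as in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock bdev_raid_delete raid_bdev1
    $rpc -s $sock bdev_passthru_delete pt3
    # re-registering pt1 triggers examine: its raid superblock is found, but pt2's
    # newer superblock wins and raid_bdev1 comes back in the "configuring" state
    $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001
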
07:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.351 07:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.610 07:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:19.610 "name": "raid_bdev1", 00:21:19.610 "uuid": "e79f3089-d96a-47c8-a3de-44214b942b01", 00:21:19.610 "strip_size_kb": 0, 00:21:19.610 "state": "configuring", 00:21:19.610 "raid_level": "raid1", 00:21:19.610 "superblock": true, 00:21:19.610 "num_base_bdevs": 3, 00:21:19.610 "num_base_bdevs_discovered": 1, 00:21:19.610 "num_base_bdevs_operational": 2, 00:21:19.610 "base_bdevs_list": [ 00:21:19.610 { 00:21:19.610 "name": null, 00:21:19.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.610 "is_configured": false, 00:21:19.610 "data_offset": 2048, 00:21:19.610 "data_size": 63488 00:21:19.610 }, 00:21:19.610 { 00:21:19.610 "name": "pt2", 00:21:19.610 "uuid": "0dfc9b1f-809b-57ee-a852-2ec20b0ed9dd", 00:21:19.610 "is_configured": true, 00:21:19.610 "data_offset": 2048, 00:21:19.610 "data_size": 63488 00:21:19.610 }, 00:21:19.610 { 00:21:19.610 "name": null, 00:21:19.610 "uuid": "137f8263-d459-53ef-8ecc-979611f25832", 00:21:19.610 "is_configured": false, 00:21:19.610 "data_offset": 2048, 00:21:19.610 "data_size": 63488 00:21:19.610 } 00:21:19.610 ] 00:21:19.610 }' 00:21:19.610 07:31:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:19.610 07:31:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.176 07:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:21:20.176 07:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:20.434 07:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:21:20.434 07:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:20.692 [2024-07-12 07:31:54.555735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:20.693 [2024-07-12 07:31:54.555948] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:20.693 [2024-07-12 07:31:54.556023] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:20.693 [2024-07-12 07:31:54.557909] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:20.693 [2024-07-12 07:31:54.558556] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:20.693 [2024-07-12 07:31:54.558725] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:20.693 [2024-07-12 07:31:54.558930] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:20.693 [2024-07-12 07:31:54.559047] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:20.693 [2024-07-12 07:31:54.559231] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:21:20.693 [2024-07-12 07:31:54.559326] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
00:21:20.693 [2024-07-12 07:31:54.559445] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:21:20.693 [2024-07-12 07:31:54.559868] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:21:20.693 [2024-07-12 07:31:54.559980] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:21:20.693 [2024-07-12 07:31:54.560210] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:20.693 pt3 00:21:20.960 07:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:20.960 07:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:20.960 07:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:20.960 07:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:20.960 07:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:20.960 07:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:20.960 07:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:20.960 07:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:20.960 07:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:20.960 07:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:20.960 07:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.961 07:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.961 07:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:20.961 "name": "raid_bdev1", 00:21:20.961 "uuid": "e79f3089-d96a-47c8-a3de-44214b942b01", 00:21:20.961 "strip_size_kb": 0, 00:21:20.961 "state": "online", 00:21:20.961 "raid_level": "raid1", 00:21:20.961 "superblock": true, 00:21:20.961 "num_base_bdevs": 3, 00:21:20.961 "num_base_bdevs_discovered": 2, 00:21:20.961 "num_base_bdevs_operational": 2, 00:21:20.961 "base_bdevs_list": [ 00:21:20.961 { 00:21:20.961 "name": null, 00:21:20.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.961 "is_configured": false, 00:21:20.961 "data_offset": 2048, 00:21:20.961 "data_size": 63488 00:21:20.961 }, 00:21:20.961 { 00:21:20.961 "name": "pt2", 00:21:20.961 "uuid": "0dfc9b1f-809b-57ee-a852-2ec20b0ed9dd", 00:21:20.961 "is_configured": true, 00:21:20.961 "data_offset": 2048, 00:21:20.961 "data_size": 63488 00:21:20.961 }, 00:21:20.961 { 00:21:20.961 "name": "pt3", 00:21:20.961 "uuid": "137f8263-d459-53ef-8ecc-979611f25832", 00:21:20.961 "is_configured": true, 00:21:20.961 "data_offset": 2048, 00:21:20.961 "data_size": 63488 00:21:20.961 } 00:21:20.961 ] 00:21:20.961 }' 00:21:20.961 07:31:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:20.961 07:31:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.538 07:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:21:21.539 07:31:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:21.797 07:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:21:21.797 07:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:21.797 07:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:21:22.056 [2024-07-12 07:31:55.864669] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:22.056 07:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' e79f3089-d96a-47c8-a3de-44214b942b01 '!=' e79f3089-d96a-47c8-a3de-44214b942b01 ']' 00:21:22.056 07:31:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 143054 00:21:22.056 07:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 143054 ']' 00:21:22.056 07:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 143054 00:21:22.056 07:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:21:22.056 07:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:22.056 07:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 143054 00:21:22.056 killing process with pid 143054 00:21:22.056 07:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:22.056 07:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:22.056 07:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 143054' 00:21:22.056 07:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 143054 00:21:22.056 07:31:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 143054 00:21:22.056 [2024-07-12 07:31:55.913439] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:22.056 [2024-07-12 07:31:55.913532] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:22.056 [2024-07-12 07:31:55.913605] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:22.056 [2024-07-12 07:31:55.913615] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:21:22.316 [2024-07-12 07:31:55.975720] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:22.574 07:31:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:21:22.574 00:21:22.574 real 0m22.174s 00:21:22.574 user 0m40.534s 00:21:22.574 sys 0m3.813s 00:21:22.574 07:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:22.574 ************************************ 00:21:22.574 END TEST raid_superblock_test 00:21:22.574 ************************************ 00:21:22.574 07:31:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.574 07:31:56 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:21:22.574 07:31:56 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:21:22.574 07:31:56 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:22.574 07:31:56 bdev_raid -- 
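Two final assertions close the superblock test: slot 0 (the stale pt1) must still be unconfigured after reassembly, and the re-assembled array must keep the UUID it was created with. A sketch of both checks, using the filters and values from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    [[ $($rpc -s $sock bdev_raid_get_bdevs online |
         jq -r '.[].base_bdevs_list[0].is_configured') == false ]]
    uuid=$($rpc -s $sock bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
    [ "$uuid" = e79f3089-d96a-47c8-a3de-44214b942b01 ]   # identity preserved across reassembly
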
common/autotest_common.sh@10 -- # set +x 00:21:22.834 ************************************ 00:21:22.834 START TEST raid_read_error_test 00:21:22.834 ************************************ 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 3 read 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.hJVbAz2TZB 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=143790 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 143790 /var/tmp/spdk-raid.sock 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 143790 ']' 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:22.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:22.834 07:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.834 [2024-07-12 07:31:56.550515] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:21:22.834 [2024-07-12 07:31:56.551047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143790 ] 00:21:22.834 [2024-07-12 07:31:56.713916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.093 [2024-07-12 07:31:56.811763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.093 [2024-07-12 07:31:56.898766] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:23.661 07:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:23.661 07:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:21:23.661 07:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:23.661 07:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:23.920 BaseBdev1_malloc 00:21:23.920 07:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:24.180 true 00:21:24.180 07:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:24.438 [2024-07-12 07:31:58.182068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:24.438 [2024-07-12 07:31:58.182404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.438 [2024-07-12 07:31:58.182490] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:21:24.438 [2024-07-12 07:31:58.182636] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.438 [2024-07-12 07:31:58.185875] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.438 [2024-07-12 07:31:58.186079] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:24.438 BaseBdev1 00:21:24.438 07:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:24.438 07:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:24.697 BaseBdev2_malloc 00:21:24.697 07:31:58 bdev_raid.raid_read_error_test -- 
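Unlike the superblock test, the error tests run against bdevperf rather than bdev_svc: the harness makes a log file under /raidtest, starts bdevperf idle (-z) with a 60 s randrw workload definition, and waits for its RPC socket. A sketch of the launch using the flags from this run; backgrounding and the redirection into the mktemp log are assumptions, inferred from the later waitforlisten and the grep of that file:

    bdevperf_log=$(mktemp -p /raidtest)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid \
        > "$bdevperf_log" 2>&1 &
    raid_pid=$!
    # waitforlisten polls until /var/tmp/spdk-raid.sock accepts RPCs
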
bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:24.956 true 00:21:24.956 07:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:25.215 [2024-07-12 07:31:58.846436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:25.215 [2024-07-12 07:31:58.846771] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.215 [2024-07-12 07:31:58.846856] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:21:25.215 [2024-07-12 07:31:58.846986] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.215 [2024-07-12 07:31:58.849864] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.215 [2024-07-12 07:31:58.850028] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:25.215 BaseBdev2 00:21:25.215 07:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:25.215 07:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:25.474 BaseBdev3_malloc 00:21:25.474 07:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:25.474 true 00:21:25.474 07:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:25.732 [2024-07-12 07:31:59.581821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:25.732 [2024-07-12 07:31:59.582172] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.732 [2024-07-12 07:31:59.582260] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:21:25.732 [2024-07-12 07:31:59.582393] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.732 [2024-07-12 07:31:59.585237] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.732 [2024-07-12 07:31:59.585444] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:25.732 BaseBdev3 00:21:25.732 07:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:21:25.991 [2024-07-12 07:31:59.778061] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:25.991 [2024-07-12 07:31:59.780804] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:25.991 [2024-07-12 07:31:59.781025] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:25.991 [2024-07-12 07:31:59.781353] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:21:25.991 [2024-07-12 07:31:59.781398] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:25.991 [2024-07-12 07:31:59.781630] 
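Each member of the array is a three-layer stack, so errors can be injected beneath the raid without touching it directly: a malloc bdev, an error bdev on top of it (exposed with an EE_ prefix), and a passthru bdev that the raid actually claims. A sketch for one member plus the final raid1 assembly, commands as in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev3_malloc
    $rpc -s $sock bdev_error_create BaseBdev3_malloc             # exposes EE_BaseBdev3_malloc
    $rpc -s $sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
    # with all three members in place, build the mirrored array with superblocks (-s)
    $rpc -s $sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
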
bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:21:25.991 [2024-07-12 07:31:59.782249] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:21:25.991 [2024-07-12 07:31:59.782365] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:21:25.991 [2024-07-12 07:31:59.782667] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.991 07:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:25.991 07:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:25.991 07:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:25.991 07:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:25.992 07:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:25.992 07:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:25.992 07:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:25.992 07:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:25.992 07:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:25.992 07:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:25.992 07:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.992 07:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.250 07:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:26.250 "name": "raid_bdev1", 00:21:26.250 "uuid": "dff1014e-4ad0-4cae-b9dc-a91fd078ef16", 00:21:26.250 "strip_size_kb": 0, 00:21:26.250 "state": "online", 00:21:26.250 "raid_level": "raid1", 00:21:26.250 "superblock": true, 00:21:26.250 "num_base_bdevs": 3, 00:21:26.250 "num_base_bdevs_discovered": 3, 00:21:26.250 "num_base_bdevs_operational": 3, 00:21:26.250 "base_bdevs_list": [ 00:21:26.250 { 00:21:26.250 "name": "BaseBdev1", 00:21:26.250 "uuid": "de4502e4-0d9b-5109-82d1-61d603f75ab0", 00:21:26.250 "is_configured": true, 00:21:26.250 "data_offset": 2048, 00:21:26.250 "data_size": 63488 00:21:26.250 }, 00:21:26.250 { 00:21:26.250 "name": "BaseBdev2", 00:21:26.250 "uuid": "a89dd128-6a8e-5d42-a8b8-528098bf04da", 00:21:26.250 "is_configured": true, 00:21:26.250 "data_offset": 2048, 00:21:26.250 "data_size": 63488 00:21:26.250 }, 00:21:26.250 { 00:21:26.250 "name": "BaseBdev3", 00:21:26.250 "uuid": "5d2d8911-efbc-5d66-b129-7108fe6ddc00", 00:21:26.250 "is_configured": true, 00:21:26.250 "data_offset": 2048, 00:21:26.250 "data_size": 63488 00:21:26.250 } 00:21:26.250 ] 00:21:26.250 }' 00:21:26.250 07:31:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:26.250 07:31:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.816 07:32:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:21:26.816 07:32:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock 
perform_tests 00:21:26.816 [2024-07-12 07:32:00.591325] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:21:27.752 07:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:28.011 07:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:28.011 07:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:28.011 07:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:21:28.011 07:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:21:28.011 07:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:28.011 07:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:28.011 07:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:28.011 07:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:28.011 07:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:28.011 07:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:28.011 07:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:28.011 07:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:28.012 07:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:28.012 07:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:28.012 07:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.012 07:32:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.269 07:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:28.269 "name": "raid_bdev1", 00:21:28.269 "uuid": "dff1014e-4ad0-4cae-b9dc-a91fd078ef16", 00:21:28.269 "strip_size_kb": 0, 00:21:28.269 "state": "online", 00:21:28.269 "raid_level": "raid1", 00:21:28.269 "superblock": true, 00:21:28.269 "num_base_bdevs": 3, 00:21:28.269 "num_base_bdevs_discovered": 3, 00:21:28.269 "num_base_bdevs_operational": 3, 00:21:28.269 "base_bdevs_list": [ 00:21:28.269 { 00:21:28.269 "name": "BaseBdev1", 00:21:28.269 "uuid": "de4502e4-0d9b-5109-82d1-61d603f75ab0", 00:21:28.269 "is_configured": true, 00:21:28.269 "data_offset": 2048, 00:21:28.269 "data_size": 63488 00:21:28.269 }, 00:21:28.269 { 00:21:28.269 "name": "BaseBdev2", 00:21:28.269 "uuid": "a89dd128-6a8e-5d42-a8b8-528098bf04da", 00:21:28.269 "is_configured": true, 00:21:28.269 "data_offset": 2048, 00:21:28.269 "data_size": 63488 00:21:28.269 }, 00:21:28.269 { 00:21:28.269 "name": "BaseBdev3", 00:21:28.269 "uuid": "5d2d8911-efbc-5d66-b129-7108fe6ddc00", 00:21:28.269 "is_configured": true, 00:21:28.269 "data_offset": 2048, 00:21:28.269 "data_size": 63488 00:21:28.269 } 00:21:28.269 ] 00:21:28.269 }' 00:21:28.269 07:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:28.269 07:32:02 bdev_raid.raid_read_error_test -- 
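With the workload started through bdevperf's helper script, the test injects read failures on member 0 and re-checks the array. For raid1 a failed read can be served from a surviving mirror, so no member is removed and the expected member count stays at 3, which the JSON below confirms. A sketch of the injection step; the backgrounded perform_tests is an assumption, inferred from the interleaved sleep in the trace:

    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/spdk-raid.sock perform_tests &
    sleep 1    # give the workload a moment before injecting
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_error_inject_error EE_BaseBdev1_malloc read failure
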
common/autotest_common.sh@10 -- # set +x 00:21:28.835 07:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:29.094 [2024-07-12 07:32:02.942842] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:29.094 [2024-07-12 07:32:02.943135] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:29.094 [2024-07-12 07:32:02.945862] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:29.094 [2024-07-12 07:32:02.946043] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.094 [2024-07-12 07:32:02.946186] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:29.094 [2024-07-12 07:32:02.946385] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:21:29.094 0 00:21:29.094 07:32:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 143790 00:21:29.094 07:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 143790 ']' 00:21:29.094 07:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 143790 00:21:29.094 07:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:21:29.094 07:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:29.094 07:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 143790 00:21:29.352 07:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:29.352 07:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:29.352 07:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 143790' 00:21:29.352 killing process with pid 143790 00:21:29.352 07:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 143790 00:21:29.352 [2024-07-12 07:32:02.992578] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:29.352 07:32:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 143790 00:21:29.352 [2024-07-12 07:32:03.041826] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:29.611 07:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.hJVbAz2TZB 00:21:29.611 07:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:29.611 07:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:29.611 07:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:21:29.611 07:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:21:29.611 07:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:29.611 07:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:29.611 07:32:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:29.611 00:21:29.611 real 0m7.008s 00:21:29.611 user 0m10.837s 00:21:29.611 sys 0m1.195s 00:21:29.611 07:32:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:29.611 07:32:03 
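The pass criterion is read straight off the bdevperf log: strip the job header, pick the raid_bdev1 row, and take column 6, which the harness treats as the failed-I/O-per-second figure; for a redundant level like raid1 it must be 0.00. A sketch using the temp log path from this run:

    fail_per_s=$(grep -v Job /raidtest/tmp.hJVbAz2TZB | grep raid_bdev1 | awk '{print $6}')
    [[ $fail_per_s = 0.00 ]]    # raid1 has redundancy, so no I/O may fail
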
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.611 ************************************ 00:21:29.611 END TEST raid_read_error_test 00:21:29.611 ************************************ 00:21:29.915 07:32:03 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:21:29.915 07:32:03 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:21:29.915 07:32:03 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:29.915 07:32:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:29.915 ************************************ 00:21:29.915 START TEST raid_write_error_test 00:21:29.915 ************************************ 00:21:29.915 07:32:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 3 write 00:21:29.915 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:21:29.915 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:21:29.915 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:21:29.915 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:29.915 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:29.915 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:21:29.915 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:29.915 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:29.915 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:21:29.915 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:29.915 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.M8uWlYXOve 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@808 -- # raid_pid=143985 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 143985 /var/tmp/spdk-raid.sock 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 143985 ']' 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:29.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:29.916 07:32:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.916 [2024-07-12 07:32:03.627223] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:21:29.916 [2024-07-12 07:32:03.627810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143985 ] 00:21:29.916 [2024-07-12 07:32:03.782456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.175 [2024-07-12 07:32:03.872355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.175 [2024-07-12 07:32:03.952680] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:30.743 07:32:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:30.743 07:32:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:21:30.743 07:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:30.743 07:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:31.002 BaseBdev1_malloc 00:21:31.002 07:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:31.261 true 00:21:31.261 07:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:31.520 [2024-07-12 07:32:05.238625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:31.520 [2024-07-12 07:32:05.238969] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:31.520 [2024-07-12 07:32:05.239066] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:21:31.520 [2024-07-12 07:32:05.239197] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:31.520 [2024-07-12 07:32:05.242311] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:21:31.520 [2024-07-12 07:32:05.242483] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:31.520 BaseBdev1 00:21:31.520 07:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:31.520 07:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:31.779 BaseBdev2_malloc 00:21:31.779 07:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:32.037 true 00:21:32.037 07:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:32.296 [2024-07-12 07:32:05.926944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:32.296 [2024-07-12 07:32:05.927309] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.296 [2024-07-12 07:32:05.927397] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:21:32.296 [2024-07-12 07:32:05.927527] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.296 [2024-07-12 07:32:05.930386] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.296 [2024-07-12 07:32:05.930557] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:32.296 BaseBdev2 00:21:32.296 07:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:32.296 07:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:32.555 BaseBdev3_malloc 00:21:32.555 07:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:32.555 true 00:21:32.815 07:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:32.815 [2024-07-12 07:32:06.621392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:32.815 [2024-07-12 07:32:06.621714] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.815 [2024-07-12 07:32:06.621801] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:21:32.815 [2024-07-12 07:32:06.621918] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.815 [2024-07-12 07:32:06.624794] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.815 [2024-07-12 07:32:06.624970] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:32.815 BaseBdev3 00:21:32.815 07:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:21:33.074 [2024-07-12 07:32:06.857524] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:33.074 [2024-07-12 07:32:06.860270] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:33.074 [2024-07-12 07:32:06.860492] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:33.074 [2024-07-12 07:32:06.860780] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:21:33.074 [2024-07-12 07:32:06.860823] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:33.074 [2024-07-12 07:32:06.861084] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:21:33.074 [2024-07-12 07:32:06.861676] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:21:33.074 [2024-07-12 07:32:06.861795] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:21:33.074 [2024-07-12 07:32:06.862117] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:33.074 07:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:33.074 07:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:33.074 07:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:33.074 07:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:33.074 07:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:33.074 07:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:33.074 07:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:33.074 07:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:33.074 07:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:33.074 07:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:33.074 07:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.074 07:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.333 07:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:33.333 "name": "raid_bdev1", 00:21:33.333 "uuid": "89dc0d3b-2e7f-4441-8074-fff8f955d5b4", 00:21:33.333 "strip_size_kb": 0, 00:21:33.333 "state": "online", 00:21:33.333 "raid_level": "raid1", 00:21:33.333 "superblock": true, 00:21:33.333 "num_base_bdevs": 3, 00:21:33.333 "num_base_bdevs_discovered": 3, 00:21:33.333 "num_base_bdevs_operational": 3, 00:21:33.333 "base_bdevs_list": [ 00:21:33.333 { 00:21:33.333 "name": "BaseBdev1", 00:21:33.333 "uuid": "399e5028-2c01-5298-9b7e-4f29a8912df6", 00:21:33.333 "is_configured": true, 00:21:33.333 "data_offset": 2048, 00:21:33.333 "data_size": 63488 00:21:33.333 }, 00:21:33.333 { 00:21:33.333 "name": "BaseBdev2", 00:21:33.333 "uuid": "e54dc3e2-35c9-57cc-8787-c122bd4b9675", 00:21:33.333 "is_configured": true, 00:21:33.333 "data_offset": 2048, 00:21:33.333 "data_size": 63488 00:21:33.333 }, 00:21:33.333 { 00:21:33.333 "name": "BaseBdev3", 00:21:33.333 "uuid": 
"a39afce1-f327-5d3d-97d9-4c55e3a8f00f", 00:21:33.333 "is_configured": true, 00:21:33.333 "data_offset": 2048, 00:21:33.333 "data_size": 63488 00:21:33.333 } 00:21:33.333 ] 00:21:33.333 }' 00:21:33.333 07:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:33.333 07:32:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.900 07:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:21:33.900 07:32:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:33.900 [2024-07-12 07:32:07.634725] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:21:34.837 07:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:35.095 [2024-07-12 07:32:08.751304] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:21:35.095 [2024-07-12 07:32:08.751661] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:35.095 [2024-07-12 07:32:08.752021] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002460 00:21:35.095 07:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:35.095 07:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:35.096 07:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:21:35.096 07:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:21:35.096 07:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:35.096 07:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:35.096 07:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:35.096 07:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:35.096 07:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:35.096 07:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:35.096 07:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:35.096 07:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:35.096 07:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:35.096 07:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:35.096 07:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.096 07:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.355 07:32:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:35.355 "name": "raid_bdev1", 00:21:35.355 "uuid": "89dc0d3b-2e7f-4441-8074-fff8f955d5b4", 00:21:35.355 "strip_size_kb": 0, 00:21:35.355 "state": "online", 00:21:35.355 
"raid_level": "raid1", 00:21:35.355 "superblock": true, 00:21:35.355 "num_base_bdevs": 3, 00:21:35.355 "num_base_bdevs_discovered": 2, 00:21:35.355 "num_base_bdevs_operational": 2, 00:21:35.355 "base_bdevs_list": [ 00:21:35.355 { 00:21:35.355 "name": null, 00:21:35.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.355 "is_configured": false, 00:21:35.355 "data_offset": 2048, 00:21:35.355 "data_size": 63488 00:21:35.355 }, 00:21:35.355 { 00:21:35.355 "name": "BaseBdev2", 00:21:35.355 "uuid": "e54dc3e2-35c9-57cc-8787-c122bd4b9675", 00:21:35.355 "is_configured": true, 00:21:35.355 "data_offset": 2048, 00:21:35.355 "data_size": 63488 00:21:35.355 }, 00:21:35.355 { 00:21:35.355 "name": "BaseBdev3", 00:21:35.355 "uuid": "a39afce1-f327-5d3d-97d9-4c55e3a8f00f", 00:21:35.355 "is_configured": true, 00:21:35.355 "data_offset": 2048, 00:21:35.355 "data_size": 63488 00:21:35.355 } 00:21:35.355 ] 00:21:35.355 }' 00:21:35.355 07:32:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:35.355 07:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.921 07:32:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:35.921 [2024-07-12 07:32:09.783336] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:35.921 [2024-07-12 07:32:09.783652] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:35.921 [2024-07-12 07:32:09.786291] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:35.921 [2024-07-12 07:32:09.786465] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.921 [2024-07-12 07:32:09.786585] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:35.922 [2024-07-12 07:32:09.786677] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:21:35.922 0 00:21:35.922 07:32:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 143985 00:21:35.922 07:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 143985 ']' 00:21:35.922 07:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 143985 00:21:36.180 07:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:21:36.180 07:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:36.180 07:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 143985 00:21:36.180 07:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:36.180 07:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:36.180 07:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 143985' 00:21:36.180 killing process with pid 143985 00:21:36.180 07:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 143985 00:21:36.180 [2024-07-12 07:32:09.839305] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:36.180 07:32:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 143985 00:21:36.180 [2024-07-12 07:32:09.887383] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:36.439 07:32:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.M8uWlYXOve 00:21:36.439 07:32:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:36.439 07:32:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:36.439 ************************************ 00:21:36.439 END TEST raid_write_error_test 00:21:36.439 ************************************ 00:21:36.439 07:32:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:21:36.439 07:32:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:21:36.439 07:32:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:36.439 07:32:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:36.439 07:32:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:36.439 00:21:36.439 real 0m6.776s 00:21:36.439 user 0m10.350s 00:21:36.439 sys 0m1.199s 00:21:36.439 07:32:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:36.439 07:32:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.698 07:32:10 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:21:36.698 07:32:10 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:21:36.698 07:32:10 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:21:36.698 07:32:10 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:21:36.698 07:32:10 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:36.698 07:32:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:36.698 ************************************ 00:21:36.698 START TEST raid_state_function_test 00:21:36.698 ************************************ 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 4 false 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:21:36.698 07:32:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=144170 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:36.698 Process raid pid: 144170 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 144170' 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 144170 /var/tmp/spdk-raid.sock 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 144170 ']' 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:36.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:36.698 07:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.698 [2024-07-12 07:32:10.441211] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
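The setup just traced — bdev_svc started against a private RPC socket, then polled until it listens — reduces to a few commands. A minimal sketch, assuming an SPDK checkout at the path shown in the log; the readiness poll via rpc_get_methods is an illustrative stand-in for the harness's waitforlisten helper:

SPDK_REPO=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk-raid.sock

# Start the bare bdev service with bdev_raid debug logging, as in the log.
"$SPDK_REPO/test/app/bdev_svc/bdev_svc" -r "$RPC_SOCK" -i 0 -L bdev_raid &
raid_pid=$!

# Poll until the UNIX-domain socket answers RPCs (a stand-in for waitforlisten).
until "$SPDK_REPO/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done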
00:21:36.698 [2024-07-12 07:32:10.441445] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.956 [2024-07-12 07:32:10.586405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.956 [2024-07-12 07:32:10.679264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.956 [2024-07-12 07:32:10.758695] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:37.891 07:32:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:37.892 07:32:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:21:37.892 07:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:37.892 [2024-07-12 07:32:11.663445] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:37.892 [2024-07-12 07:32:11.663547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:37.892 [2024-07-12 07:32:11.663560] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:37.892 [2024-07-12 07:32:11.663581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:37.892 [2024-07-12 07:32:11.663589] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:37.892 [2024-07-12 07:32:11.663638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:37.892 [2024-07-12 07:32:11.663646] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:37.892 [2024-07-12 07:32:11.663673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:37.892 07:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:37.892 07:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:37.892 07:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:37.892 07:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:37.892 07:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:37.892 07:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:37.892 07:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:37.892 07:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:37.892 07:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:37.892 07:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:37.892 07:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.892 07:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.458 07:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:38.458 "name": "Existed_Raid", 00:21:38.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.458 "strip_size_kb": 64, 00:21:38.458 "state": "configuring", 00:21:38.458 "raid_level": "raid0", 00:21:38.458 "superblock": false, 00:21:38.458 "num_base_bdevs": 4, 00:21:38.458 "num_base_bdevs_discovered": 0, 00:21:38.458 "num_base_bdevs_operational": 4, 00:21:38.458 "base_bdevs_list": [ 00:21:38.458 { 00:21:38.458 "name": "BaseBdev1", 00:21:38.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.458 "is_configured": false, 00:21:38.458 "data_offset": 0, 00:21:38.458 "data_size": 0 00:21:38.458 }, 00:21:38.458 { 00:21:38.458 "name": "BaseBdev2", 00:21:38.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.458 "is_configured": false, 00:21:38.458 "data_offset": 0, 00:21:38.458 "data_size": 0 00:21:38.458 }, 00:21:38.458 { 00:21:38.458 "name": "BaseBdev3", 00:21:38.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.458 "is_configured": false, 00:21:38.458 "data_offset": 0, 00:21:38.458 "data_size": 0 00:21:38.458 }, 00:21:38.458 { 00:21:38.458 "name": "BaseBdev4", 00:21:38.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.458 "is_configured": false, 00:21:38.458 "data_offset": 0, 00:21:38.458 "data_size": 0 00:21:38.458 } 00:21:38.458 ] 00:21:38.458 }' 00:21:38.458 07:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:38.458 07:32:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.726 07:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:39.016 [2024-07-12 07:32:12.727466] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:39.016 [2024-07-12 07:32:12.727531] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:21:39.016 07:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:39.274 [2024-07-12 07:32:12.931532] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:39.274 [2024-07-12 07:32:12.931628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:39.274 [2024-07-12 07:32:12.931640] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:39.274 [2024-07-12 07:32:12.931668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:39.274 [2024-07-12 07:32:12.931676] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:39.274 [2024-07-12 07:32:12.931695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:39.274 [2024-07-12 07:32:12.931702] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:39.274 [2024-07-12 07:32:12.931731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:39.274 07:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:39.274 [2024-07-12 07:32:13.143866] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:39.274 BaseBdev1 00:21:39.532 07:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:39.532 07:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:21:39.532 07:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:39.532 07:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:39.532 07:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:39.532 07:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:39.532 07:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:39.790 07:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:40.049 [ 00:21:40.049 { 00:21:40.049 "name": "BaseBdev1", 00:21:40.049 "aliases": [ 00:21:40.049 "a48884b4-fac5-48e6-ac60-5d273ef27d3a" 00:21:40.049 ], 00:21:40.049 "product_name": "Malloc disk", 00:21:40.049 "block_size": 512, 00:21:40.049 "num_blocks": 65536, 00:21:40.049 "uuid": "a48884b4-fac5-48e6-ac60-5d273ef27d3a", 00:21:40.049 "assigned_rate_limits": { 00:21:40.049 "rw_ios_per_sec": 0, 00:21:40.049 "rw_mbytes_per_sec": 0, 00:21:40.049 "r_mbytes_per_sec": 0, 00:21:40.049 "w_mbytes_per_sec": 0 00:21:40.049 }, 00:21:40.049 "claimed": true, 00:21:40.049 "claim_type": "exclusive_write", 00:21:40.049 "zoned": false, 00:21:40.049 "supported_io_types": { 00:21:40.049 "read": true, 00:21:40.049 "write": true, 00:21:40.049 "unmap": true, 00:21:40.049 "write_zeroes": true, 00:21:40.049 "flush": true, 00:21:40.049 "reset": true, 00:21:40.049 "compare": false, 00:21:40.049 "compare_and_write": false, 00:21:40.049 "abort": true, 00:21:40.049 "nvme_admin": false, 00:21:40.049 "nvme_io": false 00:21:40.049 }, 00:21:40.049 "memory_domains": [ 00:21:40.049 { 00:21:40.049 "dma_device_id": "system", 00:21:40.049 "dma_device_type": 1 00:21:40.049 }, 00:21:40.049 { 00:21:40.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.049 "dma_device_type": 2 00:21:40.049 } 00:21:40.049 ], 00:21:40.049 "driver_specific": {} 00:21:40.049 } 00:21:40.049 ] 00:21:40.049 07:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:40.049 07:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:40.049 07:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:40.049 07:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:40.049 07:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:40.049 07:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:40.049 07:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:40.049 07:32:13 
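The waitforbdev helper traced here amounts to two RPCs, both visible in the log; a rough equivalent, with the claimed check mirroring what the bdev_get_bdevs dump reports once bdev_raid_create has named the malloc disk as a base bdev:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Flush pending examine callbacks, then wait up to 2000 ms for the bdev.
$rpc bdev_wait_for_examine
$rpc bdev_get_bdevs -b BaseBdev1 -t 2000 >/dev/null && echo "BaseBdev1 is up"

# The dump here reports the disk as claimed (claim_type exclusive_write)
# once the raid module has taken it as a base bdev.
$rpc bdev_get_bdevs -b BaseBdev1 | jq -r '.[0].claimed'    # expect: true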
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:40.049 07:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:40.049 07:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:40.049 07:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:40.049 07:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.049 07:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:40.049 07:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:40.049 "name": "Existed_Raid", 00:21:40.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.049 "strip_size_kb": 64, 00:21:40.049 "state": "configuring", 00:21:40.049 "raid_level": "raid0", 00:21:40.049 "superblock": false, 00:21:40.049 "num_base_bdevs": 4, 00:21:40.049 "num_base_bdevs_discovered": 1, 00:21:40.049 "num_base_bdevs_operational": 4, 00:21:40.049 "base_bdevs_list": [ 00:21:40.049 { 00:21:40.049 "name": "BaseBdev1", 00:21:40.049 "uuid": "a48884b4-fac5-48e6-ac60-5d273ef27d3a", 00:21:40.049 "is_configured": true, 00:21:40.049 "data_offset": 0, 00:21:40.049 "data_size": 65536 00:21:40.049 }, 00:21:40.049 { 00:21:40.049 "name": "BaseBdev2", 00:21:40.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.049 "is_configured": false, 00:21:40.049 "data_offset": 0, 00:21:40.049 "data_size": 0 00:21:40.049 }, 00:21:40.049 { 00:21:40.049 "name": "BaseBdev3", 00:21:40.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.049 "is_configured": false, 00:21:40.049 "data_offset": 0, 00:21:40.049 "data_size": 0 00:21:40.049 }, 00:21:40.049 { 00:21:40.049 "name": "BaseBdev4", 00:21:40.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.049 "is_configured": false, 00:21:40.049 "data_offset": 0, 00:21:40.049 "data_size": 0 00:21:40.049 } 00:21:40.049 ] 00:21:40.049 }' 00:21:40.049 07:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:40.049 07:32:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.616 07:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:40.874 [2024-07-12 07:32:14.652253] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:40.874 [2024-07-12 07:32:14.652357] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:21:40.875 07:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:41.133 [2024-07-12 07:32:14.848389] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:41.133 [2024-07-12 07:32:14.850879] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:41.133 [2024-07-12 07:32:14.850978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:41.133 [2024-07-12 07:32:14.850989] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:21:41.133 [2024-07-12 07:32:14.851021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:41.133 [2024-07-12 07:32:14.851029] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:41.133 [2024-07-12 07:32:14.851052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:41.133 07:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:41.133 07:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:41.133 07:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:41.133 07:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:41.133 07:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:41.133 07:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:41.133 07:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:41.133 07:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:41.133 07:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:41.133 07:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:41.133 07:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:41.133 07:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:41.133 07:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.133 07:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.391 07:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:41.391 "name": "Existed_Raid", 00:21:41.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.391 "strip_size_kb": 64, 00:21:41.391 "state": "configuring", 00:21:41.391 "raid_level": "raid0", 00:21:41.391 "superblock": false, 00:21:41.391 "num_base_bdevs": 4, 00:21:41.391 "num_base_bdevs_discovered": 1, 00:21:41.391 "num_base_bdevs_operational": 4, 00:21:41.391 "base_bdevs_list": [ 00:21:41.391 { 00:21:41.391 "name": "BaseBdev1", 00:21:41.391 "uuid": "a48884b4-fac5-48e6-ac60-5d273ef27d3a", 00:21:41.391 "is_configured": true, 00:21:41.391 "data_offset": 0, 00:21:41.391 "data_size": 65536 00:21:41.391 }, 00:21:41.391 { 00:21:41.391 "name": "BaseBdev2", 00:21:41.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.391 "is_configured": false, 00:21:41.391 "data_offset": 0, 00:21:41.391 "data_size": 0 00:21:41.391 }, 00:21:41.391 { 00:21:41.391 "name": "BaseBdev3", 00:21:41.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.391 "is_configured": false, 00:21:41.391 "data_offset": 0, 00:21:41.391 "data_size": 0 00:21:41.391 }, 00:21:41.391 { 00:21:41.391 "name": "BaseBdev4", 00:21:41.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.392 "is_configured": false, 00:21:41.392 "data_offset": 0, 00:21:41.392 "data_size": 0 00:21:41.392 } 00:21:41.392 ] 00:21:41.392 }' 00:21:41.392 07:32:15 
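verify_raid_bdev_state, whose locals recur throughout this trace, is at heart one jq filter over bdev_raid_get_bdevs all; a sketch of that core check at this point in the test (one base bdev discovered, array still configuring):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Pull the named array out of the full raid bdev list and compare the
# fields the test asserts on: state and discovered/operational counts.
info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
state=$(jq -r .state <<< "$info")
discovered=$(jq -r .num_base_bdevs_discovered <<< "$info")
[[ $state == configuring && $discovered -eq 1 ]] || echo "unexpected raid state"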
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:41.392 07:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.956 07:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:42.213 [2024-07-12 07:32:15.851263] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:42.213 BaseBdev2 00:21:42.213 07:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:42.213 07:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:21:42.213 07:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:42.213 07:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:42.213 07:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:42.213 07:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:42.213 07:32:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:42.471 07:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:42.471 [ 00:21:42.471 { 00:21:42.471 "name": "BaseBdev2", 00:21:42.471 "aliases": [ 00:21:42.471 "6e346846-c123-4841-895d-4ae1fc27d965" 00:21:42.471 ], 00:21:42.471 "product_name": "Malloc disk", 00:21:42.471 "block_size": 512, 00:21:42.471 "num_blocks": 65536, 00:21:42.471 "uuid": "6e346846-c123-4841-895d-4ae1fc27d965", 00:21:42.471 "assigned_rate_limits": { 00:21:42.471 "rw_ios_per_sec": 0, 00:21:42.471 "rw_mbytes_per_sec": 0, 00:21:42.471 "r_mbytes_per_sec": 0, 00:21:42.471 "w_mbytes_per_sec": 0 00:21:42.471 }, 00:21:42.471 "claimed": true, 00:21:42.471 "claim_type": "exclusive_write", 00:21:42.471 "zoned": false, 00:21:42.471 "supported_io_types": { 00:21:42.471 "read": true, 00:21:42.471 "write": true, 00:21:42.471 "unmap": true, 00:21:42.471 "write_zeroes": true, 00:21:42.471 "flush": true, 00:21:42.471 "reset": true, 00:21:42.471 "compare": false, 00:21:42.471 "compare_and_write": false, 00:21:42.471 "abort": true, 00:21:42.471 "nvme_admin": false, 00:21:42.471 "nvme_io": false 00:21:42.471 }, 00:21:42.471 "memory_domains": [ 00:21:42.471 { 00:21:42.471 "dma_device_id": "system", 00:21:42.471 "dma_device_type": 1 00:21:42.471 }, 00:21:42.471 { 00:21:42.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.471 "dma_device_type": 2 00:21:42.471 } 00:21:42.471 ], 00:21:42.471 "driver_specific": {} 00:21:42.471 } 00:21:42.471 ] 00:21:42.471 07:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:42.471 07:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:42.471 07:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:42.471 07:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:42.471 07:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:42.471 07:32:16 
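Each base bdev in this test is created with bdev_malloc_create 32 512, and the geometry in the dumps follows directly from those two arguments:

# bdev_malloc_create <size_mb> <block_size> -b <name>
#   32 MiB / 512 B = 33554432 / 512 = 65536 blocks per base bdev,
# matching "num_blocks": 65536 in every Malloc disk dump here, and
# (with no superblock) 4 * 65536 = 262144 blocks for the raid0 volume
# once all four base bdevs are assembled.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_malloc_create 32 512 -b BaseBdev2    # as issued for BaseBdev2 above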
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:42.471 07:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:42.471 07:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:42.471 07:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:42.471 07:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:42.471 07:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:42.471 07:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:42.471 07:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:42.729 07:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.729 07:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.729 07:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:42.729 "name": "Existed_Raid", 00:21:42.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.729 "strip_size_kb": 64, 00:21:42.729 "state": "configuring", 00:21:42.729 "raid_level": "raid0", 00:21:42.729 "superblock": false, 00:21:42.729 "num_base_bdevs": 4, 00:21:42.729 "num_base_bdevs_discovered": 2, 00:21:42.729 "num_base_bdevs_operational": 4, 00:21:42.729 "base_bdevs_list": [ 00:21:42.729 { 00:21:42.729 "name": "BaseBdev1", 00:21:42.729 "uuid": "a48884b4-fac5-48e6-ac60-5d273ef27d3a", 00:21:42.729 "is_configured": true, 00:21:42.729 "data_offset": 0, 00:21:42.729 "data_size": 65536 00:21:42.729 }, 00:21:42.729 { 00:21:42.729 "name": "BaseBdev2", 00:21:42.729 "uuid": "6e346846-c123-4841-895d-4ae1fc27d965", 00:21:42.729 "is_configured": true, 00:21:42.729 "data_offset": 0, 00:21:42.729 "data_size": 65536 00:21:42.729 }, 00:21:42.729 { 00:21:42.729 "name": "BaseBdev3", 00:21:42.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.729 "is_configured": false, 00:21:42.729 "data_offset": 0, 00:21:42.729 "data_size": 0 00:21:42.729 }, 00:21:42.729 { 00:21:42.729 "name": "BaseBdev4", 00:21:42.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.729 "is_configured": false, 00:21:42.729 "data_offset": 0, 00:21:42.729 "data_size": 0 00:21:42.729 } 00:21:42.729 ] 00:21:42.729 }' 00:21:42.729 07:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:42.729 07:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.294 07:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:43.552 [2024-07-12 07:32:17.349212] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:43.552 BaseBdev3 00:21:43.552 07:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:43.552 07:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:21:43.552 07:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:43.552 07:32:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:43.552 07:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:43.552 07:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:43.552 07:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:43.810 07:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:44.068 [ 00:21:44.068 { 00:21:44.068 "name": "BaseBdev3", 00:21:44.068 "aliases": [ 00:21:44.068 "0b9cbfef-6173-4624-b69f-354b67cce9d8" 00:21:44.068 ], 00:21:44.068 "product_name": "Malloc disk", 00:21:44.068 "block_size": 512, 00:21:44.068 "num_blocks": 65536, 00:21:44.068 "uuid": "0b9cbfef-6173-4624-b69f-354b67cce9d8", 00:21:44.068 "assigned_rate_limits": { 00:21:44.068 "rw_ios_per_sec": 0, 00:21:44.068 "rw_mbytes_per_sec": 0, 00:21:44.068 "r_mbytes_per_sec": 0, 00:21:44.068 "w_mbytes_per_sec": 0 00:21:44.068 }, 00:21:44.068 "claimed": true, 00:21:44.068 "claim_type": "exclusive_write", 00:21:44.068 "zoned": false, 00:21:44.068 "supported_io_types": { 00:21:44.068 "read": true, 00:21:44.068 "write": true, 00:21:44.068 "unmap": true, 00:21:44.068 "write_zeroes": true, 00:21:44.068 "flush": true, 00:21:44.068 "reset": true, 00:21:44.068 "compare": false, 00:21:44.068 "compare_and_write": false, 00:21:44.068 "abort": true, 00:21:44.068 "nvme_admin": false, 00:21:44.068 "nvme_io": false 00:21:44.068 }, 00:21:44.068 "memory_domains": [ 00:21:44.068 { 00:21:44.068 "dma_device_id": "system", 00:21:44.068 "dma_device_type": 1 00:21:44.068 }, 00:21:44.068 { 00:21:44.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.068 "dma_device_type": 2 00:21:44.068 } 00:21:44.068 ], 00:21:44.068 "driver_specific": {} 00:21:44.068 } 00:21:44.068 ] 00:21:44.068 07:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:44.069 07:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:44.069 07:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:44.069 07:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:44.069 07:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:44.069 07:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:44.069 07:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:44.069 07:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:44.069 07:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:44.069 07:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:44.069 07:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:44.069 07:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:44.069 07:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:44.069 07:32:17 
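The remaining iterations repeat this pattern; once the fourth base bdev is claimed, the array leaves configuring and comes online (seen below as the io device register and blockcnt 262144 notices). A condensed sketch of the whole loop:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Add base bdevs one at a time and watch num_base_bdevs_discovered climb.
for name in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $rpc bdev_get_bdevs -b "$name" >/dev/null 2>&1 ||
        $rpc bdev_malloc_create 32 512 -b "$name"
    $rpc bdev_wait_for_examine
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")
        | "\(.num_base_bdevs_discovered)/\(.num_base_bdevs) \(.state)"'
done
# Expected progression: 1/4 configuring, 2/4 configuring, 3/4 configuring, 4/4 online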
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.069 07:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:44.328 07:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:44.328 "name": "Existed_Raid", 00:21:44.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.328 "strip_size_kb": 64, 00:21:44.328 "state": "configuring", 00:21:44.328 "raid_level": "raid0", 00:21:44.328 "superblock": false, 00:21:44.328 "num_base_bdevs": 4, 00:21:44.328 "num_base_bdevs_discovered": 3, 00:21:44.328 "num_base_bdevs_operational": 4, 00:21:44.328 "base_bdevs_list": [ 00:21:44.328 { 00:21:44.328 "name": "BaseBdev1", 00:21:44.328 "uuid": "a48884b4-fac5-48e6-ac60-5d273ef27d3a", 00:21:44.328 "is_configured": true, 00:21:44.328 "data_offset": 0, 00:21:44.328 "data_size": 65536 00:21:44.328 }, 00:21:44.328 { 00:21:44.328 "name": "BaseBdev2", 00:21:44.328 "uuid": "6e346846-c123-4841-895d-4ae1fc27d965", 00:21:44.328 "is_configured": true, 00:21:44.328 "data_offset": 0, 00:21:44.328 "data_size": 65536 00:21:44.328 }, 00:21:44.328 { 00:21:44.328 "name": "BaseBdev3", 00:21:44.328 "uuid": "0b9cbfef-6173-4624-b69f-354b67cce9d8", 00:21:44.328 "is_configured": true, 00:21:44.328 "data_offset": 0, 00:21:44.328 "data_size": 65536 00:21:44.328 }, 00:21:44.328 { 00:21:44.328 "name": "BaseBdev4", 00:21:44.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.328 "is_configured": false, 00:21:44.328 "data_offset": 0, 00:21:44.328 "data_size": 0 00:21:44.328 } 00:21:44.328 ] 00:21:44.328 }' 00:21:44.328 07:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:44.328 07:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.894 07:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:45.151 [2024-07-12 07:32:18.851740] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:45.151 [2024-07-12 07:32:18.851799] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:21:45.151 [2024-07-12 07:32:18.851808] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:45.151 [2024-07-12 07:32:18.851996] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:21:45.151 [2024-07-12 07:32:18.852448] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:21:45.151 [2024-07-12 07:32:18.852459] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:21:45.151 [2024-07-12 07:32:18.852722] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.151 BaseBdev4 00:21:45.151 07:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:21:45.151 07:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:21:45.151 07:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:45.151 07:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:45.151 07:32:18 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:45.151 07:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:45.152 07:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:45.410 07:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:45.669 [ 00:21:45.669 { 00:21:45.669 "name": "BaseBdev4", 00:21:45.669 "aliases": [ 00:21:45.669 "e088ea6b-45a0-4ffb-916f-5742fdab62a0" 00:21:45.669 ], 00:21:45.669 "product_name": "Malloc disk", 00:21:45.669 "block_size": 512, 00:21:45.669 "num_blocks": 65536, 00:21:45.669 "uuid": "e088ea6b-45a0-4ffb-916f-5742fdab62a0", 00:21:45.669 "assigned_rate_limits": { 00:21:45.669 "rw_ios_per_sec": 0, 00:21:45.669 "rw_mbytes_per_sec": 0, 00:21:45.669 "r_mbytes_per_sec": 0, 00:21:45.669 "w_mbytes_per_sec": 0 00:21:45.669 }, 00:21:45.669 "claimed": true, 00:21:45.669 "claim_type": "exclusive_write", 00:21:45.669 "zoned": false, 00:21:45.669 "supported_io_types": { 00:21:45.669 "read": true, 00:21:45.669 "write": true, 00:21:45.669 "unmap": true, 00:21:45.669 "write_zeroes": true, 00:21:45.669 "flush": true, 00:21:45.669 "reset": true, 00:21:45.669 "compare": false, 00:21:45.669 "compare_and_write": false, 00:21:45.669 "abort": true, 00:21:45.669 "nvme_admin": false, 00:21:45.669 "nvme_io": false 00:21:45.669 }, 00:21:45.669 "memory_domains": [ 00:21:45.669 { 00:21:45.669 "dma_device_id": "system", 00:21:45.669 "dma_device_type": 1 00:21:45.669 }, 00:21:45.669 { 00:21:45.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.669 "dma_device_type": 2 00:21:45.669 } 00:21:45.669 ], 00:21:45.669 "driver_specific": {} 00:21:45.669 } 00:21:45.669 ] 00:21:45.669 07:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:45.669 07:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:45.669 07:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:45.669 07:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:45.669 07:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:45.669 07:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:45.669 07:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:45.669 07:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:45.669 07:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:45.669 07:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:45.669 07:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:45.669 07:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:45.669 07:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:45.669 07:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:21:45.669 07:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.927 07:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:45.927 "name": "Existed_Raid", 00:21:45.927 "uuid": "b04f90b2-8e74-45d7-b50f-29ddef98cb88", 00:21:45.927 "strip_size_kb": 64, 00:21:45.927 "state": "online", 00:21:45.927 "raid_level": "raid0", 00:21:45.927 "superblock": false, 00:21:45.927 "num_base_bdevs": 4, 00:21:45.927 "num_base_bdevs_discovered": 4, 00:21:45.927 "num_base_bdevs_operational": 4, 00:21:45.927 "base_bdevs_list": [ 00:21:45.927 { 00:21:45.927 "name": "BaseBdev1", 00:21:45.927 "uuid": "a48884b4-fac5-48e6-ac60-5d273ef27d3a", 00:21:45.927 "is_configured": true, 00:21:45.927 "data_offset": 0, 00:21:45.927 "data_size": 65536 00:21:45.927 }, 00:21:45.927 { 00:21:45.927 "name": "BaseBdev2", 00:21:45.927 "uuid": "6e346846-c123-4841-895d-4ae1fc27d965", 00:21:45.927 "is_configured": true, 00:21:45.927 "data_offset": 0, 00:21:45.927 "data_size": 65536 00:21:45.927 }, 00:21:45.927 { 00:21:45.927 "name": "BaseBdev3", 00:21:45.927 "uuid": "0b9cbfef-6173-4624-b69f-354b67cce9d8", 00:21:45.927 "is_configured": true, 00:21:45.927 "data_offset": 0, 00:21:45.927 "data_size": 65536 00:21:45.927 }, 00:21:45.927 { 00:21:45.927 "name": "BaseBdev4", 00:21:45.927 "uuid": "e088ea6b-45a0-4ffb-916f-5742fdab62a0", 00:21:45.927 "is_configured": true, 00:21:45.927 "data_offset": 0, 00:21:45.927 "data_size": 65536 00:21:45.927 } 00:21:45.927 ] 00:21:45.927 }' 00:21:45.927 07:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:45.927 07:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.495 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:46.495 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:46.495 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:46.495 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:46.495 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:46.495 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:46.496 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:46.496 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:46.496 [2024-07-12 07:32:20.304470] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:46.496 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:46.496 "name": "Existed_Raid", 00:21:46.496 "aliases": [ 00:21:46.496 "b04f90b2-8e74-45d7-b50f-29ddef98cb88" 00:21:46.496 ], 00:21:46.496 "product_name": "Raid Volume", 00:21:46.496 "block_size": 512, 00:21:46.496 "num_blocks": 262144, 00:21:46.496 "uuid": "b04f90b2-8e74-45d7-b50f-29ddef98cb88", 00:21:46.496 "assigned_rate_limits": { 00:21:46.496 "rw_ios_per_sec": 0, 00:21:46.496 "rw_mbytes_per_sec": 0, 00:21:46.496 "r_mbytes_per_sec": 0, 00:21:46.496 "w_mbytes_per_sec": 0 00:21:46.496 }, 00:21:46.496 "claimed": false, 00:21:46.496 "zoned": false, 00:21:46.496 "supported_io_types": { 
00:21:46.496 "read": true, 00:21:46.496 "write": true, 00:21:46.496 "unmap": true, 00:21:46.496 "write_zeroes": true, 00:21:46.496 "flush": true, 00:21:46.496 "reset": true, 00:21:46.496 "compare": false, 00:21:46.496 "compare_and_write": false, 00:21:46.496 "abort": false, 00:21:46.496 "nvme_admin": false, 00:21:46.496 "nvme_io": false 00:21:46.496 }, 00:21:46.496 "memory_domains": [ 00:21:46.496 { 00:21:46.496 "dma_device_id": "system", 00:21:46.496 "dma_device_type": 1 00:21:46.496 }, 00:21:46.496 { 00:21:46.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.496 "dma_device_type": 2 00:21:46.496 }, 00:21:46.496 { 00:21:46.496 "dma_device_id": "system", 00:21:46.496 "dma_device_type": 1 00:21:46.496 }, 00:21:46.496 { 00:21:46.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.496 "dma_device_type": 2 00:21:46.496 }, 00:21:46.496 { 00:21:46.496 "dma_device_id": "system", 00:21:46.496 "dma_device_type": 1 00:21:46.496 }, 00:21:46.496 { 00:21:46.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.496 "dma_device_type": 2 00:21:46.496 }, 00:21:46.496 { 00:21:46.496 "dma_device_id": "system", 00:21:46.496 "dma_device_type": 1 00:21:46.496 }, 00:21:46.496 { 00:21:46.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.496 "dma_device_type": 2 00:21:46.496 } 00:21:46.496 ], 00:21:46.496 "driver_specific": { 00:21:46.496 "raid": { 00:21:46.496 "uuid": "b04f90b2-8e74-45d7-b50f-29ddef98cb88", 00:21:46.496 "strip_size_kb": 64, 00:21:46.496 "state": "online", 00:21:46.496 "raid_level": "raid0", 00:21:46.496 "superblock": false, 00:21:46.496 "num_base_bdevs": 4, 00:21:46.496 "num_base_bdevs_discovered": 4, 00:21:46.496 "num_base_bdevs_operational": 4, 00:21:46.496 "base_bdevs_list": [ 00:21:46.496 { 00:21:46.496 "name": "BaseBdev1", 00:21:46.496 "uuid": "a48884b4-fac5-48e6-ac60-5d273ef27d3a", 00:21:46.496 "is_configured": true, 00:21:46.496 "data_offset": 0, 00:21:46.496 "data_size": 65536 00:21:46.496 }, 00:21:46.496 { 00:21:46.496 "name": "BaseBdev2", 00:21:46.496 "uuid": "6e346846-c123-4841-895d-4ae1fc27d965", 00:21:46.496 "is_configured": true, 00:21:46.496 "data_offset": 0, 00:21:46.496 "data_size": 65536 00:21:46.496 }, 00:21:46.496 { 00:21:46.496 "name": "BaseBdev3", 00:21:46.496 "uuid": "0b9cbfef-6173-4624-b69f-354b67cce9d8", 00:21:46.496 "is_configured": true, 00:21:46.496 "data_offset": 0, 00:21:46.496 "data_size": 65536 00:21:46.496 }, 00:21:46.496 { 00:21:46.496 "name": "BaseBdev4", 00:21:46.496 "uuid": "e088ea6b-45a0-4ffb-916f-5742fdab62a0", 00:21:46.496 "is_configured": true, 00:21:46.496 "data_offset": 0, 00:21:46.496 "data_size": 65536 00:21:46.496 } 00:21:46.496 ] 00:21:46.496 } 00:21:46.496 } 00:21:46.496 }' 00:21:46.496 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:46.496 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:46.496 BaseBdev2 00:21:46.496 BaseBdev3 00:21:46.496 BaseBdev4' 00:21:46.496 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:46.496 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:46.754 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:46.754 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:46.754 
"name": "BaseBdev1", 00:21:46.754 "aliases": [ 00:21:46.754 "a48884b4-fac5-48e6-ac60-5d273ef27d3a" 00:21:46.754 ], 00:21:46.754 "product_name": "Malloc disk", 00:21:46.754 "block_size": 512, 00:21:46.754 "num_blocks": 65536, 00:21:46.754 "uuid": "a48884b4-fac5-48e6-ac60-5d273ef27d3a", 00:21:46.754 "assigned_rate_limits": { 00:21:46.754 "rw_ios_per_sec": 0, 00:21:46.754 "rw_mbytes_per_sec": 0, 00:21:46.754 "r_mbytes_per_sec": 0, 00:21:46.754 "w_mbytes_per_sec": 0 00:21:46.754 }, 00:21:46.754 "claimed": true, 00:21:46.754 "claim_type": "exclusive_write", 00:21:46.754 "zoned": false, 00:21:46.754 "supported_io_types": { 00:21:46.754 "read": true, 00:21:46.754 "write": true, 00:21:46.754 "unmap": true, 00:21:46.754 "write_zeroes": true, 00:21:46.754 "flush": true, 00:21:46.754 "reset": true, 00:21:46.754 "compare": false, 00:21:46.754 "compare_and_write": false, 00:21:46.755 "abort": true, 00:21:46.755 "nvme_admin": false, 00:21:46.755 "nvme_io": false 00:21:46.755 }, 00:21:46.755 "memory_domains": [ 00:21:46.755 { 00:21:46.755 "dma_device_id": "system", 00:21:46.755 "dma_device_type": 1 00:21:46.755 }, 00:21:46.755 { 00:21:46.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.755 "dma_device_type": 2 00:21:46.755 } 00:21:46.755 ], 00:21:46.755 "driver_specific": {} 00:21:46.755 }' 00:21:46.755 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:46.755 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:47.013 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:47.013 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:47.013 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:47.013 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:47.013 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:47.013 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:47.014 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:47.014 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:47.377 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:47.377 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:47.377 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:47.377 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:47.377 07:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:47.688 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:47.688 "name": "BaseBdev2", 00:21:47.688 "aliases": [ 00:21:47.688 "6e346846-c123-4841-895d-4ae1fc27d965" 00:21:47.688 ], 00:21:47.688 "product_name": "Malloc disk", 00:21:47.688 "block_size": 512, 00:21:47.688 "num_blocks": 65536, 00:21:47.688 "uuid": "6e346846-c123-4841-895d-4ae1fc27d965", 00:21:47.688 "assigned_rate_limits": { 00:21:47.688 "rw_ios_per_sec": 0, 00:21:47.688 "rw_mbytes_per_sec": 0, 00:21:47.688 "r_mbytes_per_sec": 0, 00:21:47.688 "w_mbytes_per_sec": 0 00:21:47.688 }, 00:21:47.688 
"claimed": true, 00:21:47.688 "claim_type": "exclusive_write", 00:21:47.688 "zoned": false, 00:21:47.688 "supported_io_types": { 00:21:47.688 "read": true, 00:21:47.688 "write": true, 00:21:47.688 "unmap": true, 00:21:47.688 "write_zeroes": true, 00:21:47.688 "flush": true, 00:21:47.688 "reset": true, 00:21:47.688 "compare": false, 00:21:47.688 "compare_and_write": false, 00:21:47.688 "abort": true, 00:21:47.688 "nvme_admin": false, 00:21:47.688 "nvme_io": false 00:21:47.688 }, 00:21:47.688 "memory_domains": [ 00:21:47.688 { 00:21:47.688 "dma_device_id": "system", 00:21:47.688 "dma_device_type": 1 00:21:47.688 }, 00:21:47.688 { 00:21:47.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.688 "dma_device_type": 2 00:21:47.688 } 00:21:47.688 ], 00:21:47.688 "driver_specific": {} 00:21:47.688 }' 00:21:47.688 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:47.688 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:47.688 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:47.688 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:47.688 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:47.688 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:47.688 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:47.688 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:47.688 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:47.688 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:47.688 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:47.947 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:47.947 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:47.947 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:47.947 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:47.947 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:47.947 "name": "BaseBdev3", 00:21:47.947 "aliases": [ 00:21:47.947 "0b9cbfef-6173-4624-b69f-354b67cce9d8" 00:21:47.947 ], 00:21:47.947 "product_name": "Malloc disk", 00:21:47.947 "block_size": 512, 00:21:47.947 "num_blocks": 65536, 00:21:47.947 "uuid": "0b9cbfef-6173-4624-b69f-354b67cce9d8", 00:21:47.947 "assigned_rate_limits": { 00:21:47.947 "rw_ios_per_sec": 0, 00:21:47.947 "rw_mbytes_per_sec": 0, 00:21:47.947 "r_mbytes_per_sec": 0, 00:21:47.947 "w_mbytes_per_sec": 0 00:21:47.947 }, 00:21:47.947 "claimed": true, 00:21:47.947 "claim_type": "exclusive_write", 00:21:47.947 "zoned": false, 00:21:47.947 "supported_io_types": { 00:21:47.947 "read": true, 00:21:47.947 "write": true, 00:21:47.947 "unmap": true, 00:21:47.947 "write_zeroes": true, 00:21:47.947 "flush": true, 00:21:47.947 "reset": true, 00:21:47.947 "compare": false, 00:21:47.947 "compare_and_write": false, 00:21:47.947 "abort": true, 00:21:47.947 "nvme_admin": false, 00:21:47.947 "nvme_io": false 00:21:47.947 }, 00:21:47.947 
"memory_domains": [ 00:21:47.947 { 00:21:47.947 "dma_device_id": "system", 00:21:47.947 "dma_device_type": 1 00:21:47.947 }, 00:21:47.947 { 00:21:47.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.947 "dma_device_type": 2 00:21:47.947 } 00:21:47.947 ], 00:21:47.947 "driver_specific": {} 00:21:47.947 }' 00:21:47.947 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:48.206 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:48.206 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:48.206 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:48.206 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:48.206 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:48.206 07:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:48.206 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:48.206 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:48.206 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:48.465 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:48.465 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:48.465 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:48.465 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:48.465 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:48.727 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:48.727 "name": "BaseBdev4", 00:21:48.727 "aliases": [ 00:21:48.727 "e088ea6b-45a0-4ffb-916f-5742fdab62a0" 00:21:48.727 ], 00:21:48.727 "product_name": "Malloc disk", 00:21:48.727 "block_size": 512, 00:21:48.727 "num_blocks": 65536, 00:21:48.727 "uuid": "e088ea6b-45a0-4ffb-916f-5742fdab62a0", 00:21:48.727 "assigned_rate_limits": { 00:21:48.727 "rw_ios_per_sec": 0, 00:21:48.727 "rw_mbytes_per_sec": 0, 00:21:48.727 "r_mbytes_per_sec": 0, 00:21:48.727 "w_mbytes_per_sec": 0 00:21:48.727 }, 00:21:48.727 "claimed": true, 00:21:48.727 "claim_type": "exclusive_write", 00:21:48.727 "zoned": false, 00:21:48.727 "supported_io_types": { 00:21:48.727 "read": true, 00:21:48.727 "write": true, 00:21:48.727 "unmap": true, 00:21:48.727 "write_zeroes": true, 00:21:48.727 "flush": true, 00:21:48.727 "reset": true, 00:21:48.727 "compare": false, 00:21:48.727 "compare_and_write": false, 00:21:48.727 "abort": true, 00:21:48.727 "nvme_admin": false, 00:21:48.727 "nvme_io": false 00:21:48.727 }, 00:21:48.727 "memory_domains": [ 00:21:48.727 { 00:21:48.727 "dma_device_id": "system", 00:21:48.727 "dma_device_type": 1 00:21:48.727 }, 00:21:48.727 { 00:21:48.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.727 "dma_device_type": 2 00:21:48.727 } 00:21:48.727 ], 00:21:48.727 "driver_specific": {} 00:21:48.727 }' 00:21:48.727 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:48.727 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
jq .block_size 00:21:48.727 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:48.727 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:48.727 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:48.986 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:48.986 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:48.986 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:48.986 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:48.986 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:48.986 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:48.986 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:48.986 07:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:49.245 [2024-07-12 07:32:23.082419] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:49.245 [2024-07-12 07:32:23.082472] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:49.245 [2024-07-12 07:32:23.082573] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:49.245 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:49.245 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:21:49.245 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:49.245 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:49.245 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:21:49.245 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:21:49.245 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:49.246 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:21:49.246 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:49.505 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:49.505 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:49.505 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:49.505 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:49.505 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:49.505 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:49.505 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.505 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:21:49.505 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:49.505 "name": "Existed_Raid", 00:21:49.505 "uuid": "b04f90b2-8e74-45d7-b50f-29ddef98cb88", 00:21:49.505 "strip_size_kb": 64, 00:21:49.505 "state": "offline", 00:21:49.505 "raid_level": "raid0", 00:21:49.505 "superblock": false, 00:21:49.505 "num_base_bdevs": 4, 00:21:49.505 "num_base_bdevs_discovered": 3, 00:21:49.505 "num_base_bdevs_operational": 3, 00:21:49.505 "base_bdevs_list": [ 00:21:49.505 { 00:21:49.505 "name": null, 00:21:49.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.505 "is_configured": false, 00:21:49.505 "data_offset": 0, 00:21:49.505 "data_size": 65536 00:21:49.505 }, 00:21:49.505 { 00:21:49.505 "name": "BaseBdev2", 00:21:49.505 "uuid": "6e346846-c123-4841-895d-4ae1fc27d965", 00:21:49.505 "is_configured": true, 00:21:49.505 "data_offset": 0, 00:21:49.505 "data_size": 65536 00:21:49.505 }, 00:21:49.505 { 00:21:49.505 "name": "BaseBdev3", 00:21:49.505 "uuid": "0b9cbfef-6173-4624-b69f-354b67cce9d8", 00:21:49.505 "is_configured": true, 00:21:49.505 "data_offset": 0, 00:21:49.505 "data_size": 65536 00:21:49.505 }, 00:21:49.505 { 00:21:49.505 "name": "BaseBdev4", 00:21:49.505 "uuid": "e088ea6b-45a0-4ffb-916f-5742fdab62a0", 00:21:49.505 "is_configured": true, 00:21:49.505 "data_offset": 0, 00:21:49.505 "data_size": 65536 00:21:49.505 } 00:21:49.505 ] 00:21:49.505 }' 00:21:49.505 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:49.505 07:32:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.074 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:50.074 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:50.074 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.074 07:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:50.332 07:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:50.332 07:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:50.332 07:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:50.591 [2024-07-12 07:32:24.419640] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:50.591 07:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:50.591 07:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:50.591 07:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:50.591 07:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.850 07:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:50.850 07:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:50.850 07:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:51.107 [2024-07-12 07:32:24.900605] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:51.107 07:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:51.107 07:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:51.107 07:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.107 07:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:51.365 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:51.365 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:51.365 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:51.623 [2024-07-12 07:32:25.401883] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:51.623 [2024-07-12 07:32:25.401973] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:21:51.623 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:51.623 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:51.623 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:51.623 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.881 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:51.881 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:51.881 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:51.881 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:51.881 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:51.881 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:52.140 BaseBdev2 00:21:52.140 07:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:52.140 07:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:21:52.140 07:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:52.140 07:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:52.140 07:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:52.140 07:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:52.140 07:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:52.398 07:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:52.656 [ 00:21:52.656 { 00:21:52.656 "name": "BaseBdev2", 00:21:52.656 "aliases": [ 00:21:52.656 "025a8950-892d-4cc9-8e20-7f304a27c68d" 00:21:52.656 ], 00:21:52.656 "product_name": "Malloc disk", 00:21:52.656 "block_size": 512, 00:21:52.656 "num_blocks": 65536, 00:21:52.656 "uuid": "025a8950-892d-4cc9-8e20-7f304a27c68d", 00:21:52.656 "assigned_rate_limits": { 00:21:52.656 "rw_ios_per_sec": 0, 00:21:52.656 "rw_mbytes_per_sec": 0, 00:21:52.656 "r_mbytes_per_sec": 0, 00:21:52.656 "w_mbytes_per_sec": 0 00:21:52.656 }, 00:21:52.656 "claimed": false, 00:21:52.656 "zoned": false, 00:21:52.656 "supported_io_types": { 00:21:52.656 "read": true, 00:21:52.656 "write": true, 00:21:52.656 "unmap": true, 00:21:52.656 "write_zeroes": true, 00:21:52.656 "flush": true, 00:21:52.656 "reset": true, 00:21:52.656 "compare": false, 00:21:52.656 "compare_and_write": false, 00:21:52.656 "abort": true, 00:21:52.656 "nvme_admin": false, 00:21:52.656 "nvme_io": false 00:21:52.656 }, 00:21:52.656 "memory_domains": [ 00:21:52.656 { 00:21:52.656 "dma_device_id": "system", 00:21:52.656 "dma_device_type": 1 00:21:52.656 }, 00:21:52.656 { 00:21:52.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.656 "dma_device_type": 2 00:21:52.656 } 00:21:52.656 ], 00:21:52.656 "driver_specific": {} 00:21:52.656 } 00:21:52.656 ] 00:21:52.656 07:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:52.656 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:52.656 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:52.656 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:52.914 BaseBdev3 00:21:52.914 07:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:52.914 07:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:21:52.914 07:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:52.914 07:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:52.914 07:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:52.914 07:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:52.914 07:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:52.914 07:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:53.172 [ 00:21:53.172 { 00:21:53.172 "name": "BaseBdev3", 00:21:53.172 "aliases": [ 00:21:53.172 "b6ef0f91-c8fe-4192-8284-44860bc3f183" 00:21:53.172 ], 00:21:53.172 "product_name": "Malloc disk", 00:21:53.172 "block_size": 512, 00:21:53.172 "num_blocks": 65536, 00:21:53.172 "uuid": "b6ef0f91-c8fe-4192-8284-44860bc3f183", 00:21:53.172 "assigned_rate_limits": { 00:21:53.172 "rw_ios_per_sec": 0, 00:21:53.172 "rw_mbytes_per_sec": 0, 00:21:53.173 "r_mbytes_per_sec": 0, 00:21:53.173 "w_mbytes_per_sec": 0 00:21:53.173 }, 00:21:53.173 
"claimed": false, 00:21:53.173 "zoned": false, 00:21:53.173 "supported_io_types": { 00:21:53.173 "read": true, 00:21:53.173 "write": true, 00:21:53.173 "unmap": true, 00:21:53.173 "write_zeroes": true, 00:21:53.173 "flush": true, 00:21:53.173 "reset": true, 00:21:53.173 "compare": false, 00:21:53.173 "compare_and_write": false, 00:21:53.173 "abort": true, 00:21:53.173 "nvme_admin": false, 00:21:53.173 "nvme_io": false 00:21:53.173 }, 00:21:53.173 "memory_domains": [ 00:21:53.173 { 00:21:53.173 "dma_device_id": "system", 00:21:53.173 "dma_device_type": 1 00:21:53.173 }, 00:21:53.173 { 00:21:53.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.173 "dma_device_type": 2 00:21:53.173 } 00:21:53.173 ], 00:21:53.173 "driver_specific": {} 00:21:53.173 } 00:21:53.173 ] 00:21:53.173 07:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:53.173 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:53.173 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:53.173 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:53.430 BaseBdev4 00:21:53.430 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:21:53.430 07:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:21:53.430 07:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:53.430 07:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:53.431 07:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:53.431 07:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:53.431 07:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:53.689 07:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:53.948 [ 00:21:53.948 { 00:21:53.948 "name": "BaseBdev4", 00:21:53.948 "aliases": [ 00:21:53.948 "c1cf2c15-68c2-4785-bfdc-7a2a1d2d1393" 00:21:53.948 ], 00:21:53.948 "product_name": "Malloc disk", 00:21:53.948 "block_size": 512, 00:21:53.948 "num_blocks": 65536, 00:21:53.948 "uuid": "c1cf2c15-68c2-4785-bfdc-7a2a1d2d1393", 00:21:53.948 "assigned_rate_limits": { 00:21:53.948 "rw_ios_per_sec": 0, 00:21:53.948 "rw_mbytes_per_sec": 0, 00:21:53.948 "r_mbytes_per_sec": 0, 00:21:53.948 "w_mbytes_per_sec": 0 00:21:53.948 }, 00:21:53.948 "claimed": false, 00:21:53.948 "zoned": false, 00:21:53.948 "supported_io_types": { 00:21:53.948 "read": true, 00:21:53.948 "write": true, 00:21:53.948 "unmap": true, 00:21:53.948 "write_zeroes": true, 00:21:53.948 "flush": true, 00:21:53.948 "reset": true, 00:21:53.948 "compare": false, 00:21:53.948 "compare_and_write": false, 00:21:53.948 "abort": true, 00:21:53.948 "nvme_admin": false, 00:21:53.948 "nvme_io": false 00:21:53.948 }, 00:21:53.948 "memory_domains": [ 00:21:53.948 { 00:21:53.948 "dma_device_id": "system", 00:21:53.948 "dma_device_type": 1 00:21:53.948 }, 00:21:53.948 { 00:21:53.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:21:53.948 "dma_device_type": 2 00:21:53.948 } 00:21:53.948 ], 00:21:53.948 "driver_specific": {} 00:21:53.948 } 00:21:53.948 ] 00:21:53.948 07:32:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:53.948 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:53.948 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:53.948 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:54.208 [2024-07-12 07:32:27.915131] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:54.208 [2024-07-12 07:32:27.915244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:54.208 [2024-07-12 07:32:27.915270] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:54.208 [2024-07-12 07:32:27.917768] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:54.208 [2024-07-12 07:32:27.917822] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:54.208 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:54.208 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:54.208 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:54.208 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:54.208 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:54.208 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:54.208 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:54.208 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:54.208 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:54.208 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:54.208 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.208 07:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:54.467 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:54.467 "name": "Existed_Raid", 00:21:54.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.467 "strip_size_kb": 64, 00:21:54.467 "state": "configuring", 00:21:54.467 "raid_level": "raid0", 00:21:54.467 "superblock": false, 00:21:54.467 "num_base_bdevs": 4, 00:21:54.467 "num_base_bdevs_discovered": 3, 00:21:54.467 "num_base_bdevs_operational": 4, 00:21:54.467 "base_bdevs_list": [ 00:21:54.467 { 00:21:54.467 "name": "BaseBdev1", 00:21:54.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.467 "is_configured": false, 00:21:54.467 "data_offset": 0, 00:21:54.467 "data_size": 0 00:21:54.467 }, 00:21:54.467 { 
00:21:54.467 "name": "BaseBdev2", 00:21:54.467 "uuid": "025a8950-892d-4cc9-8e20-7f304a27c68d", 00:21:54.467 "is_configured": true, 00:21:54.467 "data_offset": 0, 00:21:54.467 "data_size": 65536 00:21:54.467 }, 00:21:54.467 { 00:21:54.467 "name": "BaseBdev3", 00:21:54.467 "uuid": "b6ef0f91-c8fe-4192-8284-44860bc3f183", 00:21:54.467 "is_configured": true, 00:21:54.467 "data_offset": 0, 00:21:54.467 "data_size": 65536 00:21:54.467 }, 00:21:54.467 { 00:21:54.467 "name": "BaseBdev4", 00:21:54.467 "uuid": "c1cf2c15-68c2-4785-bfdc-7a2a1d2d1393", 00:21:54.467 "is_configured": true, 00:21:54.467 "data_offset": 0, 00:21:54.467 "data_size": 65536 00:21:54.467 } 00:21:54.467 ] 00:21:54.467 }' 00:21:54.467 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:54.467 07:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.034 07:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:55.294 [2024-07-12 07:32:29.023376] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:55.294 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:55.294 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:55.294 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:55.294 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:55.294 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:55.294 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:55.294 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:55.294 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:55.294 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:55.294 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:55.294 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.294 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:55.552 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:55.552 "name": "Existed_Raid", 00:21:55.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.552 "strip_size_kb": 64, 00:21:55.552 "state": "configuring", 00:21:55.552 "raid_level": "raid0", 00:21:55.552 "superblock": false, 00:21:55.552 "num_base_bdevs": 4, 00:21:55.552 "num_base_bdevs_discovered": 2, 00:21:55.552 "num_base_bdevs_operational": 4, 00:21:55.552 "base_bdevs_list": [ 00:21:55.552 { 00:21:55.552 "name": "BaseBdev1", 00:21:55.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.552 "is_configured": false, 00:21:55.552 "data_offset": 0, 00:21:55.552 "data_size": 0 00:21:55.552 }, 00:21:55.552 { 00:21:55.552 "name": null, 00:21:55.552 "uuid": "025a8950-892d-4cc9-8e20-7f304a27c68d", 00:21:55.553 "is_configured": false, 00:21:55.553 
"data_offset": 0, 00:21:55.553 "data_size": 65536 00:21:55.553 }, 00:21:55.553 { 00:21:55.553 "name": "BaseBdev3", 00:21:55.553 "uuid": "b6ef0f91-c8fe-4192-8284-44860bc3f183", 00:21:55.553 "is_configured": true, 00:21:55.553 "data_offset": 0, 00:21:55.553 "data_size": 65536 00:21:55.553 }, 00:21:55.553 { 00:21:55.553 "name": "BaseBdev4", 00:21:55.553 "uuid": "c1cf2c15-68c2-4785-bfdc-7a2a1d2d1393", 00:21:55.553 "is_configured": true, 00:21:55.553 "data_offset": 0, 00:21:55.553 "data_size": 65536 00:21:55.553 } 00:21:55.553 ] 00:21:55.553 }' 00:21:55.553 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:55.553 07:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.120 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:56.120 07:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.379 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:56.379 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:56.637 [2024-07-12 07:32:30.341029] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:56.637 BaseBdev1 00:21:56.637 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:56.637 07:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:21:56.637 07:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:56.637 07:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:56.637 07:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:56.637 07:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:56.637 07:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:56.895 07:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:56.895 [ 00:21:56.895 { 00:21:56.895 "name": "BaseBdev1", 00:21:56.895 "aliases": [ 00:21:56.895 "866ad694-b51d-496e-b1c5-bf8072729d3b" 00:21:56.895 ], 00:21:56.895 "product_name": "Malloc disk", 00:21:56.895 "block_size": 512, 00:21:56.895 "num_blocks": 65536, 00:21:56.895 "uuid": "866ad694-b51d-496e-b1c5-bf8072729d3b", 00:21:56.895 "assigned_rate_limits": { 00:21:56.895 "rw_ios_per_sec": 0, 00:21:56.895 "rw_mbytes_per_sec": 0, 00:21:56.895 "r_mbytes_per_sec": 0, 00:21:56.895 "w_mbytes_per_sec": 0 00:21:56.895 }, 00:21:56.895 "claimed": true, 00:21:56.895 "claim_type": "exclusive_write", 00:21:56.895 "zoned": false, 00:21:56.895 "supported_io_types": { 00:21:56.895 "read": true, 00:21:56.895 "write": true, 00:21:56.895 "unmap": true, 00:21:56.895 "write_zeroes": true, 00:21:56.895 "flush": true, 00:21:56.895 "reset": true, 00:21:56.895 "compare": false, 00:21:56.895 "compare_and_write": false, 00:21:56.895 "abort": true, 00:21:56.895 "nvme_admin": false, 
00:21:56.895 "nvme_io": false 00:21:56.895 }, 00:21:56.895 "memory_domains": [ 00:21:56.895 { 00:21:56.895 "dma_device_id": "system", 00:21:56.895 "dma_device_type": 1 00:21:56.895 }, 00:21:56.895 { 00:21:56.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.895 "dma_device_type": 2 00:21:56.895 } 00:21:56.895 ], 00:21:56.895 "driver_specific": {} 00:21:56.895 } 00:21:56.895 ] 00:21:57.153 07:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:57.153 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:57.153 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:57.153 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:57.153 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:57.153 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:57.153 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:57.153 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:57.153 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:57.153 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:57.153 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:57.153 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.153 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.153 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:57.153 "name": "Existed_Raid", 00:21:57.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.153 "strip_size_kb": 64, 00:21:57.153 "state": "configuring", 00:21:57.153 "raid_level": "raid0", 00:21:57.153 "superblock": false, 00:21:57.153 "num_base_bdevs": 4, 00:21:57.153 "num_base_bdevs_discovered": 3, 00:21:57.153 "num_base_bdevs_operational": 4, 00:21:57.153 "base_bdevs_list": [ 00:21:57.153 { 00:21:57.153 "name": "BaseBdev1", 00:21:57.153 "uuid": "866ad694-b51d-496e-b1c5-bf8072729d3b", 00:21:57.153 "is_configured": true, 00:21:57.153 "data_offset": 0, 00:21:57.153 "data_size": 65536 00:21:57.153 }, 00:21:57.153 { 00:21:57.153 "name": null, 00:21:57.153 "uuid": "025a8950-892d-4cc9-8e20-7f304a27c68d", 00:21:57.153 "is_configured": false, 00:21:57.153 "data_offset": 0, 00:21:57.153 "data_size": 65536 00:21:57.153 }, 00:21:57.153 { 00:21:57.153 "name": "BaseBdev3", 00:21:57.153 "uuid": "b6ef0f91-c8fe-4192-8284-44860bc3f183", 00:21:57.153 "is_configured": true, 00:21:57.153 "data_offset": 0, 00:21:57.153 "data_size": 65536 00:21:57.153 }, 00:21:57.153 { 00:21:57.153 "name": "BaseBdev4", 00:21:57.153 "uuid": "c1cf2c15-68c2-4785-bfdc-7a2a1d2d1393", 00:21:57.153 "is_configured": true, 00:21:57.153 "data_offset": 0, 00:21:57.153 "data_size": 65536 00:21:57.153 } 00:21:57.153 ] 00:21:57.153 }' 00:21:57.153 07:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:57.153 07:32:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:57.719 07:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.719 07:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:57.977 07:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:57.977 07:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:58.236 [2024-07-12 07:32:31.958023] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:58.236 07:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:58.236 07:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:58.236 07:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:58.236 07:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:58.236 07:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:58.236 07:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:58.236 07:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:58.236 07:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:58.236 07:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:58.236 07:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:58.236 07:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.236 07:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.494 07:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:58.494 "name": "Existed_Raid", 00:21:58.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.494 "strip_size_kb": 64, 00:21:58.494 "state": "configuring", 00:21:58.494 "raid_level": "raid0", 00:21:58.494 "superblock": false, 00:21:58.494 "num_base_bdevs": 4, 00:21:58.494 "num_base_bdevs_discovered": 2, 00:21:58.494 "num_base_bdevs_operational": 4, 00:21:58.494 "base_bdevs_list": [ 00:21:58.494 { 00:21:58.494 "name": "BaseBdev1", 00:21:58.494 "uuid": "866ad694-b51d-496e-b1c5-bf8072729d3b", 00:21:58.494 "is_configured": true, 00:21:58.494 "data_offset": 0, 00:21:58.494 "data_size": 65536 00:21:58.494 }, 00:21:58.494 { 00:21:58.494 "name": null, 00:21:58.494 "uuid": "025a8950-892d-4cc9-8e20-7f304a27c68d", 00:21:58.494 "is_configured": false, 00:21:58.494 "data_offset": 0, 00:21:58.494 "data_size": 65536 00:21:58.494 }, 00:21:58.494 { 00:21:58.494 "name": null, 00:21:58.494 "uuid": "b6ef0f91-c8fe-4192-8284-44860bc3f183", 00:21:58.494 "is_configured": false, 00:21:58.494 "data_offset": 0, 00:21:58.494 "data_size": 65536 00:21:58.494 }, 00:21:58.494 { 00:21:58.494 "name": "BaseBdev4", 00:21:58.494 "uuid": "c1cf2c15-68c2-4785-bfdc-7a2a1d2d1393", 00:21:58.494 "is_configured": true, 
00:21:58.494 "data_offset": 0, 00:21:58.494 "data_size": 65536 00:21:58.494 } 00:21:58.494 ] 00:21:58.494 }' 00:21:58.494 07:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:58.494 07:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.061 07:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:59.061 07:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.319 07:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:59.319 07:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:59.319 [2024-07-12 07:32:33.197921] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:59.583 07:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:59.583 07:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:59.583 07:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:59.583 07:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:59.583 07:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:59.583 07:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:59.583 07:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:59.583 07:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:59.583 07:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:59.583 07:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:59.583 07:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.583 07:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.583 07:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:59.583 "name": "Existed_Raid", 00:21:59.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.583 "strip_size_kb": 64, 00:21:59.583 "state": "configuring", 00:21:59.583 "raid_level": "raid0", 00:21:59.583 "superblock": false, 00:21:59.583 "num_base_bdevs": 4, 00:21:59.583 "num_base_bdevs_discovered": 3, 00:21:59.583 "num_base_bdevs_operational": 4, 00:21:59.583 "base_bdevs_list": [ 00:21:59.583 { 00:21:59.583 "name": "BaseBdev1", 00:21:59.583 "uuid": "866ad694-b51d-496e-b1c5-bf8072729d3b", 00:21:59.583 "is_configured": true, 00:21:59.583 "data_offset": 0, 00:21:59.583 "data_size": 65536 00:21:59.583 }, 00:21:59.583 { 00:21:59.583 "name": null, 00:21:59.583 "uuid": "025a8950-892d-4cc9-8e20-7f304a27c68d", 00:21:59.583 "is_configured": false, 00:21:59.583 "data_offset": 0, 00:21:59.583 "data_size": 65536 00:21:59.583 }, 00:21:59.583 { 00:21:59.583 "name": "BaseBdev3", 00:21:59.583 
"uuid": "b6ef0f91-c8fe-4192-8284-44860bc3f183", 00:21:59.583 "is_configured": true, 00:21:59.583 "data_offset": 0, 00:21:59.583 "data_size": 65536 00:21:59.583 }, 00:21:59.583 { 00:21:59.583 "name": "BaseBdev4", 00:21:59.583 "uuid": "c1cf2c15-68c2-4785-bfdc-7a2a1d2d1393", 00:21:59.583 "is_configured": true, 00:21:59.583 "data_offset": 0, 00:21:59.583 "data_size": 65536 00:21:59.583 } 00:21:59.583 ] 00:21:59.583 }' 00:21:59.583 07:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:59.583 07:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.159 07:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.159 07:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:00.416 07:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:00.416 07:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:00.673 [2024-07-12 07:32:34.482293] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:00.673 07:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:00.673 07:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:00.673 07:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:00.673 07:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:00.673 07:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:00.673 07:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:00.673 07:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:00.673 07:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:00.673 07:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:00.674 07:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:00.674 07:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.674 07:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.931 07:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:00.931 "name": "Existed_Raid", 00:22:00.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.931 "strip_size_kb": 64, 00:22:00.931 "state": "configuring", 00:22:00.931 "raid_level": "raid0", 00:22:00.931 "superblock": false, 00:22:00.931 "num_base_bdevs": 4, 00:22:00.931 "num_base_bdevs_discovered": 2, 00:22:00.931 "num_base_bdevs_operational": 4, 00:22:00.931 "base_bdevs_list": [ 00:22:00.931 { 00:22:00.931 "name": null, 00:22:00.931 "uuid": "866ad694-b51d-496e-b1c5-bf8072729d3b", 00:22:00.931 "is_configured": false, 00:22:00.931 "data_offset": 0, 00:22:00.931 "data_size": 65536 00:22:00.931 }, 00:22:00.931 { 
00:22:00.931 "name": null, 00:22:00.931 "uuid": "025a8950-892d-4cc9-8e20-7f304a27c68d", 00:22:00.931 "is_configured": false, 00:22:00.931 "data_offset": 0, 00:22:00.931 "data_size": 65536 00:22:00.931 }, 00:22:00.931 { 00:22:00.931 "name": "BaseBdev3", 00:22:00.931 "uuid": "b6ef0f91-c8fe-4192-8284-44860bc3f183", 00:22:00.931 "is_configured": true, 00:22:00.931 "data_offset": 0, 00:22:00.931 "data_size": 65536 00:22:00.931 }, 00:22:00.931 { 00:22:00.931 "name": "BaseBdev4", 00:22:00.931 "uuid": "c1cf2c15-68c2-4785-bfdc-7a2a1d2d1393", 00:22:00.931 "is_configured": true, 00:22:00.931 "data_offset": 0, 00:22:00.931 "data_size": 65536 00:22:00.931 } 00:22:00.931 ] 00:22:00.931 }' 00:22:00.931 07:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:00.931 07:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.497 07:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.497 07:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:01.755 07:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:01.755 07:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:02.013 [2024-07-12 07:32:35.690356] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:02.013 07:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:02.013 07:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:02.013 07:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:02.013 07:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:02.013 07:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:02.013 07:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:02.013 07:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:02.013 07:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:02.013 07:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:02.013 07:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:02.013 07:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.013 07:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.270 07:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:02.270 "name": "Existed_Raid", 00:22:02.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.270 "strip_size_kb": 64, 00:22:02.270 "state": "configuring", 00:22:02.270 "raid_level": "raid0", 00:22:02.270 "superblock": false, 00:22:02.270 "num_base_bdevs": 4, 00:22:02.270 "num_base_bdevs_discovered": 3, 00:22:02.270 
"num_base_bdevs_operational": 4, 00:22:02.270 "base_bdevs_list": [ 00:22:02.270 { 00:22:02.270 "name": null, 00:22:02.270 "uuid": "866ad694-b51d-496e-b1c5-bf8072729d3b", 00:22:02.270 "is_configured": false, 00:22:02.270 "data_offset": 0, 00:22:02.270 "data_size": 65536 00:22:02.270 }, 00:22:02.270 { 00:22:02.270 "name": "BaseBdev2", 00:22:02.270 "uuid": "025a8950-892d-4cc9-8e20-7f304a27c68d", 00:22:02.270 "is_configured": true, 00:22:02.270 "data_offset": 0, 00:22:02.270 "data_size": 65536 00:22:02.270 }, 00:22:02.270 { 00:22:02.270 "name": "BaseBdev3", 00:22:02.270 "uuid": "b6ef0f91-c8fe-4192-8284-44860bc3f183", 00:22:02.270 "is_configured": true, 00:22:02.270 "data_offset": 0, 00:22:02.270 "data_size": 65536 00:22:02.270 }, 00:22:02.270 { 00:22:02.270 "name": "BaseBdev4", 00:22:02.270 "uuid": "c1cf2c15-68c2-4785-bfdc-7a2a1d2d1393", 00:22:02.270 "is_configured": true, 00:22:02.270 "data_offset": 0, 00:22:02.270 "data_size": 65536 00:22:02.270 } 00:22:02.270 ] 00:22:02.270 }' 00:22:02.270 07:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:02.270 07:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.836 07:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.836 07:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:03.094 07:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:03.094 07:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.094 07:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:03.351 07:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 866ad694-b51d-496e-b1c5-bf8072729d3b 00:22:03.609 [2024-07-12 07:32:37.324156] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:03.609 [2024-07-12 07:32:37.324209] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:22:03.609 [2024-07-12 07:32:37.324218] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:22:03.609 [2024-07-12 07:32:37.324305] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:22:03.609 [2024-07-12 07:32:37.324640] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:22:03.609 [2024-07-12 07:32:37.324651] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008180 00:22:03.609 [2024-07-12 07:32:37.324851] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.609 NewBaseBdev 00:22:03.609 07:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:03.609 07:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:22:03.609 07:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:03.609 07:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 
00:22:03.609 07:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:03.609 07:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:03.609 07:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:03.867 07:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:04.125 [ 00:22:04.125 { 00:22:04.125 "name": "NewBaseBdev", 00:22:04.125 "aliases": [ 00:22:04.125 "866ad694-b51d-496e-b1c5-bf8072729d3b" 00:22:04.125 ], 00:22:04.125 "product_name": "Malloc disk", 00:22:04.125 "block_size": 512, 00:22:04.125 "num_blocks": 65536, 00:22:04.125 "uuid": "866ad694-b51d-496e-b1c5-bf8072729d3b", 00:22:04.125 "assigned_rate_limits": { 00:22:04.125 "rw_ios_per_sec": 0, 00:22:04.125 "rw_mbytes_per_sec": 0, 00:22:04.125 "r_mbytes_per_sec": 0, 00:22:04.125 "w_mbytes_per_sec": 0 00:22:04.125 }, 00:22:04.125 "claimed": true, 00:22:04.125 "claim_type": "exclusive_write", 00:22:04.125 "zoned": false, 00:22:04.125 "supported_io_types": { 00:22:04.125 "read": true, 00:22:04.125 "write": true, 00:22:04.125 "unmap": true, 00:22:04.125 "write_zeroes": true, 00:22:04.125 "flush": true, 00:22:04.125 "reset": true, 00:22:04.125 "compare": false, 00:22:04.125 "compare_and_write": false, 00:22:04.125 "abort": true, 00:22:04.125 "nvme_admin": false, 00:22:04.125 "nvme_io": false 00:22:04.125 }, 00:22:04.125 "memory_domains": [ 00:22:04.125 { 00:22:04.125 "dma_device_id": "system", 00:22:04.125 "dma_device_type": 1 00:22:04.125 }, 00:22:04.125 { 00:22:04.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.125 "dma_device_type": 2 00:22:04.125 } 00:22:04.125 ], 00:22:04.125 "driver_specific": {} 00:22:04.125 } 00:22:04.125 ] 00:22:04.125 07:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:22:04.125 07:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:04.125 07:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:04.125 07:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:04.125 07:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:04.125 07:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:04.125 07:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:04.125 07:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:04.125 07:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:04.125 07:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:04.125 07:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:04.125 07:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.125 07:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:04.383 
07:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:04.383 "name": "Existed_Raid", 00:22:04.383 "uuid": "a2c81e59-70cc-4544-bf53-236d77fc1549", 00:22:04.383 "strip_size_kb": 64, 00:22:04.383 "state": "online", 00:22:04.383 "raid_level": "raid0", 00:22:04.383 "superblock": false, 00:22:04.383 "num_base_bdevs": 4, 00:22:04.383 "num_base_bdevs_discovered": 4, 00:22:04.383 "num_base_bdevs_operational": 4, 00:22:04.383 "base_bdevs_list": [ 00:22:04.383 { 00:22:04.383 "name": "NewBaseBdev", 00:22:04.383 "uuid": "866ad694-b51d-496e-b1c5-bf8072729d3b", 00:22:04.383 "is_configured": true, 00:22:04.383 "data_offset": 0, 00:22:04.383 "data_size": 65536 00:22:04.383 }, 00:22:04.383 { 00:22:04.383 "name": "BaseBdev2", 00:22:04.383 "uuid": "025a8950-892d-4cc9-8e20-7f304a27c68d", 00:22:04.383 "is_configured": true, 00:22:04.383 "data_offset": 0, 00:22:04.383 "data_size": 65536 00:22:04.383 }, 00:22:04.383 { 00:22:04.383 "name": "BaseBdev3", 00:22:04.383 "uuid": "b6ef0f91-c8fe-4192-8284-44860bc3f183", 00:22:04.383 "is_configured": true, 00:22:04.383 "data_offset": 0, 00:22:04.383 "data_size": 65536 00:22:04.383 }, 00:22:04.383 { 00:22:04.383 "name": "BaseBdev4", 00:22:04.383 "uuid": "c1cf2c15-68c2-4785-bfdc-7a2a1d2d1393", 00:22:04.383 "is_configured": true, 00:22:04.383 "data_offset": 0, 00:22:04.383 "data_size": 65536 00:22:04.383 } 00:22:04.383 ] 00:22:04.383 }' 00:22:04.383 07:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:04.383 07:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.950 07:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:04.950 07:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:04.950 07:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:04.950 07:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:04.950 07:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:04.950 07:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:04.950 07:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:04.950 07:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:04.950 [2024-07-12 07:32:38.828904] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:05.208 07:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:05.208 "name": "Existed_Raid", 00:22:05.208 "aliases": [ 00:22:05.208 "a2c81e59-70cc-4544-bf53-236d77fc1549" 00:22:05.208 ], 00:22:05.208 "product_name": "Raid Volume", 00:22:05.208 "block_size": 512, 00:22:05.208 "num_blocks": 262144, 00:22:05.208 "uuid": "a2c81e59-70cc-4544-bf53-236d77fc1549", 00:22:05.208 "assigned_rate_limits": { 00:22:05.208 "rw_ios_per_sec": 0, 00:22:05.208 "rw_mbytes_per_sec": 0, 00:22:05.208 "r_mbytes_per_sec": 0, 00:22:05.208 "w_mbytes_per_sec": 0 00:22:05.208 }, 00:22:05.208 "claimed": false, 00:22:05.208 "zoned": false, 00:22:05.208 "supported_io_types": { 00:22:05.208 "read": true, 00:22:05.208 "write": true, 00:22:05.208 "unmap": true, 00:22:05.208 "write_zeroes": true, 00:22:05.208 "flush": true, 
00:22:05.208 "reset": true, 00:22:05.208 "compare": false, 00:22:05.208 "compare_and_write": false, 00:22:05.208 "abort": false, 00:22:05.208 "nvme_admin": false, 00:22:05.208 "nvme_io": false 00:22:05.208 }, 00:22:05.208 "memory_domains": [ 00:22:05.208 { 00:22:05.208 "dma_device_id": "system", 00:22:05.208 "dma_device_type": 1 00:22:05.208 }, 00:22:05.208 { 00:22:05.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.208 "dma_device_type": 2 00:22:05.208 }, 00:22:05.208 { 00:22:05.208 "dma_device_id": "system", 00:22:05.208 "dma_device_type": 1 00:22:05.208 }, 00:22:05.208 { 00:22:05.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.208 "dma_device_type": 2 00:22:05.208 }, 00:22:05.208 { 00:22:05.208 "dma_device_id": "system", 00:22:05.208 "dma_device_type": 1 00:22:05.208 }, 00:22:05.208 { 00:22:05.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.208 "dma_device_type": 2 00:22:05.208 }, 00:22:05.208 { 00:22:05.208 "dma_device_id": "system", 00:22:05.208 "dma_device_type": 1 00:22:05.208 }, 00:22:05.208 { 00:22:05.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.208 "dma_device_type": 2 00:22:05.208 } 00:22:05.208 ], 00:22:05.208 "driver_specific": { 00:22:05.208 "raid": { 00:22:05.208 "uuid": "a2c81e59-70cc-4544-bf53-236d77fc1549", 00:22:05.208 "strip_size_kb": 64, 00:22:05.208 "state": "online", 00:22:05.208 "raid_level": "raid0", 00:22:05.208 "superblock": false, 00:22:05.208 "num_base_bdevs": 4, 00:22:05.208 "num_base_bdevs_discovered": 4, 00:22:05.208 "num_base_bdevs_operational": 4, 00:22:05.208 "base_bdevs_list": [ 00:22:05.208 { 00:22:05.208 "name": "NewBaseBdev", 00:22:05.208 "uuid": "866ad694-b51d-496e-b1c5-bf8072729d3b", 00:22:05.208 "is_configured": true, 00:22:05.208 "data_offset": 0, 00:22:05.208 "data_size": 65536 00:22:05.208 }, 00:22:05.208 { 00:22:05.208 "name": "BaseBdev2", 00:22:05.208 "uuid": "025a8950-892d-4cc9-8e20-7f304a27c68d", 00:22:05.208 "is_configured": true, 00:22:05.208 "data_offset": 0, 00:22:05.208 "data_size": 65536 00:22:05.208 }, 00:22:05.208 { 00:22:05.208 "name": "BaseBdev3", 00:22:05.208 "uuid": "b6ef0f91-c8fe-4192-8284-44860bc3f183", 00:22:05.208 "is_configured": true, 00:22:05.208 "data_offset": 0, 00:22:05.208 "data_size": 65536 00:22:05.208 }, 00:22:05.208 { 00:22:05.208 "name": "BaseBdev4", 00:22:05.208 "uuid": "c1cf2c15-68c2-4785-bfdc-7a2a1d2d1393", 00:22:05.208 "is_configured": true, 00:22:05.208 "data_offset": 0, 00:22:05.208 "data_size": 65536 00:22:05.208 } 00:22:05.208 ] 00:22:05.208 } 00:22:05.208 } 00:22:05.208 }' 00:22:05.208 07:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:05.208 07:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:05.208 BaseBdev2 00:22:05.208 BaseBdev3 00:22:05.208 BaseBdev4' 00:22:05.208 07:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:05.208 07:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:05.208 07:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:05.208 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:05.208 "name": "NewBaseBdev", 00:22:05.208 "aliases": [ 00:22:05.208 "866ad694-b51d-496e-b1c5-bf8072729d3b" 00:22:05.208 ], 00:22:05.208 "product_name": 
"Malloc disk", 00:22:05.208 "block_size": 512, 00:22:05.208 "num_blocks": 65536, 00:22:05.208 "uuid": "866ad694-b51d-496e-b1c5-bf8072729d3b", 00:22:05.208 "assigned_rate_limits": { 00:22:05.208 "rw_ios_per_sec": 0, 00:22:05.208 "rw_mbytes_per_sec": 0, 00:22:05.208 "r_mbytes_per_sec": 0, 00:22:05.208 "w_mbytes_per_sec": 0 00:22:05.208 }, 00:22:05.208 "claimed": true, 00:22:05.208 "claim_type": "exclusive_write", 00:22:05.208 "zoned": false, 00:22:05.208 "supported_io_types": { 00:22:05.208 "read": true, 00:22:05.208 "write": true, 00:22:05.208 "unmap": true, 00:22:05.208 "write_zeroes": true, 00:22:05.208 "flush": true, 00:22:05.208 "reset": true, 00:22:05.208 "compare": false, 00:22:05.208 "compare_and_write": false, 00:22:05.208 "abort": true, 00:22:05.208 "nvme_admin": false, 00:22:05.208 "nvme_io": false 00:22:05.208 }, 00:22:05.208 "memory_domains": [ 00:22:05.208 { 00:22:05.208 "dma_device_id": "system", 00:22:05.208 "dma_device_type": 1 00:22:05.208 }, 00:22:05.208 { 00:22:05.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.208 "dma_device_type": 2 00:22:05.208 } 00:22:05.208 ], 00:22:05.208 "driver_specific": {} 00:22:05.208 }' 00:22:05.208 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.466 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.466 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:05.466 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.466 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.466 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:05.466 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:05.466 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:05.466 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:05.466 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:05.723 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:05.723 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:05.723 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:05.723 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:05.723 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:05.980 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:05.980 "name": "BaseBdev2", 00:22:05.980 "aliases": [ 00:22:05.980 "025a8950-892d-4cc9-8e20-7f304a27c68d" 00:22:05.980 ], 00:22:05.980 "product_name": "Malloc disk", 00:22:05.980 "block_size": 512, 00:22:05.980 "num_blocks": 65536, 00:22:05.980 "uuid": "025a8950-892d-4cc9-8e20-7f304a27c68d", 00:22:05.980 "assigned_rate_limits": { 00:22:05.980 "rw_ios_per_sec": 0, 00:22:05.980 "rw_mbytes_per_sec": 0, 00:22:05.980 "r_mbytes_per_sec": 0, 00:22:05.980 "w_mbytes_per_sec": 0 00:22:05.980 }, 00:22:05.980 "claimed": true, 00:22:05.980 "claim_type": "exclusive_write", 00:22:05.980 "zoned": false, 00:22:05.980 "supported_io_types": { 00:22:05.980 "read": 
true, 00:22:05.980 "write": true, 00:22:05.980 "unmap": true, 00:22:05.980 "write_zeroes": true, 00:22:05.980 "flush": true, 00:22:05.980 "reset": true, 00:22:05.981 "compare": false, 00:22:05.981 "compare_and_write": false, 00:22:05.981 "abort": true, 00:22:05.981 "nvme_admin": false, 00:22:05.981 "nvme_io": false 00:22:05.981 }, 00:22:05.981 "memory_domains": [ 00:22:05.981 { 00:22:05.981 "dma_device_id": "system", 00:22:05.981 "dma_device_type": 1 00:22:05.981 }, 00:22:05.981 { 00:22:05.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.981 "dma_device_type": 2 00:22:05.981 } 00:22:05.981 ], 00:22:05.981 "driver_specific": {} 00:22:05.981 }' 00:22:05.981 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.981 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.981 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:05.981 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.981 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.981 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:05.981 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:06.238 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:06.238 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:06.238 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.238 07:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.238 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:06.238 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:06.238 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:06.238 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:06.495 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:06.495 "name": "BaseBdev3", 00:22:06.495 "aliases": [ 00:22:06.495 "b6ef0f91-c8fe-4192-8284-44860bc3f183" 00:22:06.495 ], 00:22:06.495 "product_name": "Malloc disk", 00:22:06.495 "block_size": 512, 00:22:06.495 "num_blocks": 65536, 00:22:06.495 "uuid": "b6ef0f91-c8fe-4192-8284-44860bc3f183", 00:22:06.495 "assigned_rate_limits": { 00:22:06.495 "rw_ios_per_sec": 0, 00:22:06.495 "rw_mbytes_per_sec": 0, 00:22:06.495 "r_mbytes_per_sec": 0, 00:22:06.495 "w_mbytes_per_sec": 0 00:22:06.495 }, 00:22:06.495 "claimed": true, 00:22:06.495 "claim_type": "exclusive_write", 00:22:06.495 "zoned": false, 00:22:06.495 "supported_io_types": { 00:22:06.495 "read": true, 00:22:06.495 "write": true, 00:22:06.495 "unmap": true, 00:22:06.495 "write_zeroes": true, 00:22:06.495 "flush": true, 00:22:06.495 "reset": true, 00:22:06.495 "compare": false, 00:22:06.495 "compare_and_write": false, 00:22:06.495 "abort": true, 00:22:06.495 "nvme_admin": false, 00:22:06.495 "nvme_io": false 00:22:06.495 }, 00:22:06.495 "memory_domains": [ 00:22:06.495 { 00:22:06.495 "dma_device_id": "system", 00:22:06.495 "dma_device_type": 1 00:22:06.495 }, 00:22:06.495 { 00:22:06.495 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.495 "dma_device_type": 2 00:22:06.495 } 00:22:06.495 ], 00:22:06.495 "driver_specific": {} 00:22:06.495 }' 00:22:06.495 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:06.495 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:06.495 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:06.495 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:06.752 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:06.752 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:06.752 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:06.752 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:06.752 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:06.752 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.752 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:07.009 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:07.009 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:07.009 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:07.009 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:07.009 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:07.009 "name": "BaseBdev4", 00:22:07.009 "aliases": [ 00:22:07.009 "c1cf2c15-68c2-4785-bfdc-7a2a1d2d1393" 00:22:07.010 ], 00:22:07.010 "product_name": "Malloc disk", 00:22:07.010 "block_size": 512, 00:22:07.010 "num_blocks": 65536, 00:22:07.010 "uuid": "c1cf2c15-68c2-4785-bfdc-7a2a1d2d1393", 00:22:07.010 "assigned_rate_limits": { 00:22:07.010 "rw_ios_per_sec": 0, 00:22:07.010 "rw_mbytes_per_sec": 0, 00:22:07.010 "r_mbytes_per_sec": 0, 00:22:07.010 "w_mbytes_per_sec": 0 00:22:07.010 }, 00:22:07.010 "claimed": true, 00:22:07.010 "claim_type": "exclusive_write", 00:22:07.010 "zoned": false, 00:22:07.010 "supported_io_types": { 00:22:07.010 "read": true, 00:22:07.010 "write": true, 00:22:07.010 "unmap": true, 00:22:07.010 "write_zeroes": true, 00:22:07.010 "flush": true, 00:22:07.010 "reset": true, 00:22:07.010 "compare": false, 00:22:07.010 "compare_and_write": false, 00:22:07.010 "abort": true, 00:22:07.010 "nvme_admin": false, 00:22:07.010 "nvme_io": false 00:22:07.010 }, 00:22:07.010 "memory_domains": [ 00:22:07.010 { 00:22:07.010 "dma_device_id": "system", 00:22:07.010 "dma_device_type": 1 00:22:07.010 }, 00:22:07.010 { 00:22:07.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.010 "dma_device_type": 2 00:22:07.010 } 00:22:07.010 ], 00:22:07.010 "driver_specific": {} 00:22:07.010 }' 00:22:07.010 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:07.010 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:07.267 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:07.267 07:32:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:07.267 07:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:07.267 07:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:07.267 07:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:07.267 07:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:07.267 07:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:07.267 07:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:07.267 07:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:07.525 07:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:07.525 07:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:07.784 [2024-07-12 07:32:41.449965] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:07.784 [2024-07-12 07:32:41.450012] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:07.784 [2024-07-12 07:32:41.450128] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:07.784 [2024-07-12 07:32:41.450216] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:07.784 [2024-07-12 07:32:41.450231] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name Existed_Raid, state offline 00:22:07.784 07:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 144170 00:22:07.784 07:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 144170 ']' 00:22:07.784 07:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 144170 00:22:07.784 07:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:22:07.784 07:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:07.784 07:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 144170 00:22:07.784 killing process with pid 144170 00:22:07.784 07:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:07.784 07:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:07.784 07:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 144170' 00:22:07.784 07:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 144170 00:22:07.784 07:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 144170 00:22:07.784 [2024-07-12 07:32:41.495135] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:07.784 [2024-07-12 07:32:41.572879] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:08.351 ************************************ 00:22:08.351 END TEST raid_state_function_test 00:22:08.351 ************************************ 00:22:08.351 07:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:22:08.351 00:22:08.351 
real 0m31.603s 00:22:08.351 user 0m58.060s 00:22:08.351 sys 0m5.492s 00:22:08.351 07:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:08.351 07:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.351 07:32:42 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:22:08.351 07:32:42 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:22:08.351 07:32:42 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:08.351 07:32:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:08.351 ************************************ 00:22:08.351 START TEST raid_state_function_test_sb 00:22:08.351 ************************************ 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 4 true 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:08.351 07:32:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=145247 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 145247' 00:22:08.351 Process raid pid: 145247 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 145247 /var/tmp/spdk-raid.sock 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 145247 ']' 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:08.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:08.351 07:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.351 [2024-07-12 07:32:42.130295] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
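For readers tracing this run: the RPC sequence this test drives can be reproduced by hand. A minimal bash sketch, reusing the exact socket, names, sizes, and flags that appear in this trace, and assuming bdev_svc is already listening; note the test itself registers the raid before its base bdevs exist (hence the "doesn't exist now" debug lines that follow), while this sketch creates the bases first for simplicity:

    # RPC helper and socket exactly as used by this run.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Create the four 32 MiB malloc base bdevs (512-byte blocks, 65536 blocks
    # each) that the trace reports as BaseBdev1..BaseBdev4.
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b "BaseBdev$i"
    done
    # Assemble them into a raid0 bdev with a 64 KiB strip size (-z 64) and an
    # on-disk superblock (-s), as bdev_raid.sh@250 does in this trace.
    $rpc bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # Query the raid's state: "configuring" while base bdevs are missing,
    # "online" once all four are claimed.
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
    # Tear down again, as bdev_raid.sh@252/@260 do between configuration attempts.
    $rpc bdev_raid_delete Existed_Raid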
00:22:08.351 [2024-07-12 07:32:42.130563] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.609 [2024-07-12 07:32:42.293044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.609 [2024-07-12 07:32:42.384104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.609 [2024-07-12 07:32:42.465641] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:09.545 07:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:09.545 07:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:22:09.545 07:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:09.545 [2024-07-12 07:32:43.375751] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:09.545 [2024-07-12 07:32:43.375876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:09.545 [2024-07-12 07:32:43.375889] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:09.545 [2024-07-12 07:32:43.375910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:09.545 [2024-07-12 07:32:43.375917] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:09.545 [2024-07-12 07:32:43.375967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:09.545 [2024-07-12 07:32:43.375976] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:09.545 [2024-07-12 07:32:43.376002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:09.545 07:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:09.545 07:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:09.545 07:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:09.545 07:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:09.545 07:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:09.545 07:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:09.545 07:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:09.545 07:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:09.545 07:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:09.545 07:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:09.545 07:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.545 07:32:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.803 07:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:09.803 "name": "Existed_Raid", 00:22:09.803 "uuid": "47c7c25f-c399-4ca1-909c-54893610f50d", 00:22:09.803 "strip_size_kb": 64, 00:22:09.803 "state": "configuring", 00:22:09.803 "raid_level": "raid0", 00:22:09.803 "superblock": true, 00:22:09.803 "num_base_bdevs": 4, 00:22:09.803 "num_base_bdevs_discovered": 0, 00:22:09.803 "num_base_bdevs_operational": 4, 00:22:09.803 "base_bdevs_list": [ 00:22:09.803 { 00:22:09.803 "name": "BaseBdev1", 00:22:09.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.803 "is_configured": false, 00:22:09.803 "data_offset": 0, 00:22:09.803 "data_size": 0 00:22:09.803 }, 00:22:09.803 { 00:22:09.803 "name": "BaseBdev2", 00:22:09.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.803 "is_configured": false, 00:22:09.803 "data_offset": 0, 00:22:09.803 "data_size": 0 00:22:09.803 }, 00:22:09.803 { 00:22:09.803 "name": "BaseBdev3", 00:22:09.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.803 "is_configured": false, 00:22:09.803 "data_offset": 0, 00:22:09.803 "data_size": 0 00:22:09.803 }, 00:22:09.803 { 00:22:09.803 "name": "BaseBdev4", 00:22:09.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.803 "is_configured": false, 00:22:09.803 "data_offset": 0, 00:22:09.803 "data_size": 0 00:22:09.803 } 00:22:09.803 ] 00:22:09.803 }' 00:22:09.803 07:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:09.803 07:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.369 07:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:10.628 [2024-07-12 07:32:44.463771] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:10.628 [2024-07-12 07:32:44.463858] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:22:10.628 07:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:10.887 [2024-07-12 07:32:44.663840] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:10.887 [2024-07-12 07:32:44.663936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:10.887 [2024-07-12 07:32:44.663948] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:10.887 [2024-07-12 07:32:44.663975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:10.887 [2024-07-12 07:32:44.663983] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:10.887 [2024-07-12 07:32:44.664001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:10.887 [2024-07-12 07:32:44.664008] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:10.887 [2024-07-12 07:32:44.664039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:10.887 07:32:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:11.156 [2024-07-12 07:32:44.960080] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:11.156 BaseBdev1 00:22:11.156 07:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:11.156 07:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:22:11.156 07:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:11.156 07:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:11.156 07:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:11.156 07:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:11.156 07:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:11.435 07:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:11.693 [ 00:22:11.694 { 00:22:11.694 "name": "BaseBdev1", 00:22:11.694 "aliases": [ 00:22:11.694 "4314344e-f043-46ae-bb31-7aa71e2eb029" 00:22:11.694 ], 00:22:11.694 "product_name": "Malloc disk", 00:22:11.694 "block_size": 512, 00:22:11.694 "num_blocks": 65536, 00:22:11.694 "uuid": "4314344e-f043-46ae-bb31-7aa71e2eb029", 00:22:11.694 "assigned_rate_limits": { 00:22:11.694 "rw_ios_per_sec": 0, 00:22:11.694 "rw_mbytes_per_sec": 0, 00:22:11.694 "r_mbytes_per_sec": 0, 00:22:11.694 "w_mbytes_per_sec": 0 00:22:11.694 }, 00:22:11.694 "claimed": true, 00:22:11.694 "claim_type": "exclusive_write", 00:22:11.694 "zoned": false, 00:22:11.694 "supported_io_types": { 00:22:11.694 "read": true, 00:22:11.694 "write": true, 00:22:11.694 "unmap": true, 00:22:11.694 "write_zeroes": true, 00:22:11.694 "flush": true, 00:22:11.694 "reset": true, 00:22:11.694 "compare": false, 00:22:11.694 "compare_and_write": false, 00:22:11.694 "abort": true, 00:22:11.694 "nvme_admin": false, 00:22:11.694 "nvme_io": false 00:22:11.694 }, 00:22:11.694 "memory_domains": [ 00:22:11.694 { 00:22:11.694 "dma_device_id": "system", 00:22:11.694 "dma_device_type": 1 00:22:11.694 }, 00:22:11.694 { 00:22:11.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.694 "dma_device_type": 2 00:22:11.694 } 00:22:11.694 ], 00:22:11.694 "driver_specific": {} 00:22:11.694 } 00:22:11.694 ] 00:22:11.694 07:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:11.694 07:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:11.694 07:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:11.694 07:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:11.694 07:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:11.694 07:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:11.694 07:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:22:11.694 07:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:11.694 07:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:11.694 07:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:11.694 07:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:11.694 07:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.694 07:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.951 07:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:11.951 "name": "Existed_Raid", 00:22:11.951 "uuid": "8d9d86be-1a28-4426-a713-ae7a52cff68f", 00:22:11.951 "strip_size_kb": 64, 00:22:11.951 "state": "configuring", 00:22:11.951 "raid_level": "raid0", 00:22:11.951 "superblock": true, 00:22:11.951 "num_base_bdevs": 4, 00:22:11.951 "num_base_bdevs_discovered": 1, 00:22:11.951 "num_base_bdevs_operational": 4, 00:22:11.951 "base_bdevs_list": [ 00:22:11.951 { 00:22:11.951 "name": "BaseBdev1", 00:22:11.951 "uuid": "4314344e-f043-46ae-bb31-7aa71e2eb029", 00:22:11.951 "is_configured": true, 00:22:11.951 "data_offset": 2048, 00:22:11.951 "data_size": 63488 00:22:11.951 }, 00:22:11.951 { 00:22:11.951 "name": "BaseBdev2", 00:22:11.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.951 "is_configured": false, 00:22:11.951 "data_offset": 0, 00:22:11.951 "data_size": 0 00:22:11.951 }, 00:22:11.951 { 00:22:11.951 "name": "BaseBdev3", 00:22:11.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.951 "is_configured": false, 00:22:11.951 "data_offset": 0, 00:22:11.951 "data_size": 0 00:22:11.951 }, 00:22:11.951 { 00:22:11.951 "name": "BaseBdev4", 00:22:11.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.951 "is_configured": false, 00:22:11.951 "data_offset": 0, 00:22:11.951 "data_size": 0 00:22:11.951 } 00:22:11.951 ] 00:22:11.951 }' 00:22:11.951 07:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:11.951 07:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.210 07:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:12.469 [2024-07-12 07:32:46.220402] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:12.469 [2024-07-12 07:32:46.220522] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:22:12.469 07:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:12.728 [2024-07-12 07:32:46.468540] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:12.728 [2024-07-12 07:32:46.471007] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:12.728 [2024-07-12 07:32:46.471113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:12.728 
[2024-07-12 07:32:46.471125] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:12.728 [2024-07-12 07:32:46.471151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:12.728 [2024-07-12 07:32:46.471159] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:12.728 [2024-07-12 07:32:46.471196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:12.728 07:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:12.728 07:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:12.728 07:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:12.728 07:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:12.728 07:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:12.728 07:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:12.728 07:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:12.728 07:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:12.728 07:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:12.728 07:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:12.728 07:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:12.728 07:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:12.728 07:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.728 07:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.987 07:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:12.987 "name": "Existed_Raid", 00:22:12.987 "uuid": "ae9c1336-9f2a-4695-ae87-5118ab4d5ea8", 00:22:12.987 "strip_size_kb": 64, 00:22:12.987 "state": "configuring", 00:22:12.987 "raid_level": "raid0", 00:22:12.987 "superblock": true, 00:22:12.987 "num_base_bdevs": 4, 00:22:12.987 "num_base_bdevs_discovered": 1, 00:22:12.987 "num_base_bdevs_operational": 4, 00:22:12.987 "base_bdevs_list": [ 00:22:12.987 { 00:22:12.987 "name": "BaseBdev1", 00:22:12.987 "uuid": "4314344e-f043-46ae-bb31-7aa71e2eb029", 00:22:12.987 "is_configured": true, 00:22:12.987 "data_offset": 2048, 00:22:12.987 "data_size": 63488 00:22:12.987 }, 00:22:12.987 { 00:22:12.987 "name": "BaseBdev2", 00:22:12.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.987 "is_configured": false, 00:22:12.987 "data_offset": 0, 00:22:12.987 "data_size": 0 00:22:12.987 }, 00:22:12.987 { 00:22:12.987 "name": "BaseBdev3", 00:22:12.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.987 "is_configured": false, 00:22:12.987 "data_offset": 0, 00:22:12.987 "data_size": 0 00:22:12.987 }, 00:22:12.987 { 00:22:12.987 "name": "BaseBdev4", 00:22:12.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.987 "is_configured": 
false, 00:22:12.987 "data_offset": 0, 00:22:12.987 "data_size": 0 00:22:12.987 } 00:22:12.987 ] 00:22:12.987 }' 00:22:12.987 07:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:12.987 07:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.554 07:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:13.814 [2024-07-12 07:32:47.659363] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:13.814 BaseBdev2 00:22:13.814 07:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:13.814 07:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:22:13.814 07:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:13.814 07:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:13.814 07:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:13.814 07:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:13.814 07:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:14.072 07:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:14.331 [ 00:22:14.331 { 00:22:14.331 "name": "BaseBdev2", 00:22:14.331 "aliases": [ 00:22:14.331 "e82c6022-4a20-4683-8792-042b20571a6d" 00:22:14.331 ], 00:22:14.331 "product_name": "Malloc disk", 00:22:14.331 "block_size": 512, 00:22:14.331 "num_blocks": 65536, 00:22:14.331 "uuid": "e82c6022-4a20-4683-8792-042b20571a6d", 00:22:14.331 "assigned_rate_limits": { 00:22:14.331 "rw_ios_per_sec": 0, 00:22:14.331 "rw_mbytes_per_sec": 0, 00:22:14.331 "r_mbytes_per_sec": 0, 00:22:14.331 "w_mbytes_per_sec": 0 00:22:14.331 }, 00:22:14.331 "claimed": true, 00:22:14.331 "claim_type": "exclusive_write", 00:22:14.331 "zoned": false, 00:22:14.331 "supported_io_types": { 00:22:14.331 "read": true, 00:22:14.331 "write": true, 00:22:14.331 "unmap": true, 00:22:14.331 "write_zeroes": true, 00:22:14.331 "flush": true, 00:22:14.331 "reset": true, 00:22:14.331 "compare": false, 00:22:14.331 "compare_and_write": false, 00:22:14.331 "abort": true, 00:22:14.331 "nvme_admin": false, 00:22:14.331 "nvme_io": false 00:22:14.331 }, 00:22:14.331 "memory_domains": [ 00:22:14.331 { 00:22:14.331 "dma_device_id": "system", 00:22:14.331 "dma_device_type": 1 00:22:14.331 }, 00:22:14.331 { 00:22:14.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.331 "dma_device_type": 2 00:22:14.331 } 00:22:14.331 ], 00:22:14.331 "driver_specific": {} 00:22:14.331 } 00:22:14.331 ] 00:22:14.331 07:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:14.331 07:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:14.331 07:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:14.331 07:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state 
Existed_Raid configuring raid0 64 4 00:22:14.331 07:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:14.331 07:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:14.331 07:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:14.331 07:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:14.331 07:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:14.331 07:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:14.331 07:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:14.331 07:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:14.331 07:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:14.331 07:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.331 07:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:14.590 07:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:14.590 "name": "Existed_Raid", 00:22:14.590 "uuid": "ae9c1336-9f2a-4695-ae87-5118ab4d5ea8", 00:22:14.590 "strip_size_kb": 64, 00:22:14.590 "state": "configuring", 00:22:14.590 "raid_level": "raid0", 00:22:14.590 "superblock": true, 00:22:14.590 "num_base_bdevs": 4, 00:22:14.590 "num_base_bdevs_discovered": 2, 00:22:14.590 "num_base_bdevs_operational": 4, 00:22:14.590 "base_bdevs_list": [ 00:22:14.590 { 00:22:14.590 "name": "BaseBdev1", 00:22:14.590 "uuid": "4314344e-f043-46ae-bb31-7aa71e2eb029", 00:22:14.590 "is_configured": true, 00:22:14.590 "data_offset": 2048, 00:22:14.590 "data_size": 63488 00:22:14.590 }, 00:22:14.590 { 00:22:14.590 "name": "BaseBdev2", 00:22:14.590 "uuid": "e82c6022-4a20-4683-8792-042b20571a6d", 00:22:14.590 "is_configured": true, 00:22:14.590 "data_offset": 2048, 00:22:14.590 "data_size": 63488 00:22:14.590 }, 00:22:14.590 { 00:22:14.590 "name": "BaseBdev3", 00:22:14.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.590 "is_configured": false, 00:22:14.590 "data_offset": 0, 00:22:14.590 "data_size": 0 00:22:14.590 }, 00:22:14.590 { 00:22:14.590 "name": "BaseBdev4", 00:22:14.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.590 "is_configured": false, 00:22:14.590 "data_offset": 0, 00:22:14.590 "data_size": 0 00:22:14.590 } 00:22:14.590 ] 00:22:14.590 }' 00:22:14.590 07:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:14.590 07:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.158 07:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:15.418 [2024-07-12 07:32:49.193176] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:15.418 BaseBdev3 00:22:15.418 07:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:15.418 07:32:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:22:15.418 07:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:15.418 07:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:15.418 07:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:15.418 07:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:15.418 07:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:15.677 07:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:15.936 [ 00:22:15.936 { 00:22:15.936 "name": "BaseBdev3", 00:22:15.936 "aliases": [ 00:22:15.936 "b31b5065-3df2-4b80-baae-b3239c3e8b08" 00:22:15.936 ], 00:22:15.936 "product_name": "Malloc disk", 00:22:15.936 "block_size": 512, 00:22:15.936 "num_blocks": 65536, 00:22:15.936 "uuid": "b31b5065-3df2-4b80-baae-b3239c3e8b08", 00:22:15.936 "assigned_rate_limits": { 00:22:15.936 "rw_ios_per_sec": 0, 00:22:15.936 "rw_mbytes_per_sec": 0, 00:22:15.936 "r_mbytes_per_sec": 0, 00:22:15.936 "w_mbytes_per_sec": 0 00:22:15.936 }, 00:22:15.936 "claimed": true, 00:22:15.936 "claim_type": "exclusive_write", 00:22:15.936 "zoned": false, 00:22:15.936 "supported_io_types": { 00:22:15.936 "read": true, 00:22:15.936 "write": true, 00:22:15.936 "unmap": true, 00:22:15.936 "write_zeroes": true, 00:22:15.936 "flush": true, 00:22:15.936 "reset": true, 00:22:15.936 "compare": false, 00:22:15.936 "compare_and_write": false, 00:22:15.936 "abort": true, 00:22:15.936 "nvme_admin": false, 00:22:15.936 "nvme_io": false 00:22:15.936 }, 00:22:15.936 "memory_domains": [ 00:22:15.936 { 00:22:15.936 "dma_device_id": "system", 00:22:15.936 "dma_device_type": 1 00:22:15.936 }, 00:22:15.936 { 00:22:15.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.936 "dma_device_type": 2 00:22:15.936 } 00:22:15.936 ], 00:22:15.936 "driver_specific": {} 00:22:15.936 } 00:22:15.936 ] 00:22:15.936 07:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:15.936 07:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:15.936 07:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:15.936 07:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:15.936 07:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:15.936 07:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:15.936 07:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:15.936 07:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:15.936 07:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:15.936 07:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:15.936 07:32:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:15.936 07:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:15.936 07:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:15.936 07:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.936 07:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:16.196 07:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:16.196 "name": "Existed_Raid", 00:22:16.196 "uuid": "ae9c1336-9f2a-4695-ae87-5118ab4d5ea8", 00:22:16.196 "strip_size_kb": 64, 00:22:16.196 "state": "configuring", 00:22:16.196 "raid_level": "raid0", 00:22:16.196 "superblock": true, 00:22:16.196 "num_base_bdevs": 4, 00:22:16.196 "num_base_bdevs_discovered": 3, 00:22:16.196 "num_base_bdevs_operational": 4, 00:22:16.196 "base_bdevs_list": [ 00:22:16.196 { 00:22:16.196 "name": "BaseBdev1", 00:22:16.196 "uuid": "4314344e-f043-46ae-bb31-7aa71e2eb029", 00:22:16.196 "is_configured": true, 00:22:16.196 "data_offset": 2048, 00:22:16.196 "data_size": 63488 00:22:16.196 }, 00:22:16.196 { 00:22:16.196 "name": "BaseBdev2", 00:22:16.196 "uuid": "e82c6022-4a20-4683-8792-042b20571a6d", 00:22:16.196 "is_configured": true, 00:22:16.196 "data_offset": 2048, 00:22:16.196 "data_size": 63488 00:22:16.196 }, 00:22:16.196 { 00:22:16.196 "name": "BaseBdev3", 00:22:16.196 "uuid": "b31b5065-3df2-4b80-baae-b3239c3e8b08", 00:22:16.196 "is_configured": true, 00:22:16.196 "data_offset": 2048, 00:22:16.196 "data_size": 63488 00:22:16.196 }, 00:22:16.196 { 00:22:16.196 "name": "BaseBdev4", 00:22:16.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.196 "is_configured": false, 00:22:16.196 "data_offset": 0, 00:22:16.196 "data_size": 0 00:22:16.196 } 00:22:16.196 ] 00:22:16.196 }' 00:22:16.196 07:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:16.196 07:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:16.763 07:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:17.331 [2024-07-12 07:32:50.915565] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:17.331 [2024-07-12 07:32:50.915804] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:22:17.331 [2024-07-12 07:32:50.915818] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:17.331 [2024-07-12 07:32:50.915971] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:22:17.331 [2024-07-12 07:32:50.916398] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:22:17.331 [2024-07-12 07:32:50.916409] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:22:17.331 [2024-07-12 07:32:50.916581] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.331 BaseBdev4 00:22:17.331 07:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:22:17.331 07:32:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:22:17.331 07:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:17.331 07:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:17.331 07:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:17.331 07:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:17.331 07:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:17.331 07:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:17.590 [ 00:22:17.590 { 00:22:17.590 "name": "BaseBdev4", 00:22:17.590 "aliases": [ 00:22:17.590 "c01f16e9-e2c5-43d1-9c96-78fe15f16e21" 00:22:17.590 ], 00:22:17.590 "product_name": "Malloc disk", 00:22:17.590 "block_size": 512, 00:22:17.590 "num_blocks": 65536, 00:22:17.590 "uuid": "c01f16e9-e2c5-43d1-9c96-78fe15f16e21", 00:22:17.590 "assigned_rate_limits": { 00:22:17.590 "rw_ios_per_sec": 0, 00:22:17.590 "rw_mbytes_per_sec": 0, 00:22:17.590 "r_mbytes_per_sec": 0, 00:22:17.590 "w_mbytes_per_sec": 0 00:22:17.590 }, 00:22:17.590 "claimed": true, 00:22:17.590 "claim_type": "exclusive_write", 00:22:17.590 "zoned": false, 00:22:17.590 "supported_io_types": { 00:22:17.590 "read": true, 00:22:17.590 "write": true, 00:22:17.590 "unmap": true, 00:22:17.590 "write_zeroes": true, 00:22:17.590 "flush": true, 00:22:17.590 "reset": true, 00:22:17.590 "compare": false, 00:22:17.590 "compare_and_write": false, 00:22:17.590 "abort": true, 00:22:17.590 "nvme_admin": false, 00:22:17.590 "nvme_io": false 00:22:17.590 }, 00:22:17.590 "memory_domains": [ 00:22:17.590 { 00:22:17.590 "dma_device_id": "system", 00:22:17.590 "dma_device_type": 1 00:22:17.590 }, 00:22:17.590 { 00:22:17.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.590 "dma_device_type": 2 00:22:17.590 } 00:22:17.590 ], 00:22:17.590 "driver_specific": {} 00:22:17.590 } 00:22:17.590 ] 00:22:17.590 07:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:17.590 07:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:17.590 07:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:17.590 07:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:17.590 07:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:17.590 07:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:17.590 07:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:17.590 07:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:17.590 07:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:17.590 07:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:17.590 07:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
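As the verify_raid_bdev_state trace continues below, the companion per-base-bdev checks (bdev_raid.sh@203 through @208, seen repeatedly in this log) can be condensed as follows. A sketch assuming the same socket and bdev names as this trace; it asserts exactly what the test asserts, a 512-byte block_size and null metadata/DIF fields on the malloc bases:

    # Fetch one claimed base bdev's descriptor, then check the four fields
    # that bdev_raid.sh@205-@208 test against it.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_get_bdevs -b BaseBdev4 | jq '.[]')
    [[ $(jq .block_size    <<< "$info") == 512  ]]
    [[ $(jq .md_size       <<< "$info") == null ]]
    [[ $(jq .md_interleave <<< "$info") == null ]]
    [[ $(jq .dif_type      <<< "$info") == null ]]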
00:22:17.590 07:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:17.590 07:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:17.590 07:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.590 07:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:17.849 07:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:17.849 "name": "Existed_Raid", 00:22:17.849 "uuid": "ae9c1336-9f2a-4695-ae87-5118ab4d5ea8", 00:22:17.849 "strip_size_kb": 64, 00:22:17.849 "state": "online", 00:22:17.849 "raid_level": "raid0", 00:22:17.849 "superblock": true, 00:22:17.849 "num_base_bdevs": 4, 00:22:17.849 "num_base_bdevs_discovered": 4, 00:22:17.849 "num_base_bdevs_operational": 4, 00:22:17.849 "base_bdevs_list": [ 00:22:17.849 { 00:22:17.849 "name": "BaseBdev1", 00:22:17.849 "uuid": "4314344e-f043-46ae-bb31-7aa71e2eb029", 00:22:17.849 "is_configured": true, 00:22:17.849 "data_offset": 2048, 00:22:17.849 "data_size": 63488 00:22:17.849 }, 00:22:17.849 { 00:22:17.849 "name": "BaseBdev2", 00:22:17.849 "uuid": "e82c6022-4a20-4683-8792-042b20571a6d", 00:22:17.849 "is_configured": true, 00:22:17.849 "data_offset": 2048, 00:22:17.849 "data_size": 63488 00:22:17.849 }, 00:22:17.849 { 00:22:17.849 "name": "BaseBdev3", 00:22:17.849 "uuid": "b31b5065-3df2-4b80-baae-b3239c3e8b08", 00:22:17.849 "is_configured": true, 00:22:17.849 "data_offset": 2048, 00:22:17.849 "data_size": 63488 00:22:17.849 }, 00:22:17.849 { 00:22:17.849 "name": "BaseBdev4", 00:22:17.849 "uuid": "c01f16e9-e2c5-43d1-9c96-78fe15f16e21", 00:22:17.849 "is_configured": true, 00:22:17.849 "data_offset": 2048, 00:22:17.849 "data_size": 63488 00:22:17.849 } 00:22:17.849 ] 00:22:17.849 }' 00:22:17.849 07:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:17.849 07:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.417 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:18.417 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:18.417 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:18.417 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:18.417 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:18.417 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:18.417 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:18.417 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:18.676 [2024-07-12 07:32:52.332184] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:18.676 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:18.676 "name": "Existed_Raid", 00:22:18.676 "aliases": [ 00:22:18.676 "ae9c1336-9f2a-4695-ae87-5118ab4d5ea8" 00:22:18.676 ], 00:22:18.676 
"product_name": "Raid Volume", 00:22:18.676 "block_size": 512, 00:22:18.676 "num_blocks": 253952, 00:22:18.676 "uuid": "ae9c1336-9f2a-4695-ae87-5118ab4d5ea8", 00:22:18.676 "assigned_rate_limits": { 00:22:18.676 "rw_ios_per_sec": 0, 00:22:18.676 "rw_mbytes_per_sec": 0, 00:22:18.676 "r_mbytes_per_sec": 0, 00:22:18.676 "w_mbytes_per_sec": 0 00:22:18.676 }, 00:22:18.676 "claimed": false, 00:22:18.676 "zoned": false, 00:22:18.676 "supported_io_types": { 00:22:18.676 "read": true, 00:22:18.676 "write": true, 00:22:18.676 "unmap": true, 00:22:18.676 "write_zeroes": true, 00:22:18.676 "flush": true, 00:22:18.676 "reset": true, 00:22:18.676 "compare": false, 00:22:18.676 "compare_and_write": false, 00:22:18.676 "abort": false, 00:22:18.676 "nvme_admin": false, 00:22:18.676 "nvme_io": false 00:22:18.676 }, 00:22:18.676 "memory_domains": [ 00:22:18.676 { 00:22:18.676 "dma_device_id": "system", 00:22:18.676 "dma_device_type": 1 00:22:18.676 }, 00:22:18.676 { 00:22:18.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.676 "dma_device_type": 2 00:22:18.676 }, 00:22:18.676 { 00:22:18.676 "dma_device_id": "system", 00:22:18.676 "dma_device_type": 1 00:22:18.676 }, 00:22:18.676 { 00:22:18.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.676 "dma_device_type": 2 00:22:18.676 }, 00:22:18.676 { 00:22:18.676 "dma_device_id": "system", 00:22:18.676 "dma_device_type": 1 00:22:18.676 }, 00:22:18.676 { 00:22:18.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.676 "dma_device_type": 2 00:22:18.676 }, 00:22:18.676 { 00:22:18.676 "dma_device_id": "system", 00:22:18.676 "dma_device_type": 1 00:22:18.676 }, 00:22:18.676 { 00:22:18.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.676 "dma_device_type": 2 00:22:18.676 } 00:22:18.676 ], 00:22:18.676 "driver_specific": { 00:22:18.676 "raid": { 00:22:18.676 "uuid": "ae9c1336-9f2a-4695-ae87-5118ab4d5ea8", 00:22:18.676 "strip_size_kb": 64, 00:22:18.676 "state": "online", 00:22:18.676 "raid_level": "raid0", 00:22:18.676 "superblock": true, 00:22:18.676 "num_base_bdevs": 4, 00:22:18.676 "num_base_bdevs_discovered": 4, 00:22:18.676 "num_base_bdevs_operational": 4, 00:22:18.676 "base_bdevs_list": [ 00:22:18.676 { 00:22:18.676 "name": "BaseBdev1", 00:22:18.676 "uuid": "4314344e-f043-46ae-bb31-7aa71e2eb029", 00:22:18.676 "is_configured": true, 00:22:18.676 "data_offset": 2048, 00:22:18.676 "data_size": 63488 00:22:18.676 }, 00:22:18.676 { 00:22:18.676 "name": "BaseBdev2", 00:22:18.676 "uuid": "e82c6022-4a20-4683-8792-042b20571a6d", 00:22:18.676 "is_configured": true, 00:22:18.676 "data_offset": 2048, 00:22:18.676 "data_size": 63488 00:22:18.676 }, 00:22:18.676 { 00:22:18.676 "name": "BaseBdev3", 00:22:18.676 "uuid": "b31b5065-3df2-4b80-baae-b3239c3e8b08", 00:22:18.676 "is_configured": true, 00:22:18.676 "data_offset": 2048, 00:22:18.676 "data_size": 63488 00:22:18.676 }, 00:22:18.676 { 00:22:18.676 "name": "BaseBdev4", 00:22:18.676 "uuid": "c01f16e9-e2c5-43d1-9c96-78fe15f16e21", 00:22:18.676 "is_configured": true, 00:22:18.676 "data_offset": 2048, 00:22:18.676 "data_size": 63488 00:22:18.676 } 00:22:18.676 ] 00:22:18.676 } 00:22:18.676 } 00:22:18.676 }' 00:22:18.677 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:18.677 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:18.677 BaseBdev2 00:22:18.677 BaseBdev3 00:22:18.677 BaseBdev4' 00:22:18.677 07:32:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:18.677 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:18.677 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:18.936 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:18.936 "name": "BaseBdev1", 00:22:18.936 "aliases": [ 00:22:18.936 "4314344e-f043-46ae-bb31-7aa71e2eb029" 00:22:18.936 ], 00:22:18.936 "product_name": "Malloc disk", 00:22:18.936 "block_size": 512, 00:22:18.936 "num_blocks": 65536, 00:22:18.936 "uuid": "4314344e-f043-46ae-bb31-7aa71e2eb029", 00:22:18.936 "assigned_rate_limits": { 00:22:18.936 "rw_ios_per_sec": 0, 00:22:18.936 "rw_mbytes_per_sec": 0, 00:22:18.936 "r_mbytes_per_sec": 0, 00:22:18.936 "w_mbytes_per_sec": 0 00:22:18.936 }, 00:22:18.936 "claimed": true, 00:22:18.936 "claim_type": "exclusive_write", 00:22:18.936 "zoned": false, 00:22:18.936 "supported_io_types": { 00:22:18.936 "read": true, 00:22:18.936 "write": true, 00:22:18.936 "unmap": true, 00:22:18.936 "write_zeroes": true, 00:22:18.936 "flush": true, 00:22:18.936 "reset": true, 00:22:18.936 "compare": false, 00:22:18.936 "compare_and_write": false, 00:22:18.936 "abort": true, 00:22:18.936 "nvme_admin": false, 00:22:18.936 "nvme_io": false 00:22:18.936 }, 00:22:18.936 "memory_domains": [ 00:22:18.936 { 00:22:18.936 "dma_device_id": "system", 00:22:18.936 "dma_device_type": 1 00:22:18.936 }, 00:22:18.936 { 00:22:18.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.936 "dma_device_type": 2 00:22:18.936 } 00:22:18.936 ], 00:22:18.936 "driver_specific": {} 00:22:18.936 }' 00:22:18.936 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:18.936 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:18.936 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:18.936 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:18.936 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:18.936 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:18.936 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.195 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.195 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:19.195 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:19.195 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:19.195 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:19.195 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:19.195 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:19.195 07:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:19.453 07:32:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:19.453 "name": "BaseBdev2", 00:22:19.453 "aliases": [ 00:22:19.453 "e82c6022-4a20-4683-8792-042b20571a6d" 00:22:19.453 ], 00:22:19.453 "product_name": "Malloc disk", 00:22:19.453 "block_size": 512, 00:22:19.453 "num_blocks": 65536, 00:22:19.453 "uuid": "e82c6022-4a20-4683-8792-042b20571a6d", 00:22:19.453 "assigned_rate_limits": { 00:22:19.453 "rw_ios_per_sec": 0, 00:22:19.453 "rw_mbytes_per_sec": 0, 00:22:19.453 "r_mbytes_per_sec": 0, 00:22:19.453 "w_mbytes_per_sec": 0 00:22:19.453 }, 00:22:19.453 "claimed": true, 00:22:19.453 "claim_type": "exclusive_write", 00:22:19.453 "zoned": false, 00:22:19.453 "supported_io_types": { 00:22:19.453 "read": true, 00:22:19.453 "write": true, 00:22:19.453 "unmap": true, 00:22:19.453 "write_zeroes": true, 00:22:19.453 "flush": true, 00:22:19.453 "reset": true, 00:22:19.453 "compare": false, 00:22:19.453 "compare_and_write": false, 00:22:19.453 "abort": true, 00:22:19.453 "nvme_admin": false, 00:22:19.453 "nvme_io": false 00:22:19.453 }, 00:22:19.453 "memory_domains": [ 00:22:19.453 { 00:22:19.453 "dma_device_id": "system", 00:22:19.453 "dma_device_type": 1 00:22:19.453 }, 00:22:19.453 { 00:22:19.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.453 "dma_device_type": 2 00:22:19.453 } 00:22:19.453 ], 00:22:19.453 "driver_specific": {} 00:22:19.453 }' 00:22:19.453 07:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:19.453 07:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:19.453 07:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:19.453 07:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.711 07:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.711 07:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:19.711 07:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.711 07:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.711 07:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:19.711 07:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:19.711 07:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:19.969 07:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:19.969 07:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:19.969 07:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:19.969 07:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:20.228 07:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:20.228 "name": "BaseBdev3", 00:22:20.228 "aliases": [ 00:22:20.228 "b31b5065-3df2-4b80-baae-b3239c3e8b08" 00:22:20.228 ], 00:22:20.228 "product_name": "Malloc disk", 00:22:20.228 "block_size": 512, 00:22:20.228 "num_blocks": 65536, 00:22:20.228 "uuid": "b31b5065-3df2-4b80-baae-b3239c3e8b08", 00:22:20.228 "assigned_rate_limits": { 00:22:20.228 "rw_ios_per_sec": 0, 00:22:20.228 "rw_mbytes_per_sec": 0, 
00:22:20.228 "r_mbytes_per_sec": 0, 00:22:20.228 "w_mbytes_per_sec": 0 00:22:20.228 }, 00:22:20.228 "claimed": true, 00:22:20.228 "claim_type": "exclusive_write", 00:22:20.228 "zoned": false, 00:22:20.228 "supported_io_types": { 00:22:20.228 "read": true, 00:22:20.228 "write": true, 00:22:20.228 "unmap": true, 00:22:20.228 "write_zeroes": true, 00:22:20.228 "flush": true, 00:22:20.228 "reset": true, 00:22:20.228 "compare": false, 00:22:20.228 "compare_and_write": false, 00:22:20.228 "abort": true, 00:22:20.228 "nvme_admin": false, 00:22:20.228 "nvme_io": false 00:22:20.228 }, 00:22:20.228 "memory_domains": [ 00:22:20.228 { 00:22:20.228 "dma_device_id": "system", 00:22:20.228 "dma_device_type": 1 00:22:20.228 }, 00:22:20.228 { 00:22:20.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.228 "dma_device_type": 2 00:22:20.228 } 00:22:20.228 ], 00:22:20.228 "driver_specific": {} 00:22:20.228 }' 00:22:20.228 07:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.228 07:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.228 07:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:20.228 07:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.228 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.228 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:20.228 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.487 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.487 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:20.487 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.487 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.487 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:20.487 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:20.487 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:20.487 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:20.745 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:20.745 "name": "BaseBdev4", 00:22:20.745 "aliases": [ 00:22:20.745 "c01f16e9-e2c5-43d1-9c96-78fe15f16e21" 00:22:20.745 ], 00:22:20.745 "product_name": "Malloc disk", 00:22:20.745 "block_size": 512, 00:22:20.745 "num_blocks": 65536, 00:22:20.745 "uuid": "c01f16e9-e2c5-43d1-9c96-78fe15f16e21", 00:22:20.745 "assigned_rate_limits": { 00:22:20.745 "rw_ios_per_sec": 0, 00:22:20.745 "rw_mbytes_per_sec": 0, 00:22:20.745 "r_mbytes_per_sec": 0, 00:22:20.745 "w_mbytes_per_sec": 0 00:22:20.745 }, 00:22:20.745 "claimed": true, 00:22:20.745 "claim_type": "exclusive_write", 00:22:20.745 "zoned": false, 00:22:20.745 "supported_io_types": { 00:22:20.745 "read": true, 00:22:20.745 "write": true, 00:22:20.745 "unmap": true, 00:22:20.745 "write_zeroes": true, 00:22:20.745 "flush": true, 00:22:20.745 "reset": true, 00:22:20.745 "compare": false, 00:22:20.745 
"compare_and_write": false, 00:22:20.745 "abort": true, 00:22:20.745 "nvme_admin": false, 00:22:20.745 "nvme_io": false 00:22:20.745 }, 00:22:20.745 "memory_domains": [ 00:22:20.745 { 00:22:20.745 "dma_device_id": "system", 00:22:20.745 "dma_device_type": 1 00:22:20.745 }, 00:22:20.745 { 00:22:20.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.746 "dma_device_type": 2 00:22:20.746 } 00:22:20.746 ], 00:22:20.746 "driver_specific": {} 00:22:20.746 }' 00:22:20.746 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.746 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.746 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:20.746 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.004 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.004 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:21.004 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.004 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.004 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:21.004 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.004 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.004 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:21.004 07:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:21.262 [2024-07-12 07:32:55.132637] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:21.262 [2024-07-12 07:32:55.132690] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:21.262 [2024-07-12 07:32:55.132795] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:21.521 07:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:21.521 07:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:22:21.521 07:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:21.521 07:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:22:21.521 07:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:22:21.521 07:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:22:21.521 07:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:21.521 07:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:22:21.521 07:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:21.521 07:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:21.521 07:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:22:21.521 07:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:21.521 07:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:21.521 07:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:21.521 07:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:21.521 07:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.521 07:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:21.780 07:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:21.780 "name": "Existed_Raid", 00:22:21.780 "uuid": "ae9c1336-9f2a-4695-ae87-5118ab4d5ea8", 00:22:21.780 "strip_size_kb": 64, 00:22:21.780 "state": "offline", 00:22:21.780 "raid_level": "raid0", 00:22:21.780 "superblock": true, 00:22:21.780 "num_base_bdevs": 4, 00:22:21.780 "num_base_bdevs_discovered": 3, 00:22:21.780 "num_base_bdevs_operational": 3, 00:22:21.780 "base_bdevs_list": [ 00:22:21.780 { 00:22:21.780 "name": null, 00:22:21.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.780 "is_configured": false, 00:22:21.780 "data_offset": 2048, 00:22:21.780 "data_size": 63488 00:22:21.780 }, 00:22:21.780 { 00:22:21.780 "name": "BaseBdev2", 00:22:21.780 "uuid": "e82c6022-4a20-4683-8792-042b20571a6d", 00:22:21.780 "is_configured": true, 00:22:21.780 "data_offset": 2048, 00:22:21.780 "data_size": 63488 00:22:21.780 }, 00:22:21.780 { 00:22:21.780 "name": "BaseBdev3", 00:22:21.780 "uuid": "b31b5065-3df2-4b80-baae-b3239c3e8b08", 00:22:21.780 "is_configured": true, 00:22:21.780 "data_offset": 2048, 00:22:21.780 "data_size": 63488 00:22:21.780 }, 00:22:21.780 { 00:22:21.780 "name": "BaseBdev4", 00:22:21.780 "uuid": "c01f16e9-e2c5-43d1-9c96-78fe15f16e21", 00:22:21.780 "is_configured": true, 00:22:21.780 "data_offset": 2048, 00:22:21.780 "data_size": 63488 00:22:21.780 } 00:22:21.780 ] 00:22:21.780 }' 00:22:21.780 07:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:21.780 07:32:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.351 07:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:22.351 07:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:22.351 07:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:22.351 07:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.609 07:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:22.609 07:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:22.609 07:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:22.867 [2024-07-12 07:32:56.557965] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:22.867 07:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # 
(( i++ )) 00:22:22.867 07:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:22.867 07:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.867 07:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:23.125 07:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:23.125 07:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:23.125 07:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:23.383 [2024-07-12 07:32:57.063371] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:23.383 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:23.383 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:23.383 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:23.383 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.641 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:23.641 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:23.641 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:23.900 [2024-07-12 07:32:57.593682] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:23.900 [2024-07-12 07:32:57.593940] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:22:23.900 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:23.900 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:23.900 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.900 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:24.183 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:24.183 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:24.183 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:22:24.183 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:24.183 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:24.183 07:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:24.461 BaseBdev2 00:22:24.461 07:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 
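The waitforbdev helper, expanded in the trace that follows, reduces to two RPCs against the same socket. A condensed sketch of this recreate-and-wait step (the 32 MiB/512-byte-block geometry, the bdev name, and the 2000 ms timeout all come from this run; the rpc and sock variables are illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Recreate the deleted base bdev: 32 MiB of malloc backing in 512-byte
    # blocks, matching the 65536-block BaseBdev2 dumped earlier in this log.
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev2
    # Let all registered examine callbacks run to completion, then poll for
    # the bdev itself, failing if it has not appeared within 2000 ms.
    "$rpc" -s "$sock" bdev_wait_for_examine
    "$rpc" -s "$sock" bdev_get_bdevs -b BaseBdev2 -t 2000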
00:22:24.461 07:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:22:24.461 07:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:24.461 07:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:24.461 07:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:24.461 07:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:24.461 07:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:24.461 07:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:24.719 [ 00:22:24.719 { 00:22:24.719 "name": "BaseBdev2", 00:22:24.719 "aliases": [ 00:22:24.719 "10d8ba0d-f28b-4542-b27e-1ca7506cbdd4" 00:22:24.719 ], 00:22:24.719 "product_name": "Malloc disk", 00:22:24.719 "block_size": 512, 00:22:24.719 "num_blocks": 65536, 00:22:24.719 "uuid": "10d8ba0d-f28b-4542-b27e-1ca7506cbdd4", 00:22:24.719 "assigned_rate_limits": { 00:22:24.719 "rw_ios_per_sec": 0, 00:22:24.719 "rw_mbytes_per_sec": 0, 00:22:24.719 "r_mbytes_per_sec": 0, 00:22:24.719 "w_mbytes_per_sec": 0 00:22:24.719 }, 00:22:24.719 "claimed": false, 00:22:24.719 "zoned": false, 00:22:24.719 "supported_io_types": { 00:22:24.719 "read": true, 00:22:24.719 "write": true, 00:22:24.719 "unmap": true, 00:22:24.719 "write_zeroes": true, 00:22:24.719 "flush": true, 00:22:24.719 "reset": true, 00:22:24.719 "compare": false, 00:22:24.719 "compare_and_write": false, 00:22:24.719 "abort": true, 00:22:24.719 "nvme_admin": false, 00:22:24.719 "nvme_io": false 00:22:24.719 }, 00:22:24.719 "memory_domains": [ 00:22:24.719 { 00:22:24.720 "dma_device_id": "system", 00:22:24.720 "dma_device_type": 1 00:22:24.720 }, 00:22:24.720 { 00:22:24.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.720 "dma_device_type": 2 00:22:24.720 } 00:22:24.720 ], 00:22:24.720 "driver_specific": {} 00:22:24.720 } 00:22:24.720 ] 00:22:24.720 07:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:24.720 07:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:24.720 07:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:24.720 07:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:24.978 BaseBdev3 00:22:24.978 07:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:24.978 07:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:22:24.978 07:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:24.978 07:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:24.978 07:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:24.978 07:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:24.978 07:32:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:25.236 07:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:25.493 [ 00:22:25.493 { 00:22:25.493 "name": "BaseBdev3", 00:22:25.493 "aliases": [ 00:22:25.493 "21b76f2e-a71a-455a-a3b3-f4d797237897" 00:22:25.493 ], 00:22:25.493 "product_name": "Malloc disk", 00:22:25.493 "block_size": 512, 00:22:25.493 "num_blocks": 65536, 00:22:25.493 "uuid": "21b76f2e-a71a-455a-a3b3-f4d797237897", 00:22:25.493 "assigned_rate_limits": { 00:22:25.493 "rw_ios_per_sec": 0, 00:22:25.493 "rw_mbytes_per_sec": 0, 00:22:25.494 "r_mbytes_per_sec": 0, 00:22:25.494 "w_mbytes_per_sec": 0 00:22:25.494 }, 00:22:25.494 "claimed": false, 00:22:25.494 "zoned": false, 00:22:25.494 "supported_io_types": { 00:22:25.494 "read": true, 00:22:25.494 "write": true, 00:22:25.494 "unmap": true, 00:22:25.494 "write_zeroes": true, 00:22:25.494 "flush": true, 00:22:25.494 "reset": true, 00:22:25.494 "compare": false, 00:22:25.494 "compare_and_write": false, 00:22:25.494 "abort": true, 00:22:25.494 "nvme_admin": false, 00:22:25.494 "nvme_io": false 00:22:25.494 }, 00:22:25.494 "memory_domains": [ 00:22:25.494 { 00:22:25.494 "dma_device_id": "system", 00:22:25.494 "dma_device_type": 1 00:22:25.494 }, 00:22:25.494 { 00:22:25.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.494 "dma_device_type": 2 00:22:25.494 } 00:22:25.494 ], 00:22:25.494 "driver_specific": {} 00:22:25.494 } 00:22:25.494 ] 00:22:25.494 07:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:25.494 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:25.494 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:25.494 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:25.494 BaseBdev4 00:22:25.752 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:22:25.752 07:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:22:25.752 07:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:25.752 07:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:25.752 07:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:25.752 07:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:25.752 07:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:25.752 07:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:26.011 [ 00:22:26.011 { 00:22:26.011 "name": "BaseBdev4", 00:22:26.011 "aliases": [ 00:22:26.011 "cd42a69a-8d4a-4912-a487-36b724105738" 00:22:26.011 ], 00:22:26.011 "product_name": "Malloc disk", 00:22:26.011 "block_size": 512, 
00:22:26.011 "num_blocks": 65536, 00:22:26.011 "uuid": "cd42a69a-8d4a-4912-a487-36b724105738", 00:22:26.011 "assigned_rate_limits": { 00:22:26.011 "rw_ios_per_sec": 0, 00:22:26.011 "rw_mbytes_per_sec": 0, 00:22:26.011 "r_mbytes_per_sec": 0, 00:22:26.011 "w_mbytes_per_sec": 0 00:22:26.011 }, 00:22:26.011 "claimed": false, 00:22:26.011 "zoned": false, 00:22:26.011 "supported_io_types": { 00:22:26.011 "read": true, 00:22:26.011 "write": true, 00:22:26.011 "unmap": true, 00:22:26.011 "write_zeroes": true, 00:22:26.011 "flush": true, 00:22:26.011 "reset": true, 00:22:26.011 "compare": false, 00:22:26.011 "compare_and_write": false, 00:22:26.011 "abort": true, 00:22:26.011 "nvme_admin": false, 00:22:26.011 "nvme_io": false 00:22:26.011 }, 00:22:26.011 "memory_domains": [ 00:22:26.011 { 00:22:26.011 "dma_device_id": "system", 00:22:26.011 "dma_device_type": 1 00:22:26.011 }, 00:22:26.011 { 00:22:26.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.011 "dma_device_type": 2 00:22:26.011 } 00:22:26.011 ], 00:22:26.011 "driver_specific": {} 00:22:26.011 } 00:22:26.011 ] 00:22:26.011 07:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:26.011 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:26.011 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:26.011 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:26.269 [2024-07-12 07:32:59.939251] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:26.269 [2024-07-12 07:32:59.940082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:26.269 [2024-07-12 07:32:59.940221] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:26.269 [2024-07-12 07:32:59.942683] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:26.269 [2024-07-12 07:32:59.942840] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:26.269 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:26.269 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:26.269 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:26.269 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:26.269 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:26.269 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:26.269 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:26.269 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:26.269 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:26.269 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:26.269 07:32:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.269 07:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:26.528 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:26.528 "name": "Existed_Raid", 00:22:26.528 "uuid": "96e6194e-3e63-4d7f-825d-98bc4345bf60", 00:22:26.528 "strip_size_kb": 64, 00:22:26.528 "state": "configuring", 00:22:26.528 "raid_level": "raid0", 00:22:26.528 "superblock": true, 00:22:26.528 "num_base_bdevs": 4, 00:22:26.528 "num_base_bdevs_discovered": 3, 00:22:26.528 "num_base_bdevs_operational": 4, 00:22:26.528 "base_bdevs_list": [ 00:22:26.528 { 00:22:26.528 "name": "BaseBdev1", 00:22:26.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.528 "is_configured": false, 00:22:26.528 "data_offset": 0, 00:22:26.528 "data_size": 0 00:22:26.528 }, 00:22:26.528 { 00:22:26.528 "name": "BaseBdev2", 00:22:26.528 "uuid": "10d8ba0d-f28b-4542-b27e-1ca7506cbdd4", 00:22:26.528 "is_configured": true, 00:22:26.528 "data_offset": 2048, 00:22:26.528 "data_size": 63488 00:22:26.528 }, 00:22:26.528 { 00:22:26.528 "name": "BaseBdev3", 00:22:26.528 "uuid": "21b76f2e-a71a-455a-a3b3-f4d797237897", 00:22:26.528 "is_configured": true, 00:22:26.528 "data_offset": 2048, 00:22:26.528 "data_size": 63488 00:22:26.528 }, 00:22:26.528 { 00:22:26.528 "name": "BaseBdev4", 00:22:26.528 "uuid": "cd42a69a-8d4a-4912-a487-36b724105738", 00:22:26.528 "is_configured": true, 00:22:26.528 "data_offset": 2048, 00:22:26.528 "data_size": 63488 00:22:26.528 } 00:22:26.528 ] 00:22:26.528 }' 00:22:26.528 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:26.528 07:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.095 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:27.095 [2024-07-12 07:33:00.963429] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:27.356 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:27.356 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:27.356 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:27.356 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:27.356 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:27.356 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:27.356 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:27.356 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:27.356 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:27.356 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:27.356 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.356 07:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:27.356 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:27.356 "name": "Existed_Raid", 00:22:27.356 "uuid": "96e6194e-3e63-4d7f-825d-98bc4345bf60", 00:22:27.356 "strip_size_kb": 64, 00:22:27.356 "state": "configuring", 00:22:27.356 "raid_level": "raid0", 00:22:27.356 "superblock": true, 00:22:27.356 "num_base_bdevs": 4, 00:22:27.356 "num_base_bdevs_discovered": 2, 00:22:27.356 "num_base_bdevs_operational": 4, 00:22:27.356 "base_bdevs_list": [ 00:22:27.356 { 00:22:27.356 "name": "BaseBdev1", 00:22:27.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.356 "is_configured": false, 00:22:27.356 "data_offset": 0, 00:22:27.356 "data_size": 0 00:22:27.356 }, 00:22:27.356 { 00:22:27.356 "name": null, 00:22:27.356 "uuid": "10d8ba0d-f28b-4542-b27e-1ca7506cbdd4", 00:22:27.356 "is_configured": false, 00:22:27.356 "data_offset": 2048, 00:22:27.356 "data_size": 63488 00:22:27.356 }, 00:22:27.356 { 00:22:27.356 "name": "BaseBdev3", 00:22:27.356 "uuid": "21b76f2e-a71a-455a-a3b3-f4d797237897", 00:22:27.356 "is_configured": true, 00:22:27.356 "data_offset": 2048, 00:22:27.356 "data_size": 63488 00:22:27.356 }, 00:22:27.356 { 00:22:27.356 "name": "BaseBdev4", 00:22:27.356 "uuid": "cd42a69a-8d4a-4912-a487-36b724105738", 00:22:27.356 "is_configured": true, 00:22:27.356 "data_offset": 2048, 00:22:27.356 "data_size": 63488 00:22:27.356 } 00:22:27.356 ] 00:22:27.356 }' 00:22:27.356 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:27.356 07:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.290 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.290 07:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:28.290 07:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:28.290 07:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:28.548 [2024-07-12 07:33:02.281334] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:28.548 BaseBdev1 00:22:28.548 07:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:28.548 07:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:22:28.548 07:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:28.548 07:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:28.548 07:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:28.548 07:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:28.548 07:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:28.806 07:33:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:29.064 [ 00:22:29.064 { 00:22:29.064 "name": "BaseBdev1", 00:22:29.064 "aliases": [ 00:22:29.064 "d31e11ea-64d5-4488-840e-d802d48cb69e" 00:22:29.064 ], 00:22:29.064 "product_name": "Malloc disk", 00:22:29.064 "block_size": 512, 00:22:29.064 "num_blocks": 65536, 00:22:29.064 "uuid": "d31e11ea-64d5-4488-840e-d802d48cb69e", 00:22:29.064 "assigned_rate_limits": { 00:22:29.064 "rw_ios_per_sec": 0, 00:22:29.064 "rw_mbytes_per_sec": 0, 00:22:29.064 "r_mbytes_per_sec": 0, 00:22:29.064 "w_mbytes_per_sec": 0 00:22:29.064 }, 00:22:29.064 "claimed": true, 00:22:29.064 "claim_type": "exclusive_write", 00:22:29.064 "zoned": false, 00:22:29.064 "supported_io_types": { 00:22:29.064 "read": true, 00:22:29.064 "write": true, 00:22:29.064 "unmap": true, 00:22:29.064 "write_zeroes": true, 00:22:29.064 "flush": true, 00:22:29.064 "reset": true, 00:22:29.064 "compare": false, 00:22:29.065 "compare_and_write": false, 00:22:29.065 "abort": true, 00:22:29.065 "nvme_admin": false, 00:22:29.065 "nvme_io": false 00:22:29.065 }, 00:22:29.065 "memory_domains": [ 00:22:29.065 { 00:22:29.065 "dma_device_id": "system", 00:22:29.065 "dma_device_type": 1 00:22:29.065 }, 00:22:29.065 { 00:22:29.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:29.065 "dma_device_type": 2 00:22:29.065 } 00:22:29.065 ], 00:22:29.065 "driver_specific": {} 00:22:29.065 } 00:22:29.065 ] 00:22:29.065 07:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:22:29.065 07:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:29.065 07:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:29.065 07:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:29.065 07:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:29.065 07:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:29.065 07:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:29.065 07:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:29.065 07:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:29.065 07:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:29.065 07:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:29.065 07:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.065 07:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:29.332 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:29.332 "name": "Existed_Raid", 00:22:29.332 "uuid": "96e6194e-3e63-4d7f-825d-98bc4345bf60", 00:22:29.332 "strip_size_kb": 64, 00:22:29.333 "state": "configuring", 00:22:29.333 "raid_level": "raid0", 00:22:29.333 "superblock": true, 00:22:29.333 "num_base_bdevs": 4, 00:22:29.333 "num_base_bdevs_discovered": 3, 
00:22:29.333 "num_base_bdevs_operational": 4, 00:22:29.333 "base_bdevs_list": [ 00:22:29.333 { 00:22:29.333 "name": "BaseBdev1", 00:22:29.333 "uuid": "d31e11ea-64d5-4488-840e-d802d48cb69e", 00:22:29.333 "is_configured": true, 00:22:29.333 "data_offset": 2048, 00:22:29.333 "data_size": 63488 00:22:29.333 }, 00:22:29.333 { 00:22:29.333 "name": null, 00:22:29.333 "uuid": "10d8ba0d-f28b-4542-b27e-1ca7506cbdd4", 00:22:29.333 "is_configured": false, 00:22:29.333 "data_offset": 2048, 00:22:29.333 "data_size": 63488 00:22:29.333 }, 00:22:29.333 { 00:22:29.333 "name": "BaseBdev3", 00:22:29.333 "uuid": "21b76f2e-a71a-455a-a3b3-f4d797237897", 00:22:29.333 "is_configured": true, 00:22:29.333 "data_offset": 2048, 00:22:29.333 "data_size": 63488 00:22:29.333 }, 00:22:29.333 { 00:22:29.333 "name": "BaseBdev4", 00:22:29.333 "uuid": "cd42a69a-8d4a-4912-a487-36b724105738", 00:22:29.333 "is_configured": true, 00:22:29.333 "data_offset": 2048, 00:22:29.333 "data_size": 63488 00:22:29.333 } 00:22:29.333 ] 00:22:29.333 }' 00:22:29.333 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:29.333 07:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.898 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.898 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:30.157 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:30.157 07:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:30.415 [2024-07-12 07:33:04.137821] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:30.415 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:30.415 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:30.415 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:30.415 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:30.415 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:30.415 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:30.415 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:30.415 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:30.415 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:30.415 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:30.415 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.415 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:30.674 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 
-- # raid_bdev_info='{ 00:22:30.674 "name": "Existed_Raid", 00:22:30.674 "uuid": "96e6194e-3e63-4d7f-825d-98bc4345bf60", 00:22:30.674 "strip_size_kb": 64, 00:22:30.674 "state": "configuring", 00:22:30.674 "raid_level": "raid0", 00:22:30.674 "superblock": true, 00:22:30.674 "num_base_bdevs": 4, 00:22:30.674 "num_base_bdevs_discovered": 2, 00:22:30.674 "num_base_bdevs_operational": 4, 00:22:30.674 "base_bdevs_list": [ 00:22:30.674 { 00:22:30.674 "name": "BaseBdev1", 00:22:30.674 "uuid": "d31e11ea-64d5-4488-840e-d802d48cb69e", 00:22:30.674 "is_configured": true, 00:22:30.674 "data_offset": 2048, 00:22:30.674 "data_size": 63488 00:22:30.674 }, 00:22:30.674 { 00:22:30.674 "name": null, 00:22:30.674 "uuid": "10d8ba0d-f28b-4542-b27e-1ca7506cbdd4", 00:22:30.674 "is_configured": false, 00:22:30.674 "data_offset": 2048, 00:22:30.674 "data_size": 63488 00:22:30.674 }, 00:22:30.674 { 00:22:30.674 "name": null, 00:22:30.674 "uuid": "21b76f2e-a71a-455a-a3b3-f4d797237897", 00:22:30.674 "is_configured": false, 00:22:30.674 "data_offset": 2048, 00:22:30.674 "data_size": 63488 00:22:30.674 }, 00:22:30.674 { 00:22:30.674 "name": "BaseBdev4", 00:22:30.674 "uuid": "cd42a69a-8d4a-4912-a487-36b724105738", 00:22:30.674 "is_configured": true, 00:22:30.674 "data_offset": 2048, 00:22:30.674 "data_size": 63488 00:22:30.674 } 00:22:30.674 ] 00:22:30.674 }' 00:22:30.674 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:30.674 07:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.242 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.242 07:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:31.501 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:31.501 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:31.760 [2024-07-12 07:33:05.502096] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:31.760 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:31.760 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:31.760 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:31.760 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:31.760 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:31.760 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:31.760 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:31.760 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:31.760 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:31.760 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:31.760 07:33:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.760 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:32.019 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:32.019 "name": "Existed_Raid", 00:22:32.019 "uuid": "96e6194e-3e63-4d7f-825d-98bc4345bf60", 00:22:32.019 "strip_size_kb": 64, 00:22:32.019 "state": "configuring", 00:22:32.019 "raid_level": "raid0", 00:22:32.019 "superblock": true, 00:22:32.019 "num_base_bdevs": 4, 00:22:32.019 "num_base_bdevs_discovered": 3, 00:22:32.019 "num_base_bdevs_operational": 4, 00:22:32.019 "base_bdevs_list": [ 00:22:32.019 { 00:22:32.019 "name": "BaseBdev1", 00:22:32.019 "uuid": "d31e11ea-64d5-4488-840e-d802d48cb69e", 00:22:32.019 "is_configured": true, 00:22:32.019 "data_offset": 2048, 00:22:32.019 "data_size": 63488 00:22:32.019 }, 00:22:32.019 { 00:22:32.019 "name": null, 00:22:32.019 "uuid": "10d8ba0d-f28b-4542-b27e-1ca7506cbdd4", 00:22:32.019 "is_configured": false, 00:22:32.019 "data_offset": 2048, 00:22:32.019 "data_size": 63488 00:22:32.019 }, 00:22:32.019 { 00:22:32.019 "name": "BaseBdev3", 00:22:32.019 "uuid": "21b76f2e-a71a-455a-a3b3-f4d797237897", 00:22:32.019 "is_configured": true, 00:22:32.019 "data_offset": 2048, 00:22:32.019 "data_size": 63488 00:22:32.019 }, 00:22:32.019 { 00:22:32.019 "name": "BaseBdev4", 00:22:32.019 "uuid": "cd42a69a-8d4a-4912-a487-36b724105738", 00:22:32.019 "is_configured": true, 00:22:32.019 "data_offset": 2048, 00:22:32.019 "data_size": 63488 00:22:32.019 } 00:22:32.019 ] 00:22:32.019 }' 00:22:32.019 07:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:32.019 07:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.586 07:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.586 07:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:32.844 07:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:32.844 07:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:33.103 [2024-07-12 07:33:06.898386] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:33.103 07:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:33.103 07:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:33.103 07:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:33.103 07:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:33.103 07:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:33.103 07:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:33.103 07:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:33.103 07:33:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:33.103 07:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:33.103 07:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:33.103 07:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.103 07:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.362 07:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:33.362 "name": "Existed_Raid", 00:22:33.362 "uuid": "96e6194e-3e63-4d7f-825d-98bc4345bf60", 00:22:33.362 "strip_size_kb": 64, 00:22:33.362 "state": "configuring", 00:22:33.362 "raid_level": "raid0", 00:22:33.362 "superblock": true, 00:22:33.362 "num_base_bdevs": 4, 00:22:33.362 "num_base_bdevs_discovered": 2, 00:22:33.362 "num_base_bdevs_operational": 4, 00:22:33.362 "base_bdevs_list": [ 00:22:33.362 { 00:22:33.362 "name": null, 00:22:33.362 "uuid": "d31e11ea-64d5-4488-840e-d802d48cb69e", 00:22:33.362 "is_configured": false, 00:22:33.362 "data_offset": 2048, 00:22:33.362 "data_size": 63488 00:22:33.362 }, 00:22:33.362 { 00:22:33.362 "name": null, 00:22:33.362 "uuid": "10d8ba0d-f28b-4542-b27e-1ca7506cbdd4", 00:22:33.362 "is_configured": false, 00:22:33.362 "data_offset": 2048, 00:22:33.362 "data_size": 63488 00:22:33.362 }, 00:22:33.362 { 00:22:33.362 "name": "BaseBdev3", 00:22:33.362 "uuid": "21b76f2e-a71a-455a-a3b3-f4d797237897", 00:22:33.362 "is_configured": true, 00:22:33.362 "data_offset": 2048, 00:22:33.362 "data_size": 63488 00:22:33.362 }, 00:22:33.362 { 00:22:33.362 "name": "BaseBdev4", 00:22:33.362 "uuid": "cd42a69a-8d4a-4912-a487-36b724105738", 00:22:33.362 "is_configured": true, 00:22:33.362 "data_offset": 2048, 00:22:33.362 "data_size": 63488 00:22:33.362 } 00:22:33.362 ] 00:22:33.362 }' 00:22:33.362 07:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:33.362 07:33:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.929 07:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.929 07:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:34.496 07:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:34.496 07:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:34.496 [2024-07-12 07:33:08.270495] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:34.496 07:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:34.496 07:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:34.496 07:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:34.496 07:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 
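The slot-refill step traced further below (the bdev_raid.sh@333 lines) can be reproduced by hand: read the vacated slot's uuid out of the raid bdev, then recreate a malloc bdev under that uuid so the superblock-aware array can match and claim it on examine. A sketch using this run's names (the uuid shell variable is illustrative; the jq path and the -u flag appear verbatim in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Slot 0 still remembers the old member's uuid even though its name is null.
    uuid=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
    # A malloc bdev created with that exact uuid is matched against the
    # superblock and pulled back into the array automatically on examine.
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"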
00:22:34.496 07:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:34.496 07:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:34.496 07:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:34.496 07:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:34.496 07:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:34.496 07:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:34.496 07:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.496 07:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:34.754 07:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:34.754 "name": "Existed_Raid", 00:22:34.754 "uuid": "96e6194e-3e63-4d7f-825d-98bc4345bf60", 00:22:34.754 "strip_size_kb": 64, 00:22:34.754 "state": "configuring", 00:22:34.754 "raid_level": "raid0", 00:22:34.754 "superblock": true, 00:22:34.754 "num_base_bdevs": 4, 00:22:34.754 "num_base_bdevs_discovered": 3, 00:22:34.754 "num_base_bdevs_operational": 4, 00:22:34.754 "base_bdevs_list": [ 00:22:34.754 { 00:22:34.754 "name": null, 00:22:34.754 "uuid": "d31e11ea-64d5-4488-840e-d802d48cb69e", 00:22:34.754 "is_configured": false, 00:22:34.754 "data_offset": 2048, 00:22:34.754 "data_size": 63488 00:22:34.754 }, 00:22:34.754 { 00:22:34.754 "name": "BaseBdev2", 00:22:34.754 "uuid": "10d8ba0d-f28b-4542-b27e-1ca7506cbdd4", 00:22:34.754 "is_configured": true, 00:22:34.754 "data_offset": 2048, 00:22:34.754 "data_size": 63488 00:22:34.754 }, 00:22:34.754 { 00:22:34.754 "name": "BaseBdev3", 00:22:34.754 "uuid": "21b76f2e-a71a-455a-a3b3-f4d797237897", 00:22:34.754 "is_configured": true, 00:22:34.754 "data_offset": 2048, 00:22:34.754 "data_size": 63488 00:22:34.754 }, 00:22:34.754 { 00:22:34.754 "name": "BaseBdev4", 00:22:34.754 "uuid": "cd42a69a-8d4a-4912-a487-36b724105738", 00:22:34.754 "is_configured": true, 00:22:34.754 "data_offset": 2048, 00:22:34.754 "data_size": 63488 00:22:34.754 } 00:22:34.754 ] 00:22:34.754 }' 00:22:34.754 07:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:34.754 07:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:35.322 07:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.322 07:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:35.581 07:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:35.581 07:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:35.581 07:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.839 07:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b NewBaseBdev -u d31e11ea-64d5-4488-840e-d802d48cb69e 00:22:36.099 [2024-07-12 07:33:09.896334] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:36.099 [2024-07-12 07:33:09.896822] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:22:36.099 [2024-07-12 07:33:09.896967] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:36.099 [2024-07-12 07:33:09.897094] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:22:36.099 [2024-07-12 07:33:09.897581] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:22:36.099 [2024-07-12 07:33:09.897627] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008180 00:22:36.099 [2024-07-12 07:33:09.897822] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:36.099 NewBaseBdev 00:22:36.099 07:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:36.099 07:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:22:36.099 07:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:36.099 07:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:36.099 07:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:36.099 07:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:36.099 07:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:36.358 07:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:36.616 [ 00:22:36.616 { 00:22:36.616 "name": "NewBaseBdev", 00:22:36.616 "aliases": [ 00:22:36.616 "d31e11ea-64d5-4488-840e-d802d48cb69e" 00:22:36.616 ], 00:22:36.616 "product_name": "Malloc disk", 00:22:36.616 "block_size": 512, 00:22:36.616 "num_blocks": 65536, 00:22:36.616 "uuid": "d31e11ea-64d5-4488-840e-d802d48cb69e", 00:22:36.616 "assigned_rate_limits": { 00:22:36.616 "rw_ios_per_sec": 0, 00:22:36.616 "rw_mbytes_per_sec": 0, 00:22:36.616 "r_mbytes_per_sec": 0, 00:22:36.616 "w_mbytes_per_sec": 0 00:22:36.616 }, 00:22:36.616 "claimed": true, 00:22:36.616 "claim_type": "exclusive_write", 00:22:36.616 "zoned": false, 00:22:36.616 "supported_io_types": { 00:22:36.616 "read": true, 00:22:36.616 "write": true, 00:22:36.616 "unmap": true, 00:22:36.616 "write_zeroes": true, 00:22:36.616 "flush": true, 00:22:36.616 "reset": true, 00:22:36.616 "compare": false, 00:22:36.616 "compare_and_write": false, 00:22:36.616 "abort": true, 00:22:36.616 "nvme_admin": false, 00:22:36.616 "nvme_io": false 00:22:36.616 }, 00:22:36.616 "memory_domains": [ 00:22:36.616 { 00:22:36.616 "dma_device_id": "system", 00:22:36.616 "dma_device_type": 1 00:22:36.616 }, 00:22:36.616 { 00:22:36.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.616 "dma_device_type": 2 00:22:36.616 } 00:22:36.616 ], 00:22:36.616 "driver_specific": {} 00:22:36.616 } 00:22:36.616 ] 00:22:36.616 07:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 
-- # return 0 00:22:36.616 07:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:36.616 07:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:36.616 07:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:36.616 07:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:36.616 07:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:36.616 07:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:36.616 07:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:36.616 07:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:36.616 07:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:36.616 07:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:36.616 07:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.616 07:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:36.875 07:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:36.875 "name": "Existed_Raid", 00:22:36.875 "uuid": "96e6194e-3e63-4d7f-825d-98bc4345bf60", 00:22:36.875 "strip_size_kb": 64, 00:22:36.875 "state": "online", 00:22:36.875 "raid_level": "raid0", 00:22:36.875 "superblock": true, 00:22:36.875 "num_base_bdevs": 4, 00:22:36.875 "num_base_bdevs_discovered": 4, 00:22:36.875 "num_base_bdevs_operational": 4, 00:22:36.875 "base_bdevs_list": [ 00:22:36.875 { 00:22:36.875 "name": "NewBaseBdev", 00:22:36.875 "uuid": "d31e11ea-64d5-4488-840e-d802d48cb69e", 00:22:36.875 "is_configured": true, 00:22:36.875 "data_offset": 2048, 00:22:36.875 "data_size": 63488 00:22:36.875 }, 00:22:36.875 { 00:22:36.875 "name": "BaseBdev2", 00:22:36.875 "uuid": "10d8ba0d-f28b-4542-b27e-1ca7506cbdd4", 00:22:36.875 "is_configured": true, 00:22:36.875 "data_offset": 2048, 00:22:36.875 "data_size": 63488 00:22:36.875 }, 00:22:36.875 { 00:22:36.875 "name": "BaseBdev3", 00:22:36.875 "uuid": "21b76f2e-a71a-455a-a3b3-f4d797237897", 00:22:36.875 "is_configured": true, 00:22:36.875 "data_offset": 2048, 00:22:36.875 "data_size": 63488 00:22:36.875 }, 00:22:36.875 { 00:22:36.875 "name": "BaseBdev4", 00:22:36.875 "uuid": "cd42a69a-8d4a-4912-a487-36b724105738", 00:22:36.875 "is_configured": true, 00:22:36.875 "data_offset": 2048, 00:22:36.875 "data_size": 63488 00:22:36.875 } 00:22:36.875 ] 00:22:36.875 }' 00:22:36.875 07:33:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:36.875 07:33:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.442 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:37.442 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:37.442 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:37.442 07:33:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:37.442 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:37.442 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:37.442 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:37.442 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:37.442 [2024-07-12 07:33:11.301024] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:37.442 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:37.442 "name": "Existed_Raid", 00:22:37.442 "aliases": [ 00:22:37.442 "96e6194e-3e63-4d7f-825d-98bc4345bf60" 00:22:37.442 ], 00:22:37.442 "product_name": "Raid Volume", 00:22:37.442 "block_size": 512, 00:22:37.442 "num_blocks": 253952, 00:22:37.442 "uuid": "96e6194e-3e63-4d7f-825d-98bc4345bf60", 00:22:37.442 "assigned_rate_limits": { 00:22:37.442 "rw_ios_per_sec": 0, 00:22:37.442 "rw_mbytes_per_sec": 0, 00:22:37.442 "r_mbytes_per_sec": 0, 00:22:37.442 "w_mbytes_per_sec": 0 00:22:37.442 }, 00:22:37.443 "claimed": false, 00:22:37.443 "zoned": false, 00:22:37.443 "supported_io_types": { 00:22:37.443 "read": true, 00:22:37.443 "write": true, 00:22:37.443 "unmap": true, 00:22:37.443 "write_zeroes": true, 00:22:37.443 "flush": true, 00:22:37.443 "reset": true, 00:22:37.443 "compare": false, 00:22:37.443 "compare_and_write": false, 00:22:37.443 "abort": false, 00:22:37.443 "nvme_admin": false, 00:22:37.443 "nvme_io": false 00:22:37.443 }, 00:22:37.443 "memory_domains": [ 00:22:37.443 { 00:22:37.443 "dma_device_id": "system", 00:22:37.443 "dma_device_type": 1 00:22:37.443 }, 00:22:37.443 { 00:22:37.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.443 "dma_device_type": 2 00:22:37.443 }, 00:22:37.443 { 00:22:37.443 "dma_device_id": "system", 00:22:37.443 "dma_device_type": 1 00:22:37.443 }, 00:22:37.443 { 00:22:37.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.443 "dma_device_type": 2 00:22:37.443 }, 00:22:37.443 { 00:22:37.443 "dma_device_id": "system", 00:22:37.443 "dma_device_type": 1 00:22:37.443 }, 00:22:37.443 { 00:22:37.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.443 "dma_device_type": 2 00:22:37.443 }, 00:22:37.443 { 00:22:37.443 "dma_device_id": "system", 00:22:37.443 "dma_device_type": 1 00:22:37.443 }, 00:22:37.443 { 00:22:37.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.443 "dma_device_type": 2 00:22:37.443 } 00:22:37.443 ], 00:22:37.443 "driver_specific": { 00:22:37.443 "raid": { 00:22:37.443 "uuid": "96e6194e-3e63-4d7f-825d-98bc4345bf60", 00:22:37.443 "strip_size_kb": 64, 00:22:37.443 "state": "online", 00:22:37.443 "raid_level": "raid0", 00:22:37.443 "superblock": true, 00:22:37.443 "num_base_bdevs": 4, 00:22:37.443 "num_base_bdevs_discovered": 4, 00:22:37.443 "num_base_bdevs_operational": 4, 00:22:37.443 "base_bdevs_list": [ 00:22:37.443 { 00:22:37.443 "name": "NewBaseBdev", 00:22:37.443 "uuid": "d31e11ea-64d5-4488-840e-d802d48cb69e", 00:22:37.443 "is_configured": true, 00:22:37.443 "data_offset": 2048, 00:22:37.443 "data_size": 63488 00:22:37.443 }, 00:22:37.443 { 00:22:37.443 "name": "BaseBdev2", 00:22:37.443 "uuid": "10d8ba0d-f28b-4542-b27e-1ca7506cbdd4", 00:22:37.443 "is_configured": true, 00:22:37.443 "data_offset": 2048, 
00:22:37.443 "data_size": 63488 00:22:37.443 }, 00:22:37.443 { 00:22:37.443 "name": "BaseBdev3", 00:22:37.443 "uuid": "21b76f2e-a71a-455a-a3b3-f4d797237897", 00:22:37.443 "is_configured": true, 00:22:37.443 "data_offset": 2048, 00:22:37.443 "data_size": 63488 00:22:37.443 }, 00:22:37.443 { 00:22:37.443 "name": "BaseBdev4", 00:22:37.443 "uuid": "cd42a69a-8d4a-4912-a487-36b724105738", 00:22:37.443 "is_configured": true, 00:22:37.443 "data_offset": 2048, 00:22:37.443 "data_size": 63488 00:22:37.443 } 00:22:37.443 ] 00:22:37.443 } 00:22:37.443 } 00:22:37.443 }' 00:22:37.443 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:37.702 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:37.702 BaseBdev2 00:22:37.702 BaseBdev3 00:22:37.702 BaseBdev4' 00:22:37.702 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:37.702 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:37.702 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:37.960 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:37.960 "name": "NewBaseBdev", 00:22:37.960 "aliases": [ 00:22:37.960 "d31e11ea-64d5-4488-840e-d802d48cb69e" 00:22:37.960 ], 00:22:37.960 "product_name": "Malloc disk", 00:22:37.960 "block_size": 512, 00:22:37.960 "num_blocks": 65536, 00:22:37.960 "uuid": "d31e11ea-64d5-4488-840e-d802d48cb69e", 00:22:37.960 "assigned_rate_limits": { 00:22:37.960 "rw_ios_per_sec": 0, 00:22:37.960 "rw_mbytes_per_sec": 0, 00:22:37.960 "r_mbytes_per_sec": 0, 00:22:37.960 "w_mbytes_per_sec": 0 00:22:37.960 }, 00:22:37.960 "claimed": true, 00:22:37.960 "claim_type": "exclusive_write", 00:22:37.960 "zoned": false, 00:22:37.960 "supported_io_types": { 00:22:37.960 "read": true, 00:22:37.960 "write": true, 00:22:37.960 "unmap": true, 00:22:37.960 "write_zeroes": true, 00:22:37.960 "flush": true, 00:22:37.960 "reset": true, 00:22:37.960 "compare": false, 00:22:37.960 "compare_and_write": false, 00:22:37.960 "abort": true, 00:22:37.960 "nvme_admin": false, 00:22:37.960 "nvme_io": false 00:22:37.960 }, 00:22:37.960 "memory_domains": [ 00:22:37.960 { 00:22:37.960 "dma_device_id": "system", 00:22:37.960 "dma_device_type": 1 00:22:37.960 }, 00:22:37.960 { 00:22:37.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.960 "dma_device_type": 2 00:22:37.960 } 00:22:37.960 ], 00:22:37.960 "driver_specific": {} 00:22:37.960 }' 00:22:37.960 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:37.960 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:37.960 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:37.960 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:37.960 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:37.960 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:38.232 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:38.232 07:33:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:38.232 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:38.232 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:38.232 07:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:38.232 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:38.232 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:38.232 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:38.232 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:38.503 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:38.503 "name": "BaseBdev2", 00:22:38.503 "aliases": [ 00:22:38.503 "10d8ba0d-f28b-4542-b27e-1ca7506cbdd4" 00:22:38.503 ], 00:22:38.503 "product_name": "Malloc disk", 00:22:38.503 "block_size": 512, 00:22:38.503 "num_blocks": 65536, 00:22:38.503 "uuid": "10d8ba0d-f28b-4542-b27e-1ca7506cbdd4", 00:22:38.503 "assigned_rate_limits": { 00:22:38.503 "rw_ios_per_sec": 0, 00:22:38.503 "rw_mbytes_per_sec": 0, 00:22:38.503 "r_mbytes_per_sec": 0, 00:22:38.503 "w_mbytes_per_sec": 0 00:22:38.503 }, 00:22:38.503 "claimed": true, 00:22:38.503 "claim_type": "exclusive_write", 00:22:38.503 "zoned": false, 00:22:38.503 "supported_io_types": { 00:22:38.503 "read": true, 00:22:38.503 "write": true, 00:22:38.503 "unmap": true, 00:22:38.503 "write_zeroes": true, 00:22:38.503 "flush": true, 00:22:38.503 "reset": true, 00:22:38.503 "compare": false, 00:22:38.503 "compare_and_write": false, 00:22:38.503 "abort": true, 00:22:38.503 "nvme_admin": false, 00:22:38.503 "nvme_io": false 00:22:38.503 }, 00:22:38.503 "memory_domains": [ 00:22:38.503 { 00:22:38.503 "dma_device_id": "system", 00:22:38.503 "dma_device_type": 1 00:22:38.503 }, 00:22:38.503 { 00:22:38.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.503 "dma_device_type": 2 00:22:38.503 } 00:22:38.503 ], 00:22:38.503 "driver_specific": {} 00:22:38.503 }' 00:22:38.503 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:38.503 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:38.503 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:38.503 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:38.761 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:38.761 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:38.761 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:38.761 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:38.761 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:38.761 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:38.761 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:38.761 07:33:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:38.761 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:38.761 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:38.761 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:39.020 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:39.020 "name": "BaseBdev3", 00:22:39.020 "aliases": [ 00:22:39.020 "21b76f2e-a71a-455a-a3b3-f4d797237897" 00:22:39.020 ], 00:22:39.020 "product_name": "Malloc disk", 00:22:39.020 "block_size": 512, 00:22:39.020 "num_blocks": 65536, 00:22:39.020 "uuid": "21b76f2e-a71a-455a-a3b3-f4d797237897", 00:22:39.020 "assigned_rate_limits": { 00:22:39.020 "rw_ios_per_sec": 0, 00:22:39.020 "rw_mbytes_per_sec": 0, 00:22:39.020 "r_mbytes_per_sec": 0, 00:22:39.020 "w_mbytes_per_sec": 0 00:22:39.020 }, 00:22:39.020 "claimed": true, 00:22:39.020 "claim_type": "exclusive_write", 00:22:39.020 "zoned": false, 00:22:39.020 "supported_io_types": { 00:22:39.020 "read": true, 00:22:39.020 "write": true, 00:22:39.020 "unmap": true, 00:22:39.020 "write_zeroes": true, 00:22:39.020 "flush": true, 00:22:39.020 "reset": true, 00:22:39.020 "compare": false, 00:22:39.020 "compare_and_write": false, 00:22:39.020 "abort": true, 00:22:39.020 "nvme_admin": false, 00:22:39.020 "nvme_io": false 00:22:39.020 }, 00:22:39.020 "memory_domains": [ 00:22:39.020 { 00:22:39.020 "dma_device_id": "system", 00:22:39.020 "dma_device_type": 1 00:22:39.020 }, 00:22:39.020 { 00:22:39.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.020 "dma_device_type": 2 00:22:39.020 } 00:22:39.020 ], 00:22:39.020 "driver_specific": {} 00:22:39.020 }' 00:22:39.020 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:39.020 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:39.020 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:39.020 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:39.020 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:39.291 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:39.292 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:39.292 07:33:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:39.292 07:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:39.292 07:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:39.292 07:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:39.292 07:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:39.292 07:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:39.292 07:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:39.292 07:33:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:39.549 07:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:39.549 "name": "BaseBdev4", 00:22:39.549 "aliases": [ 00:22:39.549 "cd42a69a-8d4a-4912-a487-36b724105738" 00:22:39.549 ], 00:22:39.549 "product_name": "Malloc disk", 00:22:39.549 "block_size": 512, 00:22:39.549 "num_blocks": 65536, 00:22:39.549 "uuid": "cd42a69a-8d4a-4912-a487-36b724105738", 00:22:39.549 "assigned_rate_limits": { 00:22:39.549 "rw_ios_per_sec": 0, 00:22:39.549 "rw_mbytes_per_sec": 0, 00:22:39.549 "r_mbytes_per_sec": 0, 00:22:39.549 "w_mbytes_per_sec": 0 00:22:39.549 }, 00:22:39.549 "claimed": true, 00:22:39.549 "claim_type": "exclusive_write", 00:22:39.549 "zoned": false, 00:22:39.549 "supported_io_types": { 00:22:39.549 "read": true, 00:22:39.549 "write": true, 00:22:39.549 "unmap": true, 00:22:39.549 "write_zeroes": true, 00:22:39.549 "flush": true, 00:22:39.549 "reset": true, 00:22:39.549 "compare": false, 00:22:39.549 "compare_and_write": false, 00:22:39.549 "abort": true, 00:22:39.549 "nvme_admin": false, 00:22:39.549 "nvme_io": false 00:22:39.549 }, 00:22:39.549 "memory_domains": [ 00:22:39.549 { 00:22:39.549 "dma_device_id": "system", 00:22:39.549 "dma_device_type": 1 00:22:39.549 }, 00:22:39.549 { 00:22:39.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.549 "dma_device_type": 2 00:22:39.549 } 00:22:39.549 ], 00:22:39.549 "driver_specific": {} 00:22:39.549 }' 00:22:39.549 07:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:39.806 07:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:39.806 07:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:39.806 07:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:39.807 07:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:39.807 07:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:39.807 07:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:39.807 07:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:39.807 07:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:39.807 07:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:40.064 07:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:40.064 07:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:40.064 07:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:40.064 [2024-07-12 07:33:13.945246] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:40.064 [2024-07-12 07:33:13.945532] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:40.064 [2024-07-12 07:33:13.945786] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:40.064 [2024-07-12 07:33:13.945967] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:40.064 [2024-07-12 07:33:13.946036] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name Existed_Raid, state offline 00:22:40.321 07:33:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 145247 00:22:40.321 07:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 145247 ']' 00:22:40.321 07:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 145247 00:22:40.321 07:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:22:40.321 07:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:40.321 07:33:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 145247 00:22:40.321 07:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:40.321 07:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:40.321 07:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 145247' 00:22:40.321 killing process with pid 145247 00:22:40.321 07:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 145247 00:22:40.321 [2024-07-12 07:33:14.005533] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:40.321 07:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 145247 00:22:40.321 [2024-07-12 07:33:14.083799] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:40.889 07:33:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:22:40.889 00:22:40.889 real 0m32.440s 00:22:40.889 user 0m59.608s 00:22:40.889 sys 0m5.627s 00:22:40.889 07:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:40.889 07:33:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.889 ************************************ 00:22:40.889 END TEST raid_state_function_test_sb 00:22:40.889 ************************************ 00:22:40.889 07:33:14 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:22:40.889 07:33:14 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:22:40.889 07:33:14 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:40.889 07:33:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:40.889 ************************************ 00:22:40.889 START TEST raid_superblock_test 00:22:40.889 ************************************ 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 4 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # 
base_bdevs_pt_uuid=() 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=146336 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 146336 /var/tmp/spdk-raid.sock 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 146336 ']' 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:40.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:40.889 07:33:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.889 [2024-07-12 07:33:14.641347] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:40.889 [2024-07-12 07:33:14.641831] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146336 ] 00:22:41.147 [2024-07-12 07:33:14.796370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.147 [2024-07-12 07:33:14.891192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.147 [2024-07-12 07:33:14.971121] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:42.081 07:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:42.081 07:33:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:22:42.081 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:22:42.081 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:42.081 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:22:42.081 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:22:42.081 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:42.081 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:42.081 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:42.081 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:42.081 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:42.081 malloc1 00:22:42.081 07:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:42.339 [2024-07-12 07:33:16.068089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:42.339 [2024-07-12 07:33:16.068476] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.339 [2024-07-12 07:33:16.068564] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:22:42.339 [2024-07-12 07:33:16.068824] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.339 [2024-07-12 07:33:16.071848] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.339 [2024-07-12 07:33:16.072026] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:42.339 pt1 00:22:42.339 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:42.339 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:42.339 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:22:42.339 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:22:42.339 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:42.339 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:22:42.339 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:42.339 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:42.339 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:42.597 malloc2 00:22:42.597 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:42.857 [2024-07-12 07:33:16.516101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:42.857 [2024-07-12 07:33:16.516422] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.857 [2024-07-12 07:33:16.516501] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:22:42.857 [2024-07-12 07:33:16.516635] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.857 [2024-07-12 07:33:16.519524] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.857 [2024-07-12 07:33:16.519693] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:42.857 pt2 00:22:42.857 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:42.857 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:42.857 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:22:42.857 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:22:42.857 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:42.857 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:42.857 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:42.857 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:42.857 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:43.117 malloc3 00:22:43.117 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:43.117 [2024-07-12 07:33:16.955037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:43.117 [2024-07-12 07:33:16.955422] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.117 [2024-07-12 07:33:16.955510] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:43.117 [2024-07-12 07:33:16.955661] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.117 [2024-07-12 07:33:16.958513] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.117 [2024-07-12 07:33:16.958694] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:43.117 pt3 00:22:43.117 07:33:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:43.117 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:43.117 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:22:43.117 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:22:43.117 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:43.117 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:43.117 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:43.117 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:43.117 07:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:22:43.375 malloc4 00:22:43.375 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:43.633 [2024-07-12 07:33:17.426830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:43.633 [2024-07-12 07:33:17.427127] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.633 [2024-07-12 07:33:17.427302] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:43.633 [2024-07-12 07:33:17.427429] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.633 [2024-07-12 07:33:17.430453] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.633 [2024-07-12 07:33:17.430638] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:43.633 pt4 00:22:43.633 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:43.633 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:43.633 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:22:43.891 [2024-07-12 07:33:17.675080] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:43.891 [2024-07-12 07:33:17.677771] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:43.891 [2024-07-12 07:33:17.677976] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:43.891 [2024-07-12 07:33:17.678137] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:43.891 [2024-07-12 07:33:17.678387] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:22:43.891 [2024-07-12 07:33:17.678486] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:43.891 [2024-07-12 07:33:17.678722] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:22:43.892 [2024-07-12 07:33:17.679262] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:22:43.892 [2024-07-12 07:33:17.679370] bdev_raid.c:1725:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:22:43.892 [2024-07-12 07:33:17.679680] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.892 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:43.892 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:43.892 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:43.892 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:43.892 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:43.892 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:43.892 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:43.892 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:43.892 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:43.892 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:43.892 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.892 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.150 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:44.150 "name": "raid_bdev1", 00:22:44.150 "uuid": "9116617c-c1d1-46fe-b28c-c2d161a8beb8", 00:22:44.150 "strip_size_kb": 64, 00:22:44.150 "state": "online", 00:22:44.150 "raid_level": "raid0", 00:22:44.150 "superblock": true, 00:22:44.150 "num_base_bdevs": 4, 00:22:44.150 "num_base_bdevs_discovered": 4, 00:22:44.150 "num_base_bdevs_operational": 4, 00:22:44.150 "base_bdevs_list": [ 00:22:44.150 { 00:22:44.150 "name": "pt1", 00:22:44.150 "uuid": "9e587aa0-7521-5ae1-b725-170b649efe09", 00:22:44.150 "is_configured": true, 00:22:44.150 "data_offset": 2048, 00:22:44.150 "data_size": 63488 00:22:44.150 }, 00:22:44.150 { 00:22:44.150 "name": "pt2", 00:22:44.150 "uuid": "7e1e1b4d-0110-5852-b3dd-1f8d09e185ab", 00:22:44.150 "is_configured": true, 00:22:44.150 "data_offset": 2048, 00:22:44.150 "data_size": 63488 00:22:44.150 }, 00:22:44.150 { 00:22:44.150 "name": "pt3", 00:22:44.150 "uuid": "520d19d9-bb83-5f5a-9aa1-609c0132ae56", 00:22:44.150 "is_configured": true, 00:22:44.150 "data_offset": 2048, 00:22:44.150 "data_size": 63488 00:22:44.150 }, 00:22:44.150 { 00:22:44.150 "name": "pt4", 00:22:44.150 "uuid": "769510ee-b7f7-5df1-86cb-21142639380f", 00:22:44.150 "is_configured": true, 00:22:44.150 "data_offset": 2048, 00:22:44.150 "data_size": 63488 00:22:44.150 } 00:22:44.150 ] 00:22:44.150 }' 00:22:44.150 07:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:44.150 07:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.716 07:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:22:44.716 07:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:44.716 07:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:44.716 
07:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:44.716 07:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:44.716 07:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:44.716 07:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:44.716 07:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:44.975 [2024-07-12 07:33:18.656122] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:44.975 07:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:44.975 "name": "raid_bdev1", 00:22:44.975 "aliases": [ 00:22:44.975 "9116617c-c1d1-46fe-b28c-c2d161a8beb8" 00:22:44.975 ], 00:22:44.975 "product_name": "Raid Volume", 00:22:44.975 "block_size": 512, 00:22:44.975 "num_blocks": 253952, 00:22:44.975 "uuid": "9116617c-c1d1-46fe-b28c-c2d161a8beb8", 00:22:44.975 "assigned_rate_limits": { 00:22:44.975 "rw_ios_per_sec": 0, 00:22:44.975 "rw_mbytes_per_sec": 0, 00:22:44.975 "r_mbytes_per_sec": 0, 00:22:44.975 "w_mbytes_per_sec": 0 00:22:44.975 }, 00:22:44.975 "claimed": false, 00:22:44.975 "zoned": false, 00:22:44.975 "supported_io_types": { 00:22:44.975 "read": true, 00:22:44.975 "write": true, 00:22:44.975 "unmap": true, 00:22:44.975 "write_zeroes": true, 00:22:44.975 "flush": true, 00:22:44.975 "reset": true, 00:22:44.975 "compare": false, 00:22:44.975 "compare_and_write": false, 00:22:44.975 "abort": false, 00:22:44.975 "nvme_admin": false, 00:22:44.975 "nvme_io": false 00:22:44.975 }, 00:22:44.975 "memory_domains": [ 00:22:44.975 { 00:22:44.975 "dma_device_id": "system", 00:22:44.975 "dma_device_type": 1 00:22:44.975 }, 00:22:44.975 { 00:22:44.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.975 "dma_device_type": 2 00:22:44.975 }, 00:22:44.975 { 00:22:44.975 "dma_device_id": "system", 00:22:44.975 "dma_device_type": 1 00:22:44.975 }, 00:22:44.975 { 00:22:44.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.975 "dma_device_type": 2 00:22:44.975 }, 00:22:44.975 { 00:22:44.975 "dma_device_id": "system", 00:22:44.975 "dma_device_type": 1 00:22:44.975 }, 00:22:44.975 { 00:22:44.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.975 "dma_device_type": 2 00:22:44.975 }, 00:22:44.975 { 00:22:44.976 "dma_device_id": "system", 00:22:44.976 "dma_device_type": 1 00:22:44.976 }, 00:22:44.976 { 00:22:44.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.976 "dma_device_type": 2 00:22:44.976 } 00:22:44.976 ], 00:22:44.976 "driver_specific": { 00:22:44.976 "raid": { 00:22:44.976 "uuid": "9116617c-c1d1-46fe-b28c-c2d161a8beb8", 00:22:44.976 "strip_size_kb": 64, 00:22:44.976 "state": "online", 00:22:44.976 "raid_level": "raid0", 00:22:44.976 "superblock": true, 00:22:44.976 "num_base_bdevs": 4, 00:22:44.976 "num_base_bdevs_discovered": 4, 00:22:44.976 "num_base_bdevs_operational": 4, 00:22:44.976 "base_bdevs_list": [ 00:22:44.976 { 00:22:44.976 "name": "pt1", 00:22:44.976 "uuid": "9e587aa0-7521-5ae1-b725-170b649efe09", 00:22:44.976 "is_configured": true, 00:22:44.976 "data_offset": 2048, 00:22:44.976 "data_size": 63488 00:22:44.976 }, 00:22:44.976 { 00:22:44.976 "name": "pt2", 00:22:44.976 "uuid": "7e1e1b4d-0110-5852-b3dd-1f8d09e185ab", 00:22:44.976 "is_configured": true, 00:22:44.976 "data_offset": 2048, 00:22:44.976 "data_size": 63488 00:22:44.976 }, 00:22:44.976 
{ 00:22:44.976 "name": "pt3", 00:22:44.976 "uuid": "520d19d9-bb83-5f5a-9aa1-609c0132ae56", 00:22:44.976 "is_configured": true, 00:22:44.976 "data_offset": 2048, 00:22:44.976 "data_size": 63488 00:22:44.976 }, 00:22:44.976 { 00:22:44.976 "name": "pt4", 00:22:44.976 "uuid": "769510ee-b7f7-5df1-86cb-21142639380f", 00:22:44.976 "is_configured": true, 00:22:44.976 "data_offset": 2048, 00:22:44.976 "data_size": 63488 00:22:44.976 } 00:22:44.976 ] 00:22:44.976 } 00:22:44.976 } 00:22:44.976 }' 00:22:44.976 07:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:44.976 07:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:44.976 pt2 00:22:44.976 pt3 00:22:44.976 pt4' 00:22:44.976 07:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:44.976 07:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:44.976 07:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:45.235 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:45.235 "name": "pt1", 00:22:45.235 "aliases": [ 00:22:45.235 "9e587aa0-7521-5ae1-b725-170b649efe09" 00:22:45.235 ], 00:22:45.236 "product_name": "passthru", 00:22:45.236 "block_size": 512, 00:22:45.236 "num_blocks": 65536, 00:22:45.236 "uuid": "9e587aa0-7521-5ae1-b725-170b649efe09", 00:22:45.236 "assigned_rate_limits": { 00:22:45.236 "rw_ios_per_sec": 0, 00:22:45.236 "rw_mbytes_per_sec": 0, 00:22:45.236 "r_mbytes_per_sec": 0, 00:22:45.236 "w_mbytes_per_sec": 0 00:22:45.236 }, 00:22:45.236 "claimed": true, 00:22:45.236 "claim_type": "exclusive_write", 00:22:45.236 "zoned": false, 00:22:45.236 "supported_io_types": { 00:22:45.236 "read": true, 00:22:45.236 "write": true, 00:22:45.236 "unmap": true, 00:22:45.236 "write_zeroes": true, 00:22:45.236 "flush": true, 00:22:45.236 "reset": true, 00:22:45.236 "compare": false, 00:22:45.236 "compare_and_write": false, 00:22:45.236 "abort": true, 00:22:45.236 "nvme_admin": false, 00:22:45.236 "nvme_io": false 00:22:45.236 }, 00:22:45.236 "memory_domains": [ 00:22:45.236 { 00:22:45.236 "dma_device_id": "system", 00:22:45.236 "dma_device_type": 1 00:22:45.236 }, 00:22:45.236 { 00:22:45.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.236 "dma_device_type": 2 00:22:45.236 } 00:22:45.236 ], 00:22:45.236 "driver_specific": { 00:22:45.236 "passthru": { 00:22:45.236 "name": "pt1", 00:22:45.236 "base_bdev_name": "malloc1" 00:22:45.236 } 00:22:45.236 } 00:22:45.236 }' 00:22:45.236 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:45.236 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:45.236 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:45.236 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:45.494 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:45.494 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:45.494 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:45.494 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:45.494 07:33:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:45.494 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:45.494 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:45.494 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:45.494 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:45.494 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:45.494 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:46.062 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:46.062 "name": "pt2", 00:22:46.062 "aliases": [ 00:22:46.062 "7e1e1b4d-0110-5852-b3dd-1f8d09e185ab" 00:22:46.062 ], 00:22:46.062 "product_name": "passthru", 00:22:46.062 "block_size": 512, 00:22:46.062 "num_blocks": 65536, 00:22:46.062 "uuid": "7e1e1b4d-0110-5852-b3dd-1f8d09e185ab", 00:22:46.062 "assigned_rate_limits": { 00:22:46.062 "rw_ios_per_sec": 0, 00:22:46.062 "rw_mbytes_per_sec": 0, 00:22:46.062 "r_mbytes_per_sec": 0, 00:22:46.062 "w_mbytes_per_sec": 0 00:22:46.062 }, 00:22:46.062 "claimed": true, 00:22:46.062 "claim_type": "exclusive_write", 00:22:46.062 "zoned": false, 00:22:46.062 "supported_io_types": { 00:22:46.062 "read": true, 00:22:46.062 "write": true, 00:22:46.062 "unmap": true, 00:22:46.062 "write_zeroes": true, 00:22:46.062 "flush": true, 00:22:46.062 "reset": true, 00:22:46.062 "compare": false, 00:22:46.062 "compare_and_write": false, 00:22:46.062 "abort": true, 00:22:46.062 "nvme_admin": false, 00:22:46.062 "nvme_io": false 00:22:46.062 }, 00:22:46.062 "memory_domains": [ 00:22:46.062 { 00:22:46.062 "dma_device_id": "system", 00:22:46.062 "dma_device_type": 1 00:22:46.062 }, 00:22:46.062 { 00:22:46.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:46.062 "dma_device_type": 2 00:22:46.062 } 00:22:46.062 ], 00:22:46.062 "driver_specific": { 00:22:46.062 "passthru": { 00:22:46.062 "name": "pt2", 00:22:46.062 "base_bdev_name": "malloc2" 00:22:46.062 } 00:22:46.062 } 00:22:46.062 }' 00:22:46.062 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:46.062 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:46.062 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:46.062 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:46.062 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:46.062 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:46.062 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:46.062 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:46.062 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:46.062 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:46.062 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:46.321 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:46.321 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:22:46.321 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:46.321 07:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:46.321 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:46.321 "name": "pt3", 00:22:46.321 "aliases": [ 00:22:46.321 "520d19d9-bb83-5f5a-9aa1-609c0132ae56" 00:22:46.321 ], 00:22:46.321 "product_name": "passthru", 00:22:46.321 "block_size": 512, 00:22:46.321 "num_blocks": 65536, 00:22:46.321 "uuid": "520d19d9-bb83-5f5a-9aa1-609c0132ae56", 00:22:46.321 "assigned_rate_limits": { 00:22:46.321 "rw_ios_per_sec": 0, 00:22:46.321 "rw_mbytes_per_sec": 0, 00:22:46.321 "r_mbytes_per_sec": 0, 00:22:46.321 "w_mbytes_per_sec": 0 00:22:46.321 }, 00:22:46.321 "claimed": true, 00:22:46.321 "claim_type": "exclusive_write", 00:22:46.321 "zoned": false, 00:22:46.321 "supported_io_types": { 00:22:46.321 "read": true, 00:22:46.321 "write": true, 00:22:46.321 "unmap": true, 00:22:46.321 "write_zeroes": true, 00:22:46.321 "flush": true, 00:22:46.321 "reset": true, 00:22:46.321 "compare": false, 00:22:46.321 "compare_and_write": false, 00:22:46.321 "abort": true, 00:22:46.321 "nvme_admin": false, 00:22:46.321 "nvme_io": false 00:22:46.321 }, 00:22:46.321 "memory_domains": [ 00:22:46.321 { 00:22:46.321 "dma_device_id": "system", 00:22:46.321 "dma_device_type": 1 00:22:46.321 }, 00:22:46.321 { 00:22:46.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:46.321 "dma_device_type": 2 00:22:46.321 } 00:22:46.321 ], 00:22:46.321 "driver_specific": { 00:22:46.321 "passthru": { 00:22:46.321 "name": "pt3", 00:22:46.321 "base_bdev_name": "malloc3" 00:22:46.321 } 00:22:46.321 } 00:22:46.321 }' 00:22:46.321 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:46.580 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:46.580 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:46.580 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:46.580 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:46.580 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:46.580 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:46.580 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:46.580 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:46.580 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:46.580 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:46.839 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:46.839 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:46.839 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:46.839 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:47.098 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:47.098 "name": "pt4", 00:22:47.098 "aliases": [ 
00:22:47.098 "769510ee-b7f7-5df1-86cb-21142639380f" 00:22:47.098 ], 00:22:47.098 "product_name": "passthru", 00:22:47.098 "block_size": 512, 00:22:47.098 "num_blocks": 65536, 00:22:47.098 "uuid": "769510ee-b7f7-5df1-86cb-21142639380f", 00:22:47.098 "assigned_rate_limits": { 00:22:47.098 "rw_ios_per_sec": 0, 00:22:47.098 "rw_mbytes_per_sec": 0, 00:22:47.098 "r_mbytes_per_sec": 0, 00:22:47.098 "w_mbytes_per_sec": 0 00:22:47.098 }, 00:22:47.098 "claimed": true, 00:22:47.098 "claim_type": "exclusive_write", 00:22:47.098 "zoned": false, 00:22:47.098 "supported_io_types": { 00:22:47.098 "read": true, 00:22:47.098 "write": true, 00:22:47.098 "unmap": true, 00:22:47.098 "write_zeroes": true, 00:22:47.098 "flush": true, 00:22:47.098 "reset": true, 00:22:47.098 "compare": false, 00:22:47.098 "compare_and_write": false, 00:22:47.098 "abort": true, 00:22:47.098 "nvme_admin": false, 00:22:47.098 "nvme_io": false 00:22:47.098 }, 00:22:47.098 "memory_domains": [ 00:22:47.098 { 00:22:47.098 "dma_device_id": "system", 00:22:47.098 "dma_device_type": 1 00:22:47.098 }, 00:22:47.098 { 00:22:47.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.098 "dma_device_type": 2 00:22:47.098 } 00:22:47.098 ], 00:22:47.098 "driver_specific": { 00:22:47.098 "passthru": { 00:22:47.098 "name": "pt4", 00:22:47.098 "base_bdev_name": "malloc4" 00:22:47.098 } 00:22:47.098 } 00:22:47.098 }' 00:22:47.099 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:47.099 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:47.099 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:47.099 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:47.099 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:47.099 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:47.099 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:47.357 07:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:47.357 07:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:47.357 07:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:47.357 07:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:47.357 07:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:47.357 07:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:47.357 07:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:22:47.616 [2024-07-12 07:33:21.412673] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:47.616 07:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=9116617c-c1d1-46fe-b28c-c2d161a8beb8 00:22:47.616 07:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 9116617c-c1d1-46fe-b28c-c2d161a8beb8 ']' 00:22:47.616 07:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:47.876 [2024-07-12 07:33:21.620492] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:47.876 
[2024-07-12 07:33:21.620751] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:47.876 [2024-07-12 07:33:21.621006] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:47.876 [2024-07-12 07:33:21.621185] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:47.876 [2024-07-12 07:33:21.621300] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:22:47.876 07:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.876 07:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:22:48.135 07:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:22:48.135 07:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:22:48.135 07:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:48.135 07:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:48.395 07:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:48.395 07:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:48.725 07:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:48.725 07:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:49.011 07:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:49.011 07:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:49.011 07:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:49.011 07:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:49.268 07:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:22:49.269 07:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:49.269 07:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:22:49.269 07:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:49.269 07:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:49.527 07:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.527 07:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # 
type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:49.527 07:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.527 07:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:49.527 07:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.527 07:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:49.527 07:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:49.527 07:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:49.527 [2024-07-12 07:33:23.340779] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:49.527 [2024-07-12 07:33:23.343513] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:49.527 [2024-07-12 07:33:23.343685] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:49.527 [2024-07-12 07:33:23.343748] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:49.527 [2024-07-12 07:33:23.343877] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:49.527 [2024-07-12 07:33:23.344055] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:49.527 [2024-07-12 07:33:23.344159] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:49.527 [2024-07-12 07:33:23.344320] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:22:49.527 [2024-07-12 07:33:23.344424] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:49.527 [2024-07-12 07:33:23.344512] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:22:49.527 request: 00:22:49.527 { 00:22:49.527 "name": "raid_bdev1", 00:22:49.527 "raid_level": "raid0", 00:22:49.527 "base_bdevs": [ 00:22:49.527 "malloc1", 00:22:49.527 "malloc2", 00:22:49.527 "malloc3", 00:22:49.527 "malloc4" 00:22:49.527 ], 00:22:49.527 "superblock": false, 00:22:49.527 "strip_size_kb": 64, 00:22:49.527 "method": "bdev_raid_create", 00:22:49.527 "req_id": 1 00:22:49.527 } 00:22:49.527 Got JSON-RPC error response 00:22:49.527 response: 00:22:49.527 { 00:22:49.527 "code": -17, 00:22:49.527 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:49.527 } 00:22:49.527 07:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:22:49.527 07:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:49.527 07:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:49.527 07:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:49.527 07:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:22:49.527 07:33:23 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.786 07:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:22:49.786 07:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:22:49.786 07:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:50.045 [2024-07-12 07:33:23.805757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:50.045 [2024-07-12 07:33:23.806178] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:50.045 [2024-07-12 07:33:23.806283] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:50.045 [2024-07-12 07:33:23.806579] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:50.045 [2024-07-12 07:33:23.810557] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:50.045 [2024-07-12 07:33:23.810829] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:50.045 [2024-07-12 07:33:23.811100] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:50.045 [2024-07-12 07:33:23.811317] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:50.045 pt1 00:22:50.045 07:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:22:50.045 07:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:50.045 07:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:50.045 07:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:50.045 07:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:50.045 07:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:50.045 07:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:50.045 07:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:50.045 07:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:50.045 07:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:50.045 07:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.045 07:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.305 07:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:50.305 "name": "raid_bdev1", 00:22:50.305 "uuid": "9116617c-c1d1-46fe-b28c-c2d161a8beb8", 00:22:50.305 "strip_size_kb": 64, 00:22:50.305 "state": "configuring", 00:22:50.305 "raid_level": "raid0", 00:22:50.305 "superblock": true, 00:22:50.305 "num_base_bdevs": 4, 00:22:50.305 "num_base_bdevs_discovered": 1, 00:22:50.305 "num_base_bdevs_operational": 4, 00:22:50.305 "base_bdevs_list": [ 00:22:50.305 { 00:22:50.305 "name": "pt1", 00:22:50.305 "uuid": "9e587aa0-7521-5ae1-b725-170b649efe09", 
00:22:50.305 "is_configured": true, 00:22:50.305 "data_offset": 2048, 00:22:50.305 "data_size": 63488 00:22:50.305 }, 00:22:50.305 { 00:22:50.305 "name": null, 00:22:50.305 "uuid": "7e1e1b4d-0110-5852-b3dd-1f8d09e185ab", 00:22:50.305 "is_configured": false, 00:22:50.305 "data_offset": 2048, 00:22:50.305 "data_size": 63488 00:22:50.305 }, 00:22:50.305 { 00:22:50.305 "name": null, 00:22:50.305 "uuid": "520d19d9-bb83-5f5a-9aa1-609c0132ae56", 00:22:50.305 "is_configured": false, 00:22:50.305 "data_offset": 2048, 00:22:50.305 "data_size": 63488 00:22:50.305 }, 00:22:50.305 { 00:22:50.305 "name": null, 00:22:50.305 "uuid": "769510ee-b7f7-5df1-86cb-21142639380f", 00:22:50.305 "is_configured": false, 00:22:50.305 "data_offset": 2048, 00:22:50.305 "data_size": 63488 00:22:50.305 } 00:22:50.305 ] 00:22:50.305 }' 00:22:50.305 07:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:50.305 07:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.873 07:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:22:50.873 07:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:51.132 [2024-07-12 07:33:24.927437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:51.132 [2024-07-12 07:33:24.927821] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:51.132 [2024-07-12 07:33:24.927904] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:51.132 [2024-07-12 07:33:24.927997] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:51.132 [2024-07-12 07:33:24.928523] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:51.132 [2024-07-12 07:33:24.928687] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:51.132 [2024-07-12 07:33:24.928874] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:51.132 [2024-07-12 07:33:24.928926] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:51.132 pt2 00:22:51.132 07:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:51.391 [2024-07-12 07:33:25.199642] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:51.391 07:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:22:51.391 07:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:51.391 07:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:51.391 07:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:51.391 07:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:51.391 07:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:51.391 07:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:51.391 07:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:51.391 07:33:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:51.391 07:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:51.391 07:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.391 07:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.649 07:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:51.649 "name": "raid_bdev1", 00:22:51.649 "uuid": "9116617c-c1d1-46fe-b28c-c2d161a8beb8", 00:22:51.649 "strip_size_kb": 64, 00:22:51.649 "state": "configuring", 00:22:51.649 "raid_level": "raid0", 00:22:51.649 "superblock": true, 00:22:51.649 "num_base_bdevs": 4, 00:22:51.649 "num_base_bdevs_discovered": 1, 00:22:51.649 "num_base_bdevs_operational": 4, 00:22:51.649 "base_bdevs_list": [ 00:22:51.649 { 00:22:51.649 "name": "pt1", 00:22:51.649 "uuid": "9e587aa0-7521-5ae1-b725-170b649efe09", 00:22:51.649 "is_configured": true, 00:22:51.649 "data_offset": 2048, 00:22:51.649 "data_size": 63488 00:22:51.649 }, 00:22:51.649 { 00:22:51.649 "name": null, 00:22:51.649 "uuid": "7e1e1b4d-0110-5852-b3dd-1f8d09e185ab", 00:22:51.649 "is_configured": false, 00:22:51.649 "data_offset": 2048, 00:22:51.649 "data_size": 63488 00:22:51.649 }, 00:22:51.649 { 00:22:51.649 "name": null, 00:22:51.649 "uuid": "520d19d9-bb83-5f5a-9aa1-609c0132ae56", 00:22:51.649 "is_configured": false, 00:22:51.649 "data_offset": 2048, 00:22:51.649 "data_size": 63488 00:22:51.649 }, 00:22:51.649 { 00:22:51.649 "name": null, 00:22:51.649 "uuid": "769510ee-b7f7-5df1-86cb-21142639380f", 00:22:51.649 "is_configured": false, 00:22:51.649 "data_offset": 2048, 00:22:51.649 "data_size": 63488 00:22:51.649 } 00:22:51.649 ] 00:22:51.649 }' 00:22:51.649 07:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:51.649 07:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.215 07:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:22:52.215 07:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:52.215 07:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:52.473 [2024-07-12 07:33:26.167690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:52.473 [2024-07-12 07:33:26.168043] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:52.473 [2024-07-12 07:33:26.168126] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:22:52.473 [2024-07-12 07:33:26.168228] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:52.473 [2024-07-12 07:33:26.168795] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:52.473 [2024-07-12 07:33:26.168968] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:52.473 [2024-07-12 07:33:26.169164] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:52.473 [2024-07-12 07:33:26.169300] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:52.473 pt2 00:22:52.473 07:33:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:52.473 07:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:52.473 07:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:52.732 [2024-07-12 07:33:26.367744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:52.732 [2024-07-12 07:33:26.368080] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:52.732 [2024-07-12 07:33:26.368155] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:52.732 [2024-07-12 07:33:26.368259] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:52.732 [2024-07-12 07:33:26.368787] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:52.732 [2024-07-12 07:33:26.368961] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:52.732 [2024-07-12 07:33:26.369191] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:52.732 [2024-07-12 07:33:26.369336] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:52.732 pt3 00:22:52.732 07:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:52.732 07:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:52.732 07:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:52.732 [2024-07-12 07:33:26.607769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:52.732 [2024-07-12 07:33:26.608085] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:52.732 [2024-07-12 07:33:26.608163] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:52.732 [2024-07-12 07:33:26.608277] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:52.732 [2024-07-12 07:33:26.608865] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:52.732 [2024-07-12 07:33:26.609036] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:52.732 [2024-07-12 07:33:26.609213] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:52.732 [2024-07-12 07:33:26.609339] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:52.732 [2024-07-12 07:33:26.609527] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:22:52.732 [2024-07-12 07:33:26.609623] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:52.732 [2024-07-12 07:33:26.609750] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:22:52.732 [2024-07-12 07:33:26.610184] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:22:52.732 [2024-07-12 07:33:26.610310] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:22:52.732 [2024-07-12 07:33:26.610493] bdev_raid.c: 331:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:22:52.732 pt4 00:22:52.990 07:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:52.990 07:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:52.990 07:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:52.990 07:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:52.990 07:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:52.990 07:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:52.990 07:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:52.990 07:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:52.990 07:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:52.990 07:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:52.990 07:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:52.990 07:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:52.990 07:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.990 07:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.248 07:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:53.248 "name": "raid_bdev1", 00:22:53.248 "uuid": "9116617c-c1d1-46fe-b28c-c2d161a8beb8", 00:22:53.248 "strip_size_kb": 64, 00:22:53.248 "state": "online", 00:22:53.248 "raid_level": "raid0", 00:22:53.248 "superblock": true, 00:22:53.248 "num_base_bdevs": 4, 00:22:53.248 "num_base_bdevs_discovered": 4, 00:22:53.248 "num_base_bdevs_operational": 4, 00:22:53.248 "base_bdevs_list": [ 00:22:53.248 { 00:22:53.248 "name": "pt1", 00:22:53.248 "uuid": "9e587aa0-7521-5ae1-b725-170b649efe09", 00:22:53.248 "is_configured": true, 00:22:53.248 "data_offset": 2048, 00:22:53.248 "data_size": 63488 00:22:53.248 }, 00:22:53.248 { 00:22:53.248 "name": "pt2", 00:22:53.248 "uuid": "7e1e1b4d-0110-5852-b3dd-1f8d09e185ab", 00:22:53.248 "is_configured": true, 00:22:53.248 "data_offset": 2048, 00:22:53.248 "data_size": 63488 00:22:53.248 }, 00:22:53.248 { 00:22:53.248 "name": "pt3", 00:22:53.248 "uuid": "520d19d9-bb83-5f5a-9aa1-609c0132ae56", 00:22:53.248 "is_configured": true, 00:22:53.248 "data_offset": 2048, 00:22:53.248 "data_size": 63488 00:22:53.248 }, 00:22:53.248 { 00:22:53.248 "name": "pt4", 00:22:53.248 "uuid": "769510ee-b7f7-5df1-86cb-21142639380f", 00:22:53.248 "is_configured": true, 00:22:53.248 "data_offset": 2048, 00:22:53.248 "data_size": 63488 00:22:53.248 } 00:22:53.248 ] 00:22:53.248 }' 00:22:53.248 07:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:53.248 07:33:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.815 07:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:22:53.815 07:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:53.815 07:33:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:53.815 07:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:53.815 07:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:53.815 07:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:53.815 07:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:53.815 07:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:53.815 [2024-07-12 07:33:27.688236] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:54.075 07:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:54.075 "name": "raid_bdev1", 00:22:54.075 "aliases": [ 00:22:54.075 "9116617c-c1d1-46fe-b28c-c2d161a8beb8" 00:22:54.075 ], 00:22:54.075 "product_name": "Raid Volume", 00:22:54.075 "block_size": 512, 00:22:54.075 "num_blocks": 253952, 00:22:54.075 "uuid": "9116617c-c1d1-46fe-b28c-c2d161a8beb8", 00:22:54.075 "assigned_rate_limits": { 00:22:54.075 "rw_ios_per_sec": 0, 00:22:54.075 "rw_mbytes_per_sec": 0, 00:22:54.075 "r_mbytes_per_sec": 0, 00:22:54.075 "w_mbytes_per_sec": 0 00:22:54.075 }, 00:22:54.075 "claimed": false, 00:22:54.075 "zoned": false, 00:22:54.075 "supported_io_types": { 00:22:54.075 "read": true, 00:22:54.075 "write": true, 00:22:54.075 "unmap": true, 00:22:54.075 "write_zeroes": true, 00:22:54.075 "flush": true, 00:22:54.075 "reset": true, 00:22:54.075 "compare": false, 00:22:54.075 "compare_and_write": false, 00:22:54.075 "abort": false, 00:22:54.075 "nvme_admin": false, 00:22:54.075 "nvme_io": false 00:22:54.075 }, 00:22:54.075 "memory_domains": [ 00:22:54.075 { 00:22:54.075 "dma_device_id": "system", 00:22:54.075 "dma_device_type": 1 00:22:54.075 }, 00:22:54.075 { 00:22:54.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.075 "dma_device_type": 2 00:22:54.075 }, 00:22:54.075 { 00:22:54.075 "dma_device_id": "system", 00:22:54.075 "dma_device_type": 1 00:22:54.075 }, 00:22:54.075 { 00:22:54.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.075 "dma_device_type": 2 00:22:54.075 }, 00:22:54.075 { 00:22:54.075 "dma_device_id": "system", 00:22:54.075 "dma_device_type": 1 00:22:54.075 }, 00:22:54.075 { 00:22:54.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.075 "dma_device_type": 2 00:22:54.075 }, 00:22:54.075 { 00:22:54.075 "dma_device_id": "system", 00:22:54.075 "dma_device_type": 1 00:22:54.075 }, 00:22:54.075 { 00:22:54.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.075 "dma_device_type": 2 00:22:54.075 } 00:22:54.075 ], 00:22:54.075 "driver_specific": { 00:22:54.075 "raid": { 00:22:54.075 "uuid": "9116617c-c1d1-46fe-b28c-c2d161a8beb8", 00:22:54.075 "strip_size_kb": 64, 00:22:54.075 "state": "online", 00:22:54.075 "raid_level": "raid0", 00:22:54.075 "superblock": true, 00:22:54.075 "num_base_bdevs": 4, 00:22:54.075 "num_base_bdevs_discovered": 4, 00:22:54.075 "num_base_bdevs_operational": 4, 00:22:54.075 "base_bdevs_list": [ 00:22:54.075 { 00:22:54.075 "name": "pt1", 00:22:54.075 "uuid": "9e587aa0-7521-5ae1-b725-170b649efe09", 00:22:54.075 "is_configured": true, 00:22:54.075 "data_offset": 2048, 00:22:54.075 "data_size": 63488 00:22:54.075 }, 00:22:54.075 { 00:22:54.075 "name": "pt2", 00:22:54.075 "uuid": "7e1e1b4d-0110-5852-b3dd-1f8d09e185ab", 00:22:54.075 "is_configured": true, 00:22:54.075 "data_offset": 2048, 
00:22:54.075 "data_size": 63488 00:22:54.075 }, 00:22:54.075 { 00:22:54.075 "name": "pt3", 00:22:54.075 "uuid": "520d19d9-bb83-5f5a-9aa1-609c0132ae56", 00:22:54.075 "is_configured": true, 00:22:54.075 "data_offset": 2048, 00:22:54.075 "data_size": 63488 00:22:54.075 }, 00:22:54.075 { 00:22:54.075 "name": "pt4", 00:22:54.075 "uuid": "769510ee-b7f7-5df1-86cb-21142639380f", 00:22:54.075 "is_configured": true, 00:22:54.075 "data_offset": 2048, 00:22:54.075 "data_size": 63488 00:22:54.075 } 00:22:54.075 ] 00:22:54.075 } 00:22:54.075 } 00:22:54.075 }' 00:22:54.075 07:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:54.075 07:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:54.075 pt2 00:22:54.075 pt3 00:22:54.075 pt4' 00:22:54.075 07:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:54.075 07:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:54.075 07:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:54.335 07:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:54.335 "name": "pt1", 00:22:54.335 "aliases": [ 00:22:54.335 "9e587aa0-7521-5ae1-b725-170b649efe09" 00:22:54.335 ], 00:22:54.335 "product_name": "passthru", 00:22:54.335 "block_size": 512, 00:22:54.335 "num_blocks": 65536, 00:22:54.335 "uuid": "9e587aa0-7521-5ae1-b725-170b649efe09", 00:22:54.335 "assigned_rate_limits": { 00:22:54.335 "rw_ios_per_sec": 0, 00:22:54.335 "rw_mbytes_per_sec": 0, 00:22:54.335 "r_mbytes_per_sec": 0, 00:22:54.335 "w_mbytes_per_sec": 0 00:22:54.335 }, 00:22:54.335 "claimed": true, 00:22:54.335 "claim_type": "exclusive_write", 00:22:54.335 "zoned": false, 00:22:54.335 "supported_io_types": { 00:22:54.335 "read": true, 00:22:54.335 "write": true, 00:22:54.335 "unmap": true, 00:22:54.335 "write_zeroes": true, 00:22:54.335 "flush": true, 00:22:54.335 "reset": true, 00:22:54.335 "compare": false, 00:22:54.335 "compare_and_write": false, 00:22:54.335 "abort": true, 00:22:54.335 "nvme_admin": false, 00:22:54.335 "nvme_io": false 00:22:54.335 }, 00:22:54.335 "memory_domains": [ 00:22:54.335 { 00:22:54.335 "dma_device_id": "system", 00:22:54.335 "dma_device_type": 1 00:22:54.335 }, 00:22:54.335 { 00:22:54.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.335 "dma_device_type": 2 00:22:54.335 } 00:22:54.335 ], 00:22:54.335 "driver_specific": { 00:22:54.335 "passthru": { 00:22:54.335 "name": "pt1", 00:22:54.335 "base_bdev_name": "malloc1" 00:22:54.335 } 00:22:54.335 } 00:22:54.335 }' 00:22:54.335 07:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.335 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.335 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:54.335 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.335 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.335 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:54.335 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.593 07:33:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.593 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:54.593 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:54.593 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:54.593 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:54.593 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:54.593 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:54.593 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:54.852 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:54.852 "name": "pt2", 00:22:54.852 "aliases": [ 00:22:54.852 "7e1e1b4d-0110-5852-b3dd-1f8d09e185ab" 00:22:54.852 ], 00:22:54.852 "product_name": "passthru", 00:22:54.852 "block_size": 512, 00:22:54.852 "num_blocks": 65536, 00:22:54.852 "uuid": "7e1e1b4d-0110-5852-b3dd-1f8d09e185ab", 00:22:54.852 "assigned_rate_limits": { 00:22:54.852 "rw_ios_per_sec": 0, 00:22:54.852 "rw_mbytes_per_sec": 0, 00:22:54.852 "r_mbytes_per_sec": 0, 00:22:54.852 "w_mbytes_per_sec": 0 00:22:54.852 }, 00:22:54.852 "claimed": true, 00:22:54.852 "claim_type": "exclusive_write", 00:22:54.852 "zoned": false, 00:22:54.852 "supported_io_types": { 00:22:54.852 "read": true, 00:22:54.852 "write": true, 00:22:54.852 "unmap": true, 00:22:54.852 "write_zeroes": true, 00:22:54.852 "flush": true, 00:22:54.852 "reset": true, 00:22:54.852 "compare": false, 00:22:54.852 "compare_and_write": false, 00:22:54.852 "abort": true, 00:22:54.852 "nvme_admin": false, 00:22:54.852 "nvme_io": false 00:22:54.852 }, 00:22:54.852 "memory_domains": [ 00:22:54.852 { 00:22:54.852 "dma_device_id": "system", 00:22:54.852 "dma_device_type": 1 00:22:54.852 }, 00:22:54.852 { 00:22:54.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.852 "dma_device_type": 2 00:22:54.852 } 00:22:54.852 ], 00:22:54.852 "driver_specific": { 00:22:54.852 "passthru": { 00:22:54.852 "name": "pt2", 00:22:54.852 "base_bdev_name": "malloc2" 00:22:54.852 } 00:22:54.852 } 00:22:54.852 }' 00:22:54.852 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.852 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.852 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:54.852 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:55.112 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:55.112 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:55.112 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:55.112 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:55.112 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:55.112 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.112 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.112 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:55.112 
07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:55.112 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:55.112 07:33:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:55.679 07:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:55.679 "name": "pt3", 00:22:55.679 "aliases": [ 00:22:55.679 "520d19d9-bb83-5f5a-9aa1-609c0132ae56" 00:22:55.679 ], 00:22:55.679 "product_name": "passthru", 00:22:55.679 "block_size": 512, 00:22:55.679 "num_blocks": 65536, 00:22:55.679 "uuid": "520d19d9-bb83-5f5a-9aa1-609c0132ae56", 00:22:55.679 "assigned_rate_limits": { 00:22:55.679 "rw_ios_per_sec": 0, 00:22:55.679 "rw_mbytes_per_sec": 0, 00:22:55.679 "r_mbytes_per_sec": 0, 00:22:55.679 "w_mbytes_per_sec": 0 00:22:55.679 }, 00:22:55.679 "claimed": true, 00:22:55.679 "claim_type": "exclusive_write", 00:22:55.679 "zoned": false, 00:22:55.679 "supported_io_types": { 00:22:55.679 "read": true, 00:22:55.679 "write": true, 00:22:55.679 "unmap": true, 00:22:55.679 "write_zeroes": true, 00:22:55.679 "flush": true, 00:22:55.679 "reset": true, 00:22:55.679 "compare": false, 00:22:55.679 "compare_and_write": false, 00:22:55.679 "abort": true, 00:22:55.679 "nvme_admin": false, 00:22:55.679 "nvme_io": false 00:22:55.679 }, 00:22:55.679 "memory_domains": [ 00:22:55.679 { 00:22:55.679 "dma_device_id": "system", 00:22:55.679 "dma_device_type": 1 00:22:55.679 }, 00:22:55.679 { 00:22:55.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:55.679 "dma_device_type": 2 00:22:55.679 } 00:22:55.679 ], 00:22:55.679 "driver_specific": { 00:22:55.679 "passthru": { 00:22:55.679 "name": "pt3", 00:22:55.679 "base_bdev_name": "malloc3" 00:22:55.679 } 00:22:55.679 } 00:22:55.679 }' 00:22:55.679 07:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:55.679 07:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:55.679 07:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:55.679 07:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:55.679 07:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:55.679 07:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:55.679 07:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:55.679 07:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:55.679 07:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:55.679 07:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.938 07:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.938 07:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:55.938 07:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:55.938 07:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:55.938 07:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:56.196 07:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:22:56.196 "name": "pt4", 00:22:56.196 "aliases": [ 00:22:56.196 "769510ee-b7f7-5df1-86cb-21142639380f" 00:22:56.196 ], 00:22:56.196 "product_name": "passthru", 00:22:56.196 "block_size": 512, 00:22:56.196 "num_blocks": 65536, 00:22:56.196 "uuid": "769510ee-b7f7-5df1-86cb-21142639380f", 00:22:56.196 "assigned_rate_limits": { 00:22:56.196 "rw_ios_per_sec": 0, 00:22:56.196 "rw_mbytes_per_sec": 0, 00:22:56.196 "r_mbytes_per_sec": 0, 00:22:56.196 "w_mbytes_per_sec": 0 00:22:56.196 }, 00:22:56.196 "claimed": true, 00:22:56.196 "claim_type": "exclusive_write", 00:22:56.196 "zoned": false, 00:22:56.196 "supported_io_types": { 00:22:56.196 "read": true, 00:22:56.196 "write": true, 00:22:56.196 "unmap": true, 00:22:56.196 "write_zeroes": true, 00:22:56.196 "flush": true, 00:22:56.196 "reset": true, 00:22:56.196 "compare": false, 00:22:56.196 "compare_and_write": false, 00:22:56.196 "abort": true, 00:22:56.196 "nvme_admin": false, 00:22:56.196 "nvme_io": false 00:22:56.196 }, 00:22:56.196 "memory_domains": [ 00:22:56.196 { 00:22:56.196 "dma_device_id": "system", 00:22:56.196 "dma_device_type": 1 00:22:56.196 }, 00:22:56.196 { 00:22:56.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.196 "dma_device_type": 2 00:22:56.196 } 00:22:56.196 ], 00:22:56.196 "driver_specific": { 00:22:56.196 "passthru": { 00:22:56.196 "name": "pt4", 00:22:56.196 "base_bdev_name": "malloc4" 00:22:56.196 } 00:22:56.196 } 00:22:56.196 }' 00:22:56.196 07:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:56.196 07:33:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:56.196 07:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:56.196 07:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:56.196 07:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:56.456 07:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:56.456 07:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:56.456 07:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:56.456 07:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:56.456 07:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:56.456 07:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:56.456 07:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:56.456 07:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:56.456 07:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:22:56.715 [2024-07-12 07:33:30.570366] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:56.715 07:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 9116617c-c1d1-46fe-b28c-c2d161a8beb8 '!=' 9116617c-c1d1-46fe-b28c-c2d161a8beb8 ']' 00:22:56.715 07:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:22:56.715 07:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:56.715 07:33:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:56.715 07:33:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 146336 00:22:56.715 07:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 146336 ']' 00:22:56.715 07:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 146336 00:22:56.715 07:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:22:56.988 07:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:56.988 07:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 146336 00:22:56.988 07:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:56.988 07:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:56.988 07:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 146336' 00:22:56.988 killing process with pid 146336 00:22:56.988 07:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 146336 00:22:56.988 [2024-07-12 07:33:30.627220] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:56.988 07:33:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 146336 00:22:56.988 [2024-07-12 07:33:30.627469] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:56.988 [2024-07-12 07:33:30.627741] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:56.988 [2024-07-12 07:33:30.627787] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:22:56.988 [2024-07-12 07:33:30.711189] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:57.246 07:33:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:22:57.246 00:22:57.246 real 0m16.549s 00:22:57.246 user 0m29.546s 00:22:57.246 sys 0m3.035s 00:22:57.246 07:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:57.246 07:33:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.246 ************************************ 00:22:57.246 END TEST raid_superblock_test 00:22:57.246 ************************************ 00:22:57.505 07:33:31 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:22:57.505 07:33:31 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:22:57.505 07:33:31 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:57.505 07:33:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:57.505 ************************************ 00:22:57.505 START TEST raid_read_error_test 00:22:57.505 ************************************ 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 4 read 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= 
num_base_bdevs )) 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.yZ4qUq1I1s 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=146872 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 146872 /var/tmp/spdk-raid.sock 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 146872 ']' 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:22:57.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:57.505 07:33:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:57.505 [2024-07-12 07:33:31.280620] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:22:57.505 [2024-07-12 07:33:31.281083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146872 ] 00:22:57.764 [2024-07-12 07:33:31.425051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.764 [2024-07-12 07:33:31.513067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.764 [2024-07-12 07:33:31.594152] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:58.699 07:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:58.699 07:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:22:58.699 07:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:58.699 07:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:58.699 BaseBdev1_malloc 00:22:58.699 07:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:58.957 true 00:22:58.957 07:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:59.215 [2024-07-12 07:33:32.928067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:59.215 [2024-07-12 07:33:32.928384] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.215 [2024-07-12 07:33:32.928470] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:22:59.215 [2024-07-12 07:33:32.928698] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.215 [2024-07-12 07:33:32.931867] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.215 [2024-07-12 07:33:32.932050] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:59.215 BaseBdev1 00:22:59.215 07:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:59.215 07:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:59.473 BaseBdev2_malloc 00:22:59.473 07:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:59.473 true 00:22:59.731 07:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:59.731 
[2024-07-12 07:33:33.528151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:59.731 [2024-07-12 07:33:33.528504] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.731 [2024-07-12 07:33:33.528595] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:22:59.731 [2024-07-12 07:33:33.528749] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.731 [2024-07-12 07:33:33.531660] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.731 [2024-07-12 07:33:33.531817] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:59.731 BaseBdev2 00:22:59.731 07:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:59.731 07:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:59.989 BaseBdev3_malloc 00:22:59.989 07:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:00.247 true 00:23:00.247 07:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:00.525 [2024-07-12 07:33:34.241218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:00.526 [2024-07-12 07:33:34.241563] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:00.526 [2024-07-12 07:33:34.241648] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:23:00.526 [2024-07-12 07:33:34.241776] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.526 [2024-07-12 07:33:34.244781] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.526 [2024-07-12 07:33:34.244945] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:00.526 BaseBdev3 00:23:00.526 07:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:00.526 07:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:00.819 BaseBdev4_malloc 00:23:00.819 07:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:23:00.819 true 00:23:00.819 07:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:01.077 [2024-07-12 07:33:34.853296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:01.077 [2024-07-12 07:33:34.853637] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.077 [2024-07-12 07:33:34.853717] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:01.077 [2024-07-12 07:33:34.853847] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.077 
[2024-07-12 07:33:34.856739] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-07-12 07:33:34.856906] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:23:01.077 BaseBdev4
00:23:01.077 07:33:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
00:23:01.336 [2024-07-12 07:33:35.097513] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:23:01.336 [2024-07-12 07:33:35.100289] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:23:01.336 [2024-07-12 07:33:35.100529] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:23:01.336 [2024-07-12 07:33:35.100631] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:23:01.336 [2024-07-12 07:33:35.101025] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080
00:23:01.336 [2024-07-12 07:33:35.101069] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:23:01.336 [2024-07-12 07:33:35.101366] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:23:01.336 [2024-07-12 07:33:35.101929] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080
00:23:01.336 [2024-07-12 07:33:35.102036] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080
00:23:01.336 [2024-07-12 07:33:35.102397] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:01.336 07:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:23:01.336 07:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:23:01.336 07:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:23:01.336 07:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:23:01.336 07:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:23:01.336 07:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:23:01.336 07:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:23:01.336 07:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:23:01.336 07:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:23:01.336 07:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:23:01.336 07:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:01.336 07:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:01.595 07:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:23:01.595 "name": "raid_bdev1",
00:23:01.595 "uuid": "ae8d9efe-fc2a-4bb8-8ed3-e9e4211cc3df",
00:23:01.595 "strip_size_kb": 64,
00:23:01.595 "state": "online",
00:23:01.595 "raid_level": "raid0",
00:23:01.595 "superblock": true,
00:23:01.595 "num_base_bdevs": 4,
00:23:01.595 "num_base_bdevs_discovered": 4,
00:23:01.595 "num_base_bdevs_operational": 4,
00:23:01.595 "base_bdevs_list": [
00:23:01.595 {
00:23:01.595 "name": "BaseBdev1",
00:23:01.595 "uuid": "1d06e37d-578c-5ce2-a584-16510317c5a2",
00:23:01.595 "is_configured": true,
00:23:01.595 "data_offset": 2048,
00:23:01.595 "data_size": 63488
00:23:01.595 },
00:23:01.595 {
00:23:01.595 "name": "BaseBdev2",
00:23:01.595 "uuid": "4acc43fb-1a00-544a-948e-d071648872f8",
00:23:01.595 "is_configured": true,
00:23:01.595 "data_offset": 2048,
00:23:01.595 "data_size": 63488
00:23:01.595 },
00:23:01.595 {
00:23:01.595 "name": "BaseBdev3",
00:23:01.595 "uuid": "3d0e6aa5-c956-58fe-bc8d-2feb3f23cf3d",
00:23:01.595 "is_configured": true,
00:23:01.595 "data_offset": 2048,
00:23:01.595 "data_size": 63488
00:23:01.595 },
00:23:01.595 {
00:23:01.595 "name": "BaseBdev4",
00:23:01.595 "uuid": "6b5da28a-28a2-5ec7-b0e6-10f0ce945f76",
00:23:01.595 "is_configured": true,
00:23:01.595 "data_offset": 2048,
00:23:01.595 "data_size": 63488
00:23:01.595 }
00:23:01.595 ]
00:23:01.595 }'
00:23:01.595 07:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:23:01.595 07:33:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:23:02.162 07:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:23:02.162 07:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1
00:23:02.162 [2024-07-12 07:33:36.015022] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600
00:23:03.100 07:33:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:23:03.360 07:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs
00:23:03.360 07:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]]
00:23:03.360 07:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4
00:23:03.360 07:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:23:03.360 07:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:23:03.360 07:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:23:03.360 07:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:23:03.360 07:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:23:03.360 07:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:23:03.360 07:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:23:03.360 07:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:23:03.360 07:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:23:03.360 07:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:23:03.360 07:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
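With bdevperf already running its workload, the bdev_error_inject_error call above arms the error bdev underneath BaseBdev1 so queued reads start failing, and the second verify_raid_bdev_state pass re-reads the array. Because raid0 carries no redundancy (the [[ raid0 = raid1 ]] test above evaluates false), all four base bdevs are still expected to be discovered. A sketch of that check, reusing the illustrative $rpc alias from the earlier note:

    $rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure   # subsequent reads on the wrapped bdev fail
    discovered=$($rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered')
    [[ "$discovered" == "4" ]]   # raid0: the failing base bdev is not dropped from the array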
00:23:03.360 07:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:03.620 07:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:23:03.620 "name": "raid_bdev1",
00:23:03.620 "uuid": "ae8d9efe-fc2a-4bb8-8ed3-e9e4211cc3df",
00:23:03.620 "strip_size_kb": 64,
00:23:03.620 "state": "online",
00:23:03.620 "raid_level": "raid0",
00:23:03.620 "superblock": true,
00:23:03.620 "num_base_bdevs": 4,
00:23:03.620 "num_base_bdevs_discovered": 4,
00:23:03.620 "num_base_bdevs_operational": 4,
00:23:03.620 "base_bdevs_list": [
00:23:03.620 {
00:23:03.620 "name": "BaseBdev1",
00:23:03.620 "uuid": "1d06e37d-578c-5ce2-a584-16510317c5a2",
00:23:03.620 "is_configured": true,
00:23:03.620 "data_offset": 2048,
00:23:03.620 "data_size": 63488
00:23:03.620 },
00:23:03.620 {
00:23:03.620 "name": "BaseBdev2",
00:23:03.620 "uuid": "4acc43fb-1a00-544a-948e-d071648872f8",
00:23:03.620 "is_configured": true,
00:23:03.620 "data_offset": 2048,
00:23:03.620 "data_size": 63488
00:23:03.620 },
00:23:03.620 {
00:23:03.620 "name": "BaseBdev3",
00:23:03.620 "uuid": "3d0e6aa5-c956-58fe-bc8d-2feb3f23cf3d",
00:23:03.620 "is_configured": true,
00:23:03.620 "data_offset": 2048,
00:23:03.620 "data_size": 63488
00:23:03.620 },
00:23:03.620 {
00:23:03.620 "name": "BaseBdev4",
00:23:03.620 "uuid": "6b5da28a-28a2-5ec7-b0e6-10f0ce945f76",
00:23:03.620 "is_configured": true,
00:23:03.620 "data_offset": 2048,
00:23:03.620 "data_size": 63488
00:23:03.620 }
00:23:03.620 ]
00:23:03.620 }'
00:23:03.620 07:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:23:03.620 07:33:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:23:04.186 07:33:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:23:04.445 [2024-07-12 07:33:38.163812] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:04.445 [2024-07-12 07:33:38.164115] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:23:04.445 [2024-07-12 07:33:38.166905] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:04.445 [2024-07-12 07:33:38.167077] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:04.445 [2024-07-12 07:33:38.167159] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:23:04.445 [2024-07-12 07:33:38.167240] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline
00:23:04.445 0
00:23:04.445 07:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 146872
00:23:04.445 07:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 146872 ']'
00:23:04.445 07:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 146872
00:23:04.445 07:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname
00:23:04.445 07:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:23:04.445 07:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 146872
00:23:04.445 07:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:23:04.445 07:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:23:04.445 07:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 146872'
killing process with pid 146872
00:23:04.445 07:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 146872
00:23:04.445 [2024-07-12 07:33:38.217650] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:23:04.445 07:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 146872
00:23:04.445 [2024-07-12 07:33:38.285429] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:23:05.013 07:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.yZ4qUq1I1s
00:23:05.013 07:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1
00:23:05.013 07:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}'
00:23:05.013 07:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47
00:23:05.013 07:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0
00:23:05.013 07:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in
00:23:05.013 07:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1
00:23:05.013 07:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]]
00:23:05.013
00:23:05.013 real 0m7.517s
00:23:05.013 user 0m11.799s
00:23:05.013 sys 0m1.196s
00:23:05.013 07:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable
00:23:05.013 07:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:23:05.013 ************************************
00:23:05.013 END TEST raid_read_error_test
00:23:05.013 ************************************
00:23:05.013 07:33:38 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write
00:23:05.013 07:33:38 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']'
00:23:05.013 07:33:38 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable
00:23:05.013 07:33:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:23:05.013 ************************************
00:23:05.013 START TEST raid_write_error_test
00:23:05.013 ************************************
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 4 write
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 ))
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ ))
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ ))
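The fail_per_s=0.47 above is derived from the bdevperf log file in the teardown just traced: the pipeline strips the Job header lines, keeps the raid_bdev1 results row, and takes field 6, which this test treats as the per-second failure count; a non-zero value proves the injected read errors actually surfaced. The same pipeline as a standalone sketch (commands taken from the trace; the test passes when the final comparison holds):

    fail_per_s=$(grep -v Job /raidtest/tmp.yZ4qUq1I1s | grep raid_bdev1 | awk '{print $6}')
    [[ "$fail_per_s" != "0.00" ]]   # at least some I/O must have failed against the raid0 array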
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ ))
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ ))
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs ))
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']'
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64'
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.zXDmb5eW9S
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=147070
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 147070 /var/tmp/spdk-raid.sock
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 147070 ']'
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable
00:23:05.013 07:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:23:05.013 [2024-07-12 07:33:38.881652] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
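The write-error pass drives I/O with the standalone bdevperf app whose command line appears in the trace above. Spelled out as a sketch, with the flag meanings that can be read from this log (the gloss on -f and the redirection into the mktemp log file are assumptions; the remaining glosses follow standard bdevperf usage):

    # -r: RPC socket to serve; -T: restrict the run to raid_bdev1; -t 60: run for 60 seconds;
    # -w randrw -M 50: random I/O at a 50/50 read/write mix; -o 128k -q 1: 128 KiB I/Os, queue depth 1;
    # -z: start idle until perform_tests arrives over the socket; -L bdev_raid: enable bdev_raid debug output;
    # -f: presumably keeps bdevperf running across failed I/Os, which an error-injection run needs.
    bdevperf_log=$(mktemp -p /raidtest)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &> "$bdevperf_log" &
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests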
00:23:05.013 [2024-07-12 07:33:38.882085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147070 ] 00:23:05.271 [2024-07-12 07:33:39.032612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.271 [2024-07-12 07:33:39.129683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.529 [2024-07-12 07:33:39.219815] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:06.094 07:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:06.094 07:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:23:06.094 07:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:06.094 07:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:06.352 BaseBdev1_malloc 00:23:06.353 07:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:06.610 true 00:23:06.610 07:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:06.869 [2024-07-12 07:33:40.523710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:06.869 [2024-07-12 07:33:40.524097] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:06.869 [2024-07-12 07:33:40.524184] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:23:06.869 [2024-07-12 07:33:40.524353] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:06.869 [2024-07-12 07:33:40.527437] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:06.869 [2024-07-12 07:33:40.527619] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:06.869 BaseBdev1 00:23:06.869 07:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:06.869 07:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:07.127 BaseBdev2_malloc 00:23:07.127 07:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:07.386 true 00:23:07.386 07:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:07.386 [2024-07-12 07:33:41.228263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:07.386 [2024-07-12 07:33:41.228613] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.386 [2024-07-12 07:33:41.228698] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:23:07.386 [2024-07-12 07:33:41.228830] 
vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.386 [2024-07-12 07:33:41.231704] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.386 [2024-07-12 07:33:41.231890] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:07.386 BaseBdev2 00:23:07.386 07:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:07.386 07:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:07.645 BaseBdev3_malloc 00:23:07.645 07:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:07.904 true 00:23:07.904 07:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:08.164 [2024-07-12 07:33:41.954126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:08.164 [2024-07-12 07:33:41.954519] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.164 [2024-07-12 07:33:41.954603] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:23:08.164 [2024-07-12 07:33:41.954734] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.164 [2024-07-12 07:33:41.957680] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.164 [2024-07-12 07:33:41.957872] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:08.164 BaseBdev3 00:23:08.164 07:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:08.164 07:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:08.423 BaseBdev4_malloc 00:23:08.423 07:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:23:08.696 true 00:23:08.696 07:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:08.956 [2024-07-12 07:33:42.586046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:08.956 [2024-07-12 07:33:42.586411] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.956 [2024-07-12 07:33:42.586503] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:08.956 [2024-07-12 07:33:42.586680] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.956 [2024-07-12 07:33:42.589637] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.956 [2024-07-12 07:33:42.589834] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:08.956 BaseBdev4 00:23:08.956 07:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
00:23:08.956 [2024-07-12 07:33:42.802342] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
[2024-07-12 07:33:42.805104] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
[2024-07-12 07:33:42.805317] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
[2024-07-12 07:33:42.805423] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
[2024-07-12 07:33:42.805786] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080
[2024-07-12 07:33:42.805886] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
[2024-07-12 07:33:42.806109] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
[2024-07-12 07:33:42.806636] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080
[2024-07-12 07:33:42.806746] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080
[2024-07-12 07:33:42.807105] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
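One RPC assembles the four passthru bdevs into the array; the command above mirrors the bdev_raid.sh@819 invocation from the read-error pass. As a standalone sketch, using the illustrative $rpc alias from the earlier note (the flag glosses are consistent with the JSON dump that follows: strip_size_kb 64, raid_level raid0, superblock true):

    # -z 64: strip size in KiB; -r raid0: raid level; -b: ordered base bdev list;
    # -n raid_bdev1: name of the resulting raid bdev; -s: put a superblock on the base bdevs.
    $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s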
00:23:09.524 "name": "BaseBdev2", 00:23:09.524 "uuid": "db0aa818-8cf8-5ad0-b4ea-769e7ed50b1f", 00:23:09.524 "is_configured": true, 00:23:09.524 "data_offset": 2048, 00:23:09.524 "data_size": 63488 00:23:09.524 }, 00:23:09.524 { 00:23:09.524 "name": "BaseBdev3", 00:23:09.524 "uuid": "0f623e46-c8ac-5d4e-85b9-1d1c46757e55", 00:23:09.524 "is_configured": true, 00:23:09.524 "data_offset": 2048, 00:23:09.524 "data_size": 63488 00:23:09.524 }, 00:23:09.524 { 00:23:09.524 "name": "BaseBdev4", 00:23:09.524 "uuid": "713f09e0-19f1-580d-aee4-c31730ec0627", 00:23:09.524 "is_configured": true, 00:23:09.524 "data_offset": 2048, 00:23:09.524 "data_size": 63488 00:23:09.524 } 00:23:09.524 ] 00:23:09.524 }' 00:23:09.524 07:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:09.524 07:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.092 07:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:23:10.092 07:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:10.092 [2024-07-12 07:33:43.823787] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:23:11.027 07:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:23:11.286 07:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:23:11.286 07:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:23:11.286 07:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:23:11.286 07:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:11.286 07:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:11.286 07:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:11.287 07:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:11.287 07:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:11.287 07:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:11.287 07:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:11.287 07:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:11.287 07:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:11.287 07:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:11.287 07:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.287 07:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.546 07:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:11.546 "name": "raid_bdev1", 00:23:11.546 "uuid": "f9467f29-3b9a-4952-ac9c-15e5d9ee6290", 00:23:11.546 "strip_size_kb": 64, 00:23:11.546 "state": "online", 00:23:11.547 
"raid_level": "raid0", 00:23:11.547 "superblock": true, 00:23:11.547 "num_base_bdevs": 4, 00:23:11.547 "num_base_bdevs_discovered": 4, 00:23:11.547 "num_base_bdevs_operational": 4, 00:23:11.547 "base_bdevs_list": [ 00:23:11.547 { 00:23:11.547 "name": "BaseBdev1", 00:23:11.547 "uuid": "845805f0-518e-5a54-aee7-86e46c7d2d87", 00:23:11.547 "is_configured": true, 00:23:11.547 "data_offset": 2048, 00:23:11.547 "data_size": 63488 00:23:11.547 }, 00:23:11.547 { 00:23:11.547 "name": "BaseBdev2", 00:23:11.547 "uuid": "db0aa818-8cf8-5ad0-b4ea-769e7ed50b1f", 00:23:11.547 "is_configured": true, 00:23:11.547 "data_offset": 2048, 00:23:11.547 "data_size": 63488 00:23:11.547 }, 00:23:11.547 { 00:23:11.547 "name": "BaseBdev3", 00:23:11.547 "uuid": "0f623e46-c8ac-5d4e-85b9-1d1c46757e55", 00:23:11.547 "is_configured": true, 00:23:11.547 "data_offset": 2048, 00:23:11.547 "data_size": 63488 00:23:11.547 }, 00:23:11.547 { 00:23:11.547 "name": "BaseBdev4", 00:23:11.547 "uuid": "713f09e0-19f1-580d-aee4-c31730ec0627", 00:23:11.547 "is_configured": true, 00:23:11.547 "data_offset": 2048, 00:23:11.547 "data_size": 63488 00:23:11.547 } 00:23:11.547 ] 00:23:11.547 }' 00:23:11.547 07:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:11.547 07:33:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.118 07:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:12.401 [2024-07-12 07:33:46.041804] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:12.401 [2024-07-12 07:33:46.042098] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:12.401 [2024-07-12 07:33:46.044822] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:12.401 [2024-07-12 07:33:46.045003] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:12.401 [2024-07-12 07:33:46.045087] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:12.401 [2024-07-12 07:33:46.045161] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:23:12.401 0 00:23:12.401 07:33:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 147070 00:23:12.401 07:33:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 147070 ']' 00:23:12.401 07:33:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 147070 00:23:12.401 07:33:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:23:12.401 07:33:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:12.401 07:33:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 147070 00:23:12.401 07:33:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:12.401 07:33:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:12.401 07:33:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 147070' 00:23:12.401 killing process with pid 147070 00:23:12.401 07:33:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 147070 00:23:12.401 07:33:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 147070 00:23:12.401 [2024-07-12 07:33:46.100594] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:12.401 [2024-07-12 07:33:46.168298] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:12.970 07:33:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.zXDmb5eW9S 00:23:12.970 07:33:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:23:12.970 07:33:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:23:12.970 ************************************ 00:23:12.970 END TEST raid_write_error_test 00:23:12.970 ************************************ 00:23:12.970 07:33:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:23:12.970 07:33:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:23:12.970 07:33:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:12.971 07:33:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:12.971 07:33:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:23:12.971 00:23:12.971 real 0m7.806s 00:23:12.971 user 0m12.227s 00:23:12.971 sys 0m1.359s 00:23:12.971 07:33:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:12.971 07:33:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.971 07:33:46 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:23:12.971 07:33:46 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:23:12.971 07:33:46 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:23:12.971 07:33:46 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:12.971 07:33:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:12.971 ************************************ 00:23:12.971 START TEST raid_state_function_test 00:23:12.971 ************************************ 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 4 false 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=147279 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 147279' 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:12.971 Process raid pid: 147279 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 147279 /var/tmp/spdk-raid.sock 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 147279 ']' 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:12.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:12.971 07:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.971 [2024-07-12 07:33:46.750180] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
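The raid_state_function_test that starts here runs against a bare bdev_svc app and, unlike the error tests, creates its array before any base bdev exists; the bdev_open_ext notices below show each lookup failing, so the raid stays in the "configuring" state until all four base bdevs are registered. A sketch of that state check, reusing the illustrative $rpc alias (note the concat level and the absence of -s, matching the superblock=false branch traced above):

    # Create the array while BaseBdev1..4 do not yet exist, then read its state back;
    # expected output is "configuring" (see the Existed_Raid dump that follows).
    $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    state=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state')
    [[ "$state" == "configuring" ]]   # stays configuring until every base bdev is registered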
00:23:12.971 [2024-07-12 07:33:46.750672] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.230 [2024-07-12 07:33:46.897617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.230 [2024-07-12 07:33:46.983293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.230 [2024-07-12 07:33:47.071040] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:14.168 07:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:14.168 07:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:23:14.168 07:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:14.168 [2024-07-12 07:33:48.012892] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:14.168 [2024-07-12 07:33:48.013175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:14.168 [2024-07-12 07:33:48.013303] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:14.168 [2024-07-12 07:33:48.013417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:14.168 [2024-07-12 07:33:48.013501] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:14.168 [2024-07-12 07:33:48.013583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:14.168 [2024-07-12 07:33:48.013659] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:14.168 [2024-07-12 07:33:48.013711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:14.168 07:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:14.168 07:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:14.168 07:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:14.168 07:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:14.168 07:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:14.168 07:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:14.168 07:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:14.168 07:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:14.168 07:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:14.168 07:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:14.168 07:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.168 07:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:23:14.426 07:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:14.426 "name": "Existed_Raid", 00:23:14.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.426 "strip_size_kb": 64, 00:23:14.426 "state": "configuring", 00:23:14.426 "raid_level": "concat", 00:23:14.426 "superblock": false, 00:23:14.426 "num_base_bdevs": 4, 00:23:14.426 "num_base_bdevs_discovered": 0, 00:23:14.426 "num_base_bdevs_operational": 4, 00:23:14.426 "base_bdevs_list": [ 00:23:14.426 { 00:23:14.426 "name": "BaseBdev1", 00:23:14.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.426 "is_configured": false, 00:23:14.426 "data_offset": 0, 00:23:14.426 "data_size": 0 00:23:14.426 }, 00:23:14.426 { 00:23:14.426 "name": "BaseBdev2", 00:23:14.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.426 "is_configured": false, 00:23:14.426 "data_offset": 0, 00:23:14.426 "data_size": 0 00:23:14.426 }, 00:23:14.426 { 00:23:14.426 "name": "BaseBdev3", 00:23:14.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.426 "is_configured": false, 00:23:14.426 "data_offset": 0, 00:23:14.426 "data_size": 0 00:23:14.426 }, 00:23:14.426 { 00:23:14.426 "name": "BaseBdev4", 00:23:14.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.426 "is_configured": false, 00:23:14.426 "data_offset": 0, 00:23:14.426 "data_size": 0 00:23:14.426 } 00:23:14.426 ] 00:23:14.426 }' 00:23:14.426 07:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:14.426 07:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.363 07:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:15.363 [2024-07-12 07:33:49.148913] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:15.363 [2024-07-12 07:33:49.149173] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:23:15.363 07:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:15.622 [2024-07-12 07:33:49.412968] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:15.622 [2024-07-12 07:33:49.413314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:15.622 [2024-07-12 07:33:49.413446] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:15.622 [2024-07-12 07:33:49.413511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:15.622 [2024-07-12 07:33:49.413596] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:15.622 [2024-07-12 07:33:49.413644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:15.622 [2024-07-12 07:33:49.413671] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:15.622 [2024-07-12 07:33:49.413761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:15.622 07:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:15.881 [2024-07-12 07:33:49.621049] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:15.881 BaseBdev1 00:23:15.881 07:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:15.881 07:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:23:15.881 07:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:15.881 07:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:15.881 07:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:15.881 07:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:15.881 07:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:16.140 07:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:16.399 [ 00:23:16.399 { 00:23:16.399 "name": "BaseBdev1", 00:23:16.399 "aliases": [ 00:23:16.399 "c100a9db-f8f4-4658-a554-b3b7de049144" 00:23:16.399 ], 00:23:16.399 "product_name": "Malloc disk", 00:23:16.399 "block_size": 512, 00:23:16.399 "num_blocks": 65536, 00:23:16.399 "uuid": "c100a9db-f8f4-4658-a554-b3b7de049144", 00:23:16.399 "assigned_rate_limits": { 00:23:16.399 "rw_ios_per_sec": 0, 00:23:16.399 "rw_mbytes_per_sec": 0, 00:23:16.399 "r_mbytes_per_sec": 0, 00:23:16.399 "w_mbytes_per_sec": 0 00:23:16.399 }, 00:23:16.399 "claimed": true, 00:23:16.399 "claim_type": "exclusive_write", 00:23:16.399 "zoned": false, 00:23:16.399 "supported_io_types": { 00:23:16.399 "read": true, 00:23:16.399 "write": true, 00:23:16.399 "unmap": true, 00:23:16.399 "write_zeroes": true, 00:23:16.399 "flush": true, 00:23:16.399 "reset": true, 00:23:16.399 "compare": false, 00:23:16.399 "compare_and_write": false, 00:23:16.399 "abort": true, 00:23:16.399 "nvme_admin": false, 00:23:16.399 "nvme_io": false 00:23:16.399 }, 00:23:16.399 "memory_domains": [ 00:23:16.399 { 00:23:16.399 "dma_device_id": "system", 00:23:16.399 "dma_device_type": 1 00:23:16.399 }, 00:23:16.399 { 00:23:16.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.399 "dma_device_type": 2 00:23:16.399 } 00:23:16.399 ], 00:23:16.399 "driver_specific": {} 00:23:16.399 } 00:23:16.399 ] 00:23:16.399 07:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:16.399 07:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:16.399 07:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:16.399 07:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:16.399 07:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:16.399 07:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:16.399 07:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:16.399 07:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:23:16.399 07:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:16.399 07:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:16.399 07:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:16.399 07:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.399 07:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:16.658 07:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:16.658 "name": "Existed_Raid", 00:23:16.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.658 "strip_size_kb": 64, 00:23:16.658 "state": "configuring", 00:23:16.658 "raid_level": "concat", 00:23:16.658 "superblock": false, 00:23:16.658 "num_base_bdevs": 4, 00:23:16.658 "num_base_bdevs_discovered": 1, 00:23:16.658 "num_base_bdevs_operational": 4, 00:23:16.658 "base_bdevs_list": [ 00:23:16.658 { 00:23:16.658 "name": "BaseBdev1", 00:23:16.658 "uuid": "c100a9db-f8f4-4658-a554-b3b7de049144", 00:23:16.658 "is_configured": true, 00:23:16.658 "data_offset": 0, 00:23:16.658 "data_size": 65536 00:23:16.658 }, 00:23:16.658 { 00:23:16.658 "name": "BaseBdev2", 00:23:16.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.658 "is_configured": false, 00:23:16.658 "data_offset": 0, 00:23:16.658 "data_size": 0 00:23:16.658 }, 00:23:16.658 { 00:23:16.658 "name": "BaseBdev3", 00:23:16.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.658 "is_configured": false, 00:23:16.658 "data_offset": 0, 00:23:16.658 "data_size": 0 00:23:16.658 }, 00:23:16.658 { 00:23:16.658 "name": "BaseBdev4", 00:23:16.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.658 "is_configured": false, 00:23:16.658 "data_offset": 0, 00:23:16.658 "data_size": 0 00:23:16.658 } 00:23:16.658 ] 00:23:16.658 }' 00:23:16.658 07:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:16.658 07:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.226 07:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:17.226 [2024-07-12 07:33:51.077429] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:17.226 [2024-07-12 07:33:51.077754] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:23:17.226 07:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:17.485 [2024-07-12 07:33:51.325650] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:17.485 [2024-07-12 07:33:51.328417] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:17.485 [2024-07-12 07:33:51.328622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:17.485 [2024-07-12 07:33:51.328712] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:17.485 [2024-07-12 
07:33:51.328773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:17.485 [2024-07-12 07:33:51.328843] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:17.485 [2024-07-12 07:33:51.328978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:17.485 07:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:17.485 07:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:17.485 07:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:17.485 07:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:17.485 07:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:17.485 07:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:17.485 07:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:17.485 07:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:17.485 07:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:17.485 07:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:17.485 07:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:17.485 07:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:17.485 07:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.485 07:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:17.745 07:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:17.745 "name": "Existed_Raid", 00:23:17.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.745 "strip_size_kb": 64, 00:23:17.745 "state": "configuring", 00:23:17.745 "raid_level": "concat", 00:23:17.745 "superblock": false, 00:23:17.745 "num_base_bdevs": 4, 00:23:17.745 "num_base_bdevs_discovered": 1, 00:23:17.745 "num_base_bdevs_operational": 4, 00:23:17.745 "base_bdevs_list": [ 00:23:17.745 { 00:23:17.745 "name": "BaseBdev1", 00:23:17.745 "uuid": "c100a9db-f8f4-4658-a554-b3b7de049144", 00:23:17.745 "is_configured": true, 00:23:17.745 "data_offset": 0, 00:23:17.745 "data_size": 65536 00:23:17.745 }, 00:23:17.745 { 00:23:17.745 "name": "BaseBdev2", 00:23:17.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.745 "is_configured": false, 00:23:17.745 "data_offset": 0, 00:23:17.745 "data_size": 0 00:23:17.745 }, 00:23:17.745 { 00:23:17.745 "name": "BaseBdev3", 00:23:17.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.745 "is_configured": false, 00:23:17.745 "data_offset": 0, 00:23:17.745 "data_size": 0 00:23:17.745 }, 00:23:17.745 { 00:23:17.745 "name": "BaseBdev4", 00:23:17.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.745 "is_configured": false, 00:23:17.745 "data_offset": 0, 00:23:17.745 "data_size": 0 00:23:17.745 } 00:23:17.745 ] 00:23:17.745 }' 00:23:17.745 07:33:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:17.745 07:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.314 07:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:18.572 [2024-07-12 07:33:52.451586] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:18.572 BaseBdev2 00:23:18.831 07:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:18.831 07:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:23:18.831 07:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:18.831 07:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:18.831 07:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:18.831 07:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:18.831 07:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:18.832 07:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:19.091 [ 00:23:19.091 { 00:23:19.091 "name": "BaseBdev2", 00:23:19.091 "aliases": [ 00:23:19.091 "c7bd2839-5797-4b56-a6e3-424cbd9077cc" 00:23:19.091 ], 00:23:19.091 "product_name": "Malloc disk", 00:23:19.091 "block_size": 512, 00:23:19.091 "num_blocks": 65536, 00:23:19.091 "uuid": "c7bd2839-5797-4b56-a6e3-424cbd9077cc", 00:23:19.091 "assigned_rate_limits": { 00:23:19.091 "rw_ios_per_sec": 0, 00:23:19.091 "rw_mbytes_per_sec": 0, 00:23:19.091 "r_mbytes_per_sec": 0, 00:23:19.091 "w_mbytes_per_sec": 0 00:23:19.091 }, 00:23:19.091 "claimed": true, 00:23:19.091 "claim_type": "exclusive_write", 00:23:19.091 "zoned": false, 00:23:19.091 "supported_io_types": { 00:23:19.091 "read": true, 00:23:19.091 "write": true, 00:23:19.091 "unmap": true, 00:23:19.091 "write_zeroes": true, 00:23:19.091 "flush": true, 00:23:19.091 "reset": true, 00:23:19.091 "compare": false, 00:23:19.091 "compare_and_write": false, 00:23:19.091 "abort": true, 00:23:19.091 "nvme_admin": false, 00:23:19.091 "nvme_io": false 00:23:19.091 }, 00:23:19.091 "memory_domains": [ 00:23:19.091 { 00:23:19.091 "dma_device_id": "system", 00:23:19.091 "dma_device_type": 1 00:23:19.091 }, 00:23:19.091 { 00:23:19.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.091 "dma_device_type": 2 00:23:19.091 } 00:23:19.091 ], 00:23:19.091 "driver_specific": {} 00:23:19.091 } 00:23:19.091 ] 00:23:19.091 07:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:19.091 07:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:19.091 07:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:19.091 07:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:19.091 07:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:19.091 07:33:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:19.091 07:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:19.091 07:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:19.091 07:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:19.091 07:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:19.091 07:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:19.091 07:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:19.091 07:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:19.091 07:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.091 07:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:19.358 07:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:19.358 "name": "Existed_Raid", 00:23:19.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.358 "strip_size_kb": 64, 00:23:19.358 "state": "configuring", 00:23:19.358 "raid_level": "concat", 00:23:19.358 "superblock": false, 00:23:19.358 "num_base_bdevs": 4, 00:23:19.358 "num_base_bdevs_discovered": 2, 00:23:19.358 "num_base_bdevs_operational": 4, 00:23:19.358 "base_bdevs_list": [ 00:23:19.358 { 00:23:19.358 "name": "BaseBdev1", 00:23:19.358 "uuid": "c100a9db-f8f4-4658-a554-b3b7de049144", 00:23:19.358 "is_configured": true, 00:23:19.358 "data_offset": 0, 00:23:19.358 "data_size": 65536 00:23:19.358 }, 00:23:19.358 { 00:23:19.358 "name": "BaseBdev2", 00:23:19.358 "uuid": "c7bd2839-5797-4b56-a6e3-424cbd9077cc", 00:23:19.358 "is_configured": true, 00:23:19.358 "data_offset": 0, 00:23:19.358 "data_size": 65536 00:23:19.358 }, 00:23:19.358 { 00:23:19.358 "name": "BaseBdev3", 00:23:19.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.358 "is_configured": false, 00:23:19.358 "data_offset": 0, 00:23:19.358 "data_size": 0 00:23:19.358 }, 00:23:19.358 { 00:23:19.358 "name": "BaseBdev4", 00:23:19.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.358 "is_configured": false, 00:23:19.358 "data_offset": 0, 00:23:19.358 "data_size": 0 00:23:19.358 } 00:23:19.358 ] 00:23:19.358 }' 00:23:19.358 07:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:19.358 07:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.924 07:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:20.181 [2024-07-12 07:33:54.061529] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:20.181 BaseBdev3 00:23:20.439 07:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:20.439 07:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:23:20.439 07:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:20.439 07:33:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@897 -- # local i 00:23:20.439 07:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:20.439 07:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:20.439 07:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:20.698 07:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:20.698 [ 00:23:20.698 { 00:23:20.698 "name": "BaseBdev3", 00:23:20.698 "aliases": [ 00:23:20.698 "4a9a433a-d210-4ee9-ad9d-320d41839503" 00:23:20.698 ], 00:23:20.698 "product_name": "Malloc disk", 00:23:20.698 "block_size": 512, 00:23:20.698 "num_blocks": 65536, 00:23:20.698 "uuid": "4a9a433a-d210-4ee9-ad9d-320d41839503", 00:23:20.698 "assigned_rate_limits": { 00:23:20.698 "rw_ios_per_sec": 0, 00:23:20.698 "rw_mbytes_per_sec": 0, 00:23:20.698 "r_mbytes_per_sec": 0, 00:23:20.698 "w_mbytes_per_sec": 0 00:23:20.698 }, 00:23:20.698 "claimed": true, 00:23:20.698 "claim_type": "exclusive_write", 00:23:20.698 "zoned": false, 00:23:20.698 "supported_io_types": { 00:23:20.698 "read": true, 00:23:20.698 "write": true, 00:23:20.698 "unmap": true, 00:23:20.698 "write_zeroes": true, 00:23:20.698 "flush": true, 00:23:20.698 "reset": true, 00:23:20.698 "compare": false, 00:23:20.698 "compare_and_write": false, 00:23:20.698 "abort": true, 00:23:20.698 "nvme_admin": false, 00:23:20.698 "nvme_io": false 00:23:20.698 }, 00:23:20.698 "memory_domains": [ 00:23:20.698 { 00:23:20.698 "dma_device_id": "system", 00:23:20.698 "dma_device_type": 1 00:23:20.698 }, 00:23:20.698 { 00:23:20.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.698 "dma_device_type": 2 00:23:20.698 } 00:23:20.698 ], 00:23:20.698 "driver_specific": {} 00:23:20.698 } 00:23:20.698 ] 00:23:20.698 07:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:20.698 07:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:20.698 07:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:20.698 07:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:20.698 07:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:20.698 07:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:20.698 07:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:20.698 07:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:20.698 07:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:20.698 07:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:20.698 07:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:20.698 07:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:20.698 07:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:20.698 07:33:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.698 07:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:20.958 07:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:20.958 "name": "Existed_Raid", 00:23:20.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.958 "strip_size_kb": 64, 00:23:20.958 "state": "configuring", 00:23:20.958 "raid_level": "concat", 00:23:20.958 "superblock": false, 00:23:20.958 "num_base_bdevs": 4, 00:23:20.958 "num_base_bdevs_discovered": 3, 00:23:20.958 "num_base_bdevs_operational": 4, 00:23:20.958 "base_bdevs_list": [ 00:23:20.958 { 00:23:20.958 "name": "BaseBdev1", 00:23:20.958 "uuid": "c100a9db-f8f4-4658-a554-b3b7de049144", 00:23:20.958 "is_configured": true, 00:23:20.958 "data_offset": 0, 00:23:20.958 "data_size": 65536 00:23:20.958 }, 00:23:20.958 { 00:23:20.958 "name": "BaseBdev2", 00:23:20.958 "uuid": "c7bd2839-5797-4b56-a6e3-424cbd9077cc", 00:23:20.958 "is_configured": true, 00:23:20.958 "data_offset": 0, 00:23:20.958 "data_size": 65536 00:23:20.958 }, 00:23:20.958 { 00:23:20.958 "name": "BaseBdev3", 00:23:20.958 "uuid": "4a9a433a-d210-4ee9-ad9d-320d41839503", 00:23:20.958 "is_configured": true, 00:23:20.958 "data_offset": 0, 00:23:20.958 "data_size": 65536 00:23:20.958 }, 00:23:20.958 { 00:23:20.958 "name": "BaseBdev4", 00:23:20.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.958 "is_configured": false, 00:23:20.958 "data_offset": 0, 00:23:20.958 "data_size": 0 00:23:20.958 } 00:23:20.958 ] 00:23:20.958 }' 00:23:20.958 07:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:20.958 07:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.526 07:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:21.784 [2024-07-12 07:33:55.655472] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:21.784 [2024-07-12 07:33:55.655781] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:23:21.784 [2024-07-12 07:33:55.655822] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:23:21.784 [2024-07-12 07:33:55.656083] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:23:21.784 [2024-07-12 07:33:55.656592] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:23:21.784 [2024-07-12 07:33:55.656703] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:23:21.784 [2024-07-12 07:33:55.657052] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:21.784 BaseBdev4 00:23:22.043 07:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:23:22.043 07:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:23:22.043 07:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:22.043 07:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:22.043 07:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- 
# [[ -z '' ]] 00:23:22.043 07:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:22.043 07:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:22.302 07:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:22.302 [ 00:23:22.302 { 00:23:22.302 "name": "BaseBdev4", 00:23:22.302 "aliases": [ 00:23:22.302 "a81b28c4-b657-4f44-9205-d8cd50f57ea0" 00:23:22.302 ], 00:23:22.302 "product_name": "Malloc disk", 00:23:22.302 "block_size": 512, 00:23:22.302 "num_blocks": 65536, 00:23:22.302 "uuid": "a81b28c4-b657-4f44-9205-d8cd50f57ea0", 00:23:22.302 "assigned_rate_limits": { 00:23:22.302 "rw_ios_per_sec": 0, 00:23:22.302 "rw_mbytes_per_sec": 0, 00:23:22.302 "r_mbytes_per_sec": 0, 00:23:22.302 "w_mbytes_per_sec": 0 00:23:22.302 }, 00:23:22.302 "claimed": true, 00:23:22.302 "claim_type": "exclusive_write", 00:23:22.302 "zoned": false, 00:23:22.302 "supported_io_types": { 00:23:22.302 "read": true, 00:23:22.302 "write": true, 00:23:22.302 "unmap": true, 00:23:22.302 "write_zeroes": true, 00:23:22.302 "flush": true, 00:23:22.302 "reset": true, 00:23:22.302 "compare": false, 00:23:22.302 "compare_and_write": false, 00:23:22.302 "abort": true, 00:23:22.302 "nvme_admin": false, 00:23:22.302 "nvme_io": false 00:23:22.302 }, 00:23:22.302 "memory_domains": [ 00:23:22.302 { 00:23:22.302 "dma_device_id": "system", 00:23:22.302 "dma_device_type": 1 00:23:22.302 }, 00:23:22.302 { 00:23:22.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:22.302 "dma_device_type": 2 00:23:22.302 } 00:23:22.302 ], 00:23:22.302 "driver_specific": {} 00:23:22.302 } 00:23:22.302 ] 00:23:22.302 07:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:22.302 07:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:22.302 07:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:22.302 07:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:23:22.302 07:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:22.302 07:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:22.302 07:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:22.302 07:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:22.302 07:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:22.302 07:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:22.302 07:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:22.302 07:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:22.302 07:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:22.302 07:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.302 07:33:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.561 07:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:22.561 "name": "Existed_Raid", 00:23:22.561 "uuid": "7053a180-c709-41a9-b0b0-cef00c69b5d2", 00:23:22.561 "strip_size_kb": 64, 00:23:22.561 "state": "online", 00:23:22.561 "raid_level": "concat", 00:23:22.561 "superblock": false, 00:23:22.561 "num_base_bdevs": 4, 00:23:22.561 "num_base_bdevs_discovered": 4, 00:23:22.561 "num_base_bdevs_operational": 4, 00:23:22.561 "base_bdevs_list": [ 00:23:22.561 { 00:23:22.561 "name": "BaseBdev1", 00:23:22.561 "uuid": "c100a9db-f8f4-4658-a554-b3b7de049144", 00:23:22.561 "is_configured": true, 00:23:22.561 "data_offset": 0, 00:23:22.561 "data_size": 65536 00:23:22.561 }, 00:23:22.561 { 00:23:22.561 "name": "BaseBdev2", 00:23:22.561 "uuid": "c7bd2839-5797-4b56-a6e3-424cbd9077cc", 00:23:22.561 "is_configured": true, 00:23:22.561 "data_offset": 0, 00:23:22.561 "data_size": 65536 00:23:22.561 }, 00:23:22.561 { 00:23:22.561 "name": "BaseBdev3", 00:23:22.561 "uuid": "4a9a433a-d210-4ee9-ad9d-320d41839503", 00:23:22.561 "is_configured": true, 00:23:22.561 "data_offset": 0, 00:23:22.561 "data_size": 65536 00:23:22.561 }, 00:23:22.561 { 00:23:22.561 "name": "BaseBdev4", 00:23:22.561 "uuid": "a81b28c4-b657-4f44-9205-d8cd50f57ea0", 00:23:22.561 "is_configured": true, 00:23:22.561 "data_offset": 0, 00:23:22.561 "data_size": 65536 00:23:22.561 } 00:23:22.561 ] 00:23:22.561 }' 00:23:22.561 07:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:22.561 07:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.498 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:23.498 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:23.498 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:23.498 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:23.498 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:23.498 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:23.498 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:23.498 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:23.498 [2024-07-12 07:33:57.216380] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:23.498 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:23.498 "name": "Existed_Raid", 00:23:23.498 "aliases": [ 00:23:23.498 "7053a180-c709-41a9-b0b0-cef00c69b5d2" 00:23:23.498 ], 00:23:23.498 "product_name": "Raid Volume", 00:23:23.498 "block_size": 512, 00:23:23.498 "num_blocks": 262144, 00:23:23.498 "uuid": "7053a180-c709-41a9-b0b0-cef00c69b5d2", 00:23:23.498 "assigned_rate_limits": { 00:23:23.498 "rw_ios_per_sec": 0, 00:23:23.498 "rw_mbytes_per_sec": 0, 00:23:23.498 "r_mbytes_per_sec": 0, 00:23:23.498 "w_mbytes_per_sec": 0 00:23:23.498 }, 00:23:23.498 "claimed": false, 00:23:23.498 "zoned": false, 00:23:23.498 "supported_io_types": { 00:23:23.498 "read": true, 
00:23:23.498 "write": true, 00:23:23.498 "unmap": true, 00:23:23.498 "write_zeroes": true, 00:23:23.498 "flush": true, 00:23:23.498 "reset": true, 00:23:23.498 "compare": false, 00:23:23.498 "compare_and_write": false, 00:23:23.498 "abort": false, 00:23:23.498 "nvme_admin": false, 00:23:23.498 "nvme_io": false 00:23:23.498 }, 00:23:23.498 "memory_domains": [ 00:23:23.498 { 00:23:23.498 "dma_device_id": "system", 00:23:23.498 "dma_device_type": 1 00:23:23.498 }, 00:23:23.498 { 00:23:23.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:23.498 "dma_device_type": 2 00:23:23.498 }, 00:23:23.498 { 00:23:23.498 "dma_device_id": "system", 00:23:23.498 "dma_device_type": 1 00:23:23.498 }, 00:23:23.498 { 00:23:23.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:23.498 "dma_device_type": 2 00:23:23.498 }, 00:23:23.498 { 00:23:23.498 "dma_device_id": "system", 00:23:23.498 "dma_device_type": 1 00:23:23.498 }, 00:23:23.498 { 00:23:23.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:23.498 "dma_device_type": 2 00:23:23.498 }, 00:23:23.498 { 00:23:23.498 "dma_device_id": "system", 00:23:23.498 "dma_device_type": 1 00:23:23.498 }, 00:23:23.498 { 00:23:23.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:23.498 "dma_device_type": 2 00:23:23.498 } 00:23:23.498 ], 00:23:23.498 "driver_specific": { 00:23:23.498 "raid": { 00:23:23.498 "uuid": "7053a180-c709-41a9-b0b0-cef00c69b5d2", 00:23:23.498 "strip_size_kb": 64, 00:23:23.498 "state": "online", 00:23:23.498 "raid_level": "concat", 00:23:23.498 "superblock": false, 00:23:23.498 "num_base_bdevs": 4, 00:23:23.498 "num_base_bdevs_discovered": 4, 00:23:23.498 "num_base_bdevs_operational": 4, 00:23:23.498 "base_bdevs_list": [ 00:23:23.499 { 00:23:23.499 "name": "BaseBdev1", 00:23:23.499 "uuid": "c100a9db-f8f4-4658-a554-b3b7de049144", 00:23:23.499 "is_configured": true, 00:23:23.499 "data_offset": 0, 00:23:23.499 "data_size": 65536 00:23:23.499 }, 00:23:23.499 { 00:23:23.499 "name": "BaseBdev2", 00:23:23.499 "uuid": "c7bd2839-5797-4b56-a6e3-424cbd9077cc", 00:23:23.499 "is_configured": true, 00:23:23.499 "data_offset": 0, 00:23:23.499 "data_size": 65536 00:23:23.499 }, 00:23:23.499 { 00:23:23.499 "name": "BaseBdev3", 00:23:23.499 "uuid": "4a9a433a-d210-4ee9-ad9d-320d41839503", 00:23:23.499 "is_configured": true, 00:23:23.499 "data_offset": 0, 00:23:23.499 "data_size": 65536 00:23:23.499 }, 00:23:23.499 { 00:23:23.499 "name": "BaseBdev4", 00:23:23.499 "uuid": "a81b28c4-b657-4f44-9205-d8cd50f57ea0", 00:23:23.499 "is_configured": true, 00:23:23.499 "data_offset": 0, 00:23:23.499 "data_size": 65536 00:23:23.499 } 00:23:23.499 ] 00:23:23.499 } 00:23:23.499 } 00:23:23.499 }' 00:23:23.499 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:23.499 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:23.499 BaseBdev2 00:23:23.499 BaseBdev3 00:23:23.499 BaseBdev4' 00:23:23.499 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:23.499 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:23.499 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:23.763 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:23.763 "name": "BaseBdev1", 
00:23:23.763 "aliases": [ 00:23:23.763 "c100a9db-f8f4-4658-a554-b3b7de049144" 00:23:23.763 ], 00:23:23.763 "product_name": "Malloc disk", 00:23:23.763 "block_size": 512, 00:23:23.763 "num_blocks": 65536, 00:23:23.763 "uuid": "c100a9db-f8f4-4658-a554-b3b7de049144", 00:23:23.763 "assigned_rate_limits": { 00:23:23.763 "rw_ios_per_sec": 0, 00:23:23.763 "rw_mbytes_per_sec": 0, 00:23:23.763 "r_mbytes_per_sec": 0, 00:23:23.763 "w_mbytes_per_sec": 0 00:23:23.763 }, 00:23:23.763 "claimed": true, 00:23:23.763 "claim_type": "exclusive_write", 00:23:23.763 "zoned": false, 00:23:23.763 "supported_io_types": { 00:23:23.763 "read": true, 00:23:23.763 "write": true, 00:23:23.763 "unmap": true, 00:23:23.763 "write_zeroes": true, 00:23:23.763 "flush": true, 00:23:23.763 "reset": true, 00:23:23.763 "compare": false, 00:23:23.763 "compare_and_write": false, 00:23:23.763 "abort": true, 00:23:23.763 "nvme_admin": false, 00:23:23.763 "nvme_io": false 00:23:23.763 }, 00:23:23.763 "memory_domains": [ 00:23:23.763 { 00:23:23.763 "dma_device_id": "system", 00:23:23.763 "dma_device_type": 1 00:23:23.763 }, 00:23:23.763 { 00:23:23.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:23.763 "dma_device_type": 2 00:23:23.763 } 00:23:23.763 ], 00:23:23.763 "driver_specific": {} 00:23:23.763 }' 00:23:23.763 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:23.763 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:24.041 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:24.041 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:24.041 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:24.041 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:24.041 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:24.041 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:24.041 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:24.041 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:24.041 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:24.308 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:24.309 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:24.309 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:24.309 07:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:24.309 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:24.309 "name": "BaseBdev2", 00:23:24.309 "aliases": [ 00:23:24.309 "c7bd2839-5797-4b56-a6e3-424cbd9077cc" 00:23:24.309 ], 00:23:24.309 "product_name": "Malloc disk", 00:23:24.309 "block_size": 512, 00:23:24.309 "num_blocks": 65536, 00:23:24.309 "uuid": "c7bd2839-5797-4b56-a6e3-424cbd9077cc", 00:23:24.309 "assigned_rate_limits": { 00:23:24.309 "rw_ios_per_sec": 0, 00:23:24.309 "rw_mbytes_per_sec": 0, 00:23:24.309 "r_mbytes_per_sec": 0, 00:23:24.309 "w_mbytes_per_sec": 0 00:23:24.309 }, 00:23:24.309 "claimed": true, 
00:23:24.309 "claim_type": "exclusive_write", 00:23:24.309 "zoned": false, 00:23:24.309 "supported_io_types": { 00:23:24.309 "read": true, 00:23:24.309 "write": true, 00:23:24.309 "unmap": true, 00:23:24.309 "write_zeroes": true, 00:23:24.309 "flush": true, 00:23:24.309 "reset": true, 00:23:24.309 "compare": false, 00:23:24.309 "compare_and_write": false, 00:23:24.309 "abort": true, 00:23:24.309 "nvme_admin": false, 00:23:24.309 "nvme_io": false 00:23:24.309 }, 00:23:24.309 "memory_domains": [ 00:23:24.309 { 00:23:24.309 "dma_device_id": "system", 00:23:24.309 "dma_device_type": 1 00:23:24.309 }, 00:23:24.309 { 00:23:24.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.309 "dma_device_type": 2 00:23:24.309 } 00:23:24.309 ], 00:23:24.309 "driver_specific": {} 00:23:24.309 }' 00:23:24.309 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:24.309 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:24.568 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:24.568 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:24.568 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:24.568 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:24.568 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:24.568 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:24.568 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:24.568 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:24.568 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:24.827 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:24.827 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:24.827 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:24.827 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:24.827 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:24.827 "name": "BaseBdev3", 00:23:24.827 "aliases": [ 00:23:24.827 "4a9a433a-d210-4ee9-ad9d-320d41839503" 00:23:24.827 ], 00:23:24.827 "product_name": "Malloc disk", 00:23:24.827 "block_size": 512, 00:23:24.827 "num_blocks": 65536, 00:23:24.827 "uuid": "4a9a433a-d210-4ee9-ad9d-320d41839503", 00:23:24.827 "assigned_rate_limits": { 00:23:24.827 "rw_ios_per_sec": 0, 00:23:24.827 "rw_mbytes_per_sec": 0, 00:23:24.827 "r_mbytes_per_sec": 0, 00:23:24.827 "w_mbytes_per_sec": 0 00:23:24.827 }, 00:23:24.827 "claimed": true, 00:23:24.827 "claim_type": "exclusive_write", 00:23:24.827 "zoned": false, 00:23:24.827 "supported_io_types": { 00:23:24.827 "read": true, 00:23:24.827 "write": true, 00:23:24.827 "unmap": true, 00:23:24.827 "write_zeroes": true, 00:23:24.827 "flush": true, 00:23:24.827 "reset": true, 00:23:24.827 "compare": false, 00:23:24.827 "compare_and_write": false, 00:23:24.827 "abort": true, 00:23:24.827 "nvme_admin": false, 00:23:24.827 "nvme_io": false 00:23:24.827 }, 00:23:24.827 "memory_domains": [ 
00:23:24.827 { 00:23:24.827 "dma_device_id": "system", 00:23:24.827 "dma_device_type": 1 00:23:24.827 }, 00:23:24.827 { 00:23:24.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.827 "dma_device_type": 2 00:23:24.827 } 00:23:24.827 ], 00:23:24.827 "driver_specific": {} 00:23:24.827 }' 00:23:24.827 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:24.827 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:25.085 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:25.085 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:25.085 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:25.085 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:25.085 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:25.085 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:25.085 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:25.085 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:25.344 07:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:25.344 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:25.344 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:25.344 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:25.344 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:25.603 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:25.603 "name": "BaseBdev4", 00:23:25.603 "aliases": [ 00:23:25.603 "a81b28c4-b657-4f44-9205-d8cd50f57ea0" 00:23:25.603 ], 00:23:25.603 "product_name": "Malloc disk", 00:23:25.603 "block_size": 512, 00:23:25.603 "num_blocks": 65536, 00:23:25.603 "uuid": "a81b28c4-b657-4f44-9205-d8cd50f57ea0", 00:23:25.603 "assigned_rate_limits": { 00:23:25.603 "rw_ios_per_sec": 0, 00:23:25.603 "rw_mbytes_per_sec": 0, 00:23:25.603 "r_mbytes_per_sec": 0, 00:23:25.603 "w_mbytes_per_sec": 0 00:23:25.603 }, 00:23:25.603 "claimed": true, 00:23:25.603 "claim_type": "exclusive_write", 00:23:25.603 "zoned": false, 00:23:25.603 "supported_io_types": { 00:23:25.603 "read": true, 00:23:25.603 "write": true, 00:23:25.604 "unmap": true, 00:23:25.604 "write_zeroes": true, 00:23:25.604 "flush": true, 00:23:25.604 "reset": true, 00:23:25.604 "compare": false, 00:23:25.604 "compare_and_write": false, 00:23:25.604 "abort": true, 00:23:25.604 "nvme_admin": false, 00:23:25.604 "nvme_io": false 00:23:25.604 }, 00:23:25.604 "memory_domains": [ 00:23:25.604 { 00:23:25.604 "dma_device_id": "system", 00:23:25.604 "dma_device_type": 1 00:23:25.604 }, 00:23:25.604 { 00:23:25.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:25.604 "dma_device_type": 2 00:23:25.604 } 00:23:25.604 ], 00:23:25.604 "driver_specific": {} 00:23:25.604 }' 00:23:25.604 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:25.604 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:23:25.604 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:25.604 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:25.604 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:25.863 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:25.863 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:25.863 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:25.863 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:25.863 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:25.863 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:25.863 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:25.863 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:26.122 [2024-07-12 07:33:59.896793] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:26.122 [2024-07-12 07:33:59.897078] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:26.122 [2024-07-12 07:33:59.897259] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:26.122 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:26.122 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:23:26.122 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:26.122 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:26.122 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:23:26.122 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:23:26.122 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:26.122 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:23:26.122 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:26.122 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:26.122 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:26.122 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:26.122 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:26.122 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:26.122 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:26.122 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.122 07:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:23:26.380 07:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:26.380 "name": "Existed_Raid", 00:23:26.380 "uuid": "7053a180-c709-41a9-b0b0-cef00c69b5d2", 00:23:26.380 "strip_size_kb": 64, 00:23:26.380 "state": "offline", 00:23:26.380 "raid_level": "concat", 00:23:26.380 "superblock": false, 00:23:26.380 "num_base_bdevs": 4, 00:23:26.380 "num_base_bdevs_discovered": 3, 00:23:26.380 "num_base_bdevs_operational": 3, 00:23:26.380 "base_bdevs_list": [ 00:23:26.380 { 00:23:26.380 "name": null, 00:23:26.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.380 "is_configured": false, 00:23:26.380 "data_offset": 0, 00:23:26.380 "data_size": 65536 00:23:26.380 }, 00:23:26.380 { 00:23:26.380 "name": "BaseBdev2", 00:23:26.380 "uuid": "c7bd2839-5797-4b56-a6e3-424cbd9077cc", 00:23:26.380 "is_configured": true, 00:23:26.380 "data_offset": 0, 00:23:26.380 "data_size": 65536 00:23:26.380 }, 00:23:26.380 { 00:23:26.380 "name": "BaseBdev3", 00:23:26.380 "uuid": "4a9a433a-d210-4ee9-ad9d-320d41839503", 00:23:26.380 "is_configured": true, 00:23:26.380 "data_offset": 0, 00:23:26.380 "data_size": 65536 00:23:26.380 }, 00:23:26.380 { 00:23:26.380 "name": "BaseBdev4", 00:23:26.380 "uuid": "a81b28c4-b657-4f44-9205-d8cd50f57ea0", 00:23:26.380 "is_configured": true, 00:23:26.380 "data_offset": 0, 00:23:26.380 "data_size": 65536 00:23:26.380 } 00:23:26.380 ] 00:23:26.381 }' 00:23:26.381 07:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:26.381 07:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.948 07:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:26.948 07:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:27.207 07:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.207 07:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:27.467 07:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:27.467 07:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:27.467 07:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:27.467 [2024-07-12 07:34:01.277970] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:27.467 07:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:27.467 07:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:27.467 07:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.467 07:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:27.725 07:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:27.726 07:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:27.726 07:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:28.292 [2024-07-12 07:34:01.869444] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:28.292 07:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:28.292 07:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:28.292 07:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.292 07:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:28.292 07:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:28.292 07:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:28.292 07:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:28.551 [2024-07-12 07:34:02.294690] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:28.551 [2024-07-12 07:34:02.295049] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:23:28.551 07:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:28.551 07:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:28.551 07:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.551 07:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:28.810 07:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:28.810 07:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:28.810 07:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:23:28.810 07:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:23:28.810 07:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:28.810 07:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:29.069 BaseBdev2 00:23:29.069 07:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:23:29.069 07:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:23:29.069 07:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:29.069 07:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:29.069 07:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:29.069 07:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:29.069 07:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:29.327 07:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:29.327 [ 00:23:29.327 { 00:23:29.327 "name": "BaseBdev2", 00:23:29.327 "aliases": [ 00:23:29.327 "de8e3b18-023d-441a-85d9-49011fe04fb5" 00:23:29.327 ], 00:23:29.327 "product_name": "Malloc disk", 00:23:29.327 "block_size": 512, 00:23:29.327 "num_blocks": 65536, 00:23:29.327 "uuid": "de8e3b18-023d-441a-85d9-49011fe04fb5", 00:23:29.327 "assigned_rate_limits": { 00:23:29.327 "rw_ios_per_sec": 0, 00:23:29.327 "rw_mbytes_per_sec": 0, 00:23:29.327 "r_mbytes_per_sec": 0, 00:23:29.327 "w_mbytes_per_sec": 0 00:23:29.327 }, 00:23:29.327 "claimed": false, 00:23:29.327 "zoned": false, 00:23:29.327 "supported_io_types": { 00:23:29.327 "read": true, 00:23:29.327 "write": true, 00:23:29.327 "unmap": true, 00:23:29.327 "write_zeroes": true, 00:23:29.327 "flush": true, 00:23:29.327 "reset": true, 00:23:29.327 "compare": false, 00:23:29.327 "compare_and_write": false, 00:23:29.327 "abort": true, 00:23:29.327 "nvme_admin": false, 00:23:29.327 "nvme_io": false 00:23:29.327 }, 00:23:29.327 "memory_domains": [ 00:23:29.327 { 00:23:29.327 "dma_device_id": "system", 00:23:29.327 "dma_device_type": 1 00:23:29.327 }, 00:23:29.327 { 00:23:29.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.327 "dma_device_type": 2 00:23:29.327 } 00:23:29.327 ], 00:23:29.327 "driver_specific": {} 00:23:29.327 } 00:23:29.327 ] 00:23:29.327 07:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:29.327 07:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:29.327 07:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:29.327 07:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:29.584 BaseBdev3 00:23:29.584 07:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:29.584 07:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:23:29.584 07:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:29.584 07:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:29.584 07:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:29.584 07:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:29.584 07:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:29.842 07:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:30.099 [ 00:23:30.099 { 00:23:30.099 "name": "BaseBdev3", 00:23:30.099 "aliases": [ 00:23:30.099 "c5dbac5b-49c8-4542-898a-96780c16507e" 00:23:30.099 ], 00:23:30.099 "product_name": "Malloc disk", 00:23:30.099 "block_size": 512, 00:23:30.099 "num_blocks": 65536, 00:23:30.099 "uuid": "c5dbac5b-49c8-4542-898a-96780c16507e", 00:23:30.099 "assigned_rate_limits": { 00:23:30.099 "rw_ios_per_sec": 0, 00:23:30.099 "rw_mbytes_per_sec": 0, 00:23:30.099 "r_mbytes_per_sec": 0, 00:23:30.099 "w_mbytes_per_sec": 0 00:23:30.099 }, 00:23:30.099 
"claimed": false, 00:23:30.099 "zoned": false, 00:23:30.099 "supported_io_types": { 00:23:30.099 "read": true, 00:23:30.099 "write": true, 00:23:30.099 "unmap": true, 00:23:30.099 "write_zeroes": true, 00:23:30.099 "flush": true, 00:23:30.099 "reset": true, 00:23:30.099 "compare": false, 00:23:30.099 "compare_and_write": false, 00:23:30.099 "abort": true, 00:23:30.099 "nvme_admin": false, 00:23:30.099 "nvme_io": false 00:23:30.099 }, 00:23:30.099 "memory_domains": [ 00:23:30.099 { 00:23:30.099 "dma_device_id": "system", 00:23:30.099 "dma_device_type": 1 00:23:30.099 }, 00:23:30.099 { 00:23:30.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.099 "dma_device_type": 2 00:23:30.099 } 00:23:30.099 ], 00:23:30.099 "driver_specific": {} 00:23:30.099 } 00:23:30.099 ] 00:23:30.099 07:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:30.099 07:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:30.099 07:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:30.099 07:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:30.358 BaseBdev4 00:23:30.358 07:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:23:30.358 07:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:23:30.358 07:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:30.358 07:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:30.358 07:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:30.358 07:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:30.358 07:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:30.616 07:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:30.616 [ 00:23:30.616 { 00:23:30.616 "name": "BaseBdev4", 00:23:30.616 "aliases": [ 00:23:30.616 "e19fe2d9-a3dc-4209-9649-4f873d40d625" 00:23:30.616 ], 00:23:30.616 "product_name": "Malloc disk", 00:23:30.616 "block_size": 512, 00:23:30.616 "num_blocks": 65536, 00:23:30.616 "uuid": "e19fe2d9-a3dc-4209-9649-4f873d40d625", 00:23:30.616 "assigned_rate_limits": { 00:23:30.616 "rw_ios_per_sec": 0, 00:23:30.616 "rw_mbytes_per_sec": 0, 00:23:30.616 "r_mbytes_per_sec": 0, 00:23:30.616 "w_mbytes_per_sec": 0 00:23:30.616 }, 00:23:30.616 "claimed": false, 00:23:30.616 "zoned": false, 00:23:30.616 "supported_io_types": { 00:23:30.616 "read": true, 00:23:30.616 "write": true, 00:23:30.616 "unmap": true, 00:23:30.616 "write_zeroes": true, 00:23:30.616 "flush": true, 00:23:30.616 "reset": true, 00:23:30.616 "compare": false, 00:23:30.616 "compare_and_write": false, 00:23:30.616 "abort": true, 00:23:30.616 "nvme_admin": false, 00:23:30.617 "nvme_io": false 00:23:30.617 }, 00:23:30.617 "memory_domains": [ 00:23:30.617 { 00:23:30.617 "dma_device_id": "system", 00:23:30.617 "dma_device_type": 1 00:23:30.617 }, 00:23:30.617 { 00:23:30.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:23:30.617 "dma_device_type": 2 00:23:30.617 } 00:23:30.617 ], 00:23:30.617 "driver_specific": {} 00:23:30.617 } 00:23:30.617 ] 00:23:30.617 07:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:30.617 07:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:30.617 07:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:30.617 07:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:30.875 [2024-07-12 07:34:04.640866] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:30.875 [2024-07-12 07:34:04.641230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:30.875 [2024-07-12 07:34:04.641357] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:30.875 [2024-07-12 07:34:04.643897] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:30.875 [2024-07-12 07:34:04.644053] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:30.875 07:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:30.875 07:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:30.876 07:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:30.876 07:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:30.876 07:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:30.876 07:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:30.876 07:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:30.876 07:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:30.876 07:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:30.876 07:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:30.876 07:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:30.876 07:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.135 07:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:31.135 "name": "Existed_Raid", 00:23:31.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.135 "strip_size_kb": 64, 00:23:31.135 "state": "configuring", 00:23:31.135 "raid_level": "concat", 00:23:31.135 "superblock": false, 00:23:31.135 "num_base_bdevs": 4, 00:23:31.135 "num_base_bdevs_discovered": 3, 00:23:31.135 "num_base_bdevs_operational": 4, 00:23:31.135 "base_bdevs_list": [ 00:23:31.135 { 00:23:31.135 "name": "BaseBdev1", 00:23:31.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.135 "is_configured": false, 00:23:31.135 "data_offset": 0, 00:23:31.135 "data_size": 0 00:23:31.135 }, 00:23:31.135 { 
00:23:31.135 "name": "BaseBdev2", 00:23:31.135 "uuid": "de8e3b18-023d-441a-85d9-49011fe04fb5", 00:23:31.135 "is_configured": true, 00:23:31.135 "data_offset": 0, 00:23:31.135 "data_size": 65536 00:23:31.135 }, 00:23:31.135 { 00:23:31.135 "name": "BaseBdev3", 00:23:31.135 "uuid": "c5dbac5b-49c8-4542-898a-96780c16507e", 00:23:31.135 "is_configured": true, 00:23:31.135 "data_offset": 0, 00:23:31.135 "data_size": 65536 00:23:31.135 }, 00:23:31.135 { 00:23:31.135 "name": "BaseBdev4", 00:23:31.135 "uuid": "e19fe2d9-a3dc-4209-9649-4f873d40d625", 00:23:31.135 "is_configured": true, 00:23:31.135 "data_offset": 0, 00:23:31.135 "data_size": 65536 00:23:31.135 } 00:23:31.135 ] 00:23:31.135 }' 00:23:31.135 07:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:31.135 07:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.722 07:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:31.979 [2024-07-12 07:34:05.809109] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:31.979 07:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:31.979 07:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:31.979 07:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:31.979 07:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:31.979 07:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:31.979 07:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:31.979 07:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:31.979 07:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:31.979 07:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:31.979 07:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:31.979 07:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.979 07:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:32.238 07:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:32.238 "name": "Existed_Raid", 00:23:32.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.238 "strip_size_kb": 64, 00:23:32.238 "state": "configuring", 00:23:32.238 "raid_level": "concat", 00:23:32.238 "superblock": false, 00:23:32.238 "num_base_bdevs": 4, 00:23:32.238 "num_base_bdevs_discovered": 2, 00:23:32.238 "num_base_bdevs_operational": 4, 00:23:32.238 "base_bdevs_list": [ 00:23:32.238 { 00:23:32.238 "name": "BaseBdev1", 00:23:32.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.238 "is_configured": false, 00:23:32.238 "data_offset": 0, 00:23:32.238 "data_size": 0 00:23:32.238 }, 00:23:32.238 { 00:23:32.238 "name": null, 00:23:32.238 "uuid": "de8e3b18-023d-441a-85d9-49011fe04fb5", 00:23:32.238 "is_configured": false, 00:23:32.238 
"data_offset": 0, 00:23:32.238 "data_size": 65536 00:23:32.238 }, 00:23:32.238 { 00:23:32.238 "name": "BaseBdev3", 00:23:32.238 "uuid": "c5dbac5b-49c8-4542-898a-96780c16507e", 00:23:32.238 "is_configured": true, 00:23:32.238 "data_offset": 0, 00:23:32.238 "data_size": 65536 00:23:32.238 }, 00:23:32.238 { 00:23:32.238 "name": "BaseBdev4", 00:23:32.238 "uuid": "e19fe2d9-a3dc-4209-9649-4f873d40d625", 00:23:32.238 "is_configured": true, 00:23:32.238 "data_offset": 0, 00:23:32.238 "data_size": 65536 00:23:32.238 } 00:23:32.238 ] 00:23:32.238 }' 00:23:32.238 07:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:32.238 07:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.804 07:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.804 07:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:33.062 07:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:23:33.062 07:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:33.320 [2024-07-12 07:34:07.078756] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:33.320 BaseBdev1 00:23:33.320 07:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:33.320 07:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:23:33.320 07:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:33.320 07:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:33.320 07:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:33.320 07:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:33.320 07:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:33.580 07:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:33.839 [ 00:23:33.839 { 00:23:33.839 "name": "BaseBdev1", 00:23:33.839 "aliases": [ 00:23:33.839 "f745021a-ea8e-48b2-88f2-0c6ded46f8ab" 00:23:33.839 ], 00:23:33.839 "product_name": "Malloc disk", 00:23:33.839 "block_size": 512, 00:23:33.839 "num_blocks": 65536, 00:23:33.839 "uuid": "f745021a-ea8e-48b2-88f2-0c6ded46f8ab", 00:23:33.839 "assigned_rate_limits": { 00:23:33.839 "rw_ios_per_sec": 0, 00:23:33.839 "rw_mbytes_per_sec": 0, 00:23:33.839 "r_mbytes_per_sec": 0, 00:23:33.839 "w_mbytes_per_sec": 0 00:23:33.839 }, 00:23:33.839 "claimed": true, 00:23:33.839 "claim_type": "exclusive_write", 00:23:33.839 "zoned": false, 00:23:33.839 "supported_io_types": { 00:23:33.839 "read": true, 00:23:33.839 "write": true, 00:23:33.839 "unmap": true, 00:23:33.839 "write_zeroes": true, 00:23:33.839 "flush": true, 00:23:33.839 "reset": true, 00:23:33.839 "compare": false, 00:23:33.839 "compare_and_write": false, 00:23:33.839 "abort": true, 00:23:33.839 "nvme_admin": false, 
00:23:33.839 "nvme_io": false 00:23:33.839 }, 00:23:33.839 "memory_domains": [ 00:23:33.839 { 00:23:33.839 "dma_device_id": "system", 00:23:33.839 "dma_device_type": 1 00:23:33.839 }, 00:23:33.839 { 00:23:33.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:33.839 "dma_device_type": 2 00:23:33.839 } 00:23:33.839 ], 00:23:33.839 "driver_specific": {} 00:23:33.839 } 00:23:33.839 ] 00:23:33.839 07:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:33.839 07:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:33.839 07:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:33.839 07:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:33.839 07:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:33.839 07:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:33.839 07:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:33.839 07:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:33.839 07:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:33.839 07:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:33.839 07:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:33.839 07:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:33.839 07:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.099 07:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:34.099 "name": "Existed_Raid", 00:23:34.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.099 "strip_size_kb": 64, 00:23:34.099 "state": "configuring", 00:23:34.099 "raid_level": "concat", 00:23:34.099 "superblock": false, 00:23:34.099 "num_base_bdevs": 4, 00:23:34.099 "num_base_bdevs_discovered": 3, 00:23:34.099 "num_base_bdevs_operational": 4, 00:23:34.099 "base_bdevs_list": [ 00:23:34.099 { 00:23:34.099 "name": "BaseBdev1", 00:23:34.099 "uuid": "f745021a-ea8e-48b2-88f2-0c6ded46f8ab", 00:23:34.099 "is_configured": true, 00:23:34.099 "data_offset": 0, 00:23:34.099 "data_size": 65536 00:23:34.099 }, 00:23:34.099 { 00:23:34.099 "name": null, 00:23:34.099 "uuid": "de8e3b18-023d-441a-85d9-49011fe04fb5", 00:23:34.099 "is_configured": false, 00:23:34.099 "data_offset": 0, 00:23:34.099 "data_size": 65536 00:23:34.099 }, 00:23:34.099 { 00:23:34.099 "name": "BaseBdev3", 00:23:34.099 "uuid": "c5dbac5b-49c8-4542-898a-96780c16507e", 00:23:34.099 "is_configured": true, 00:23:34.099 "data_offset": 0, 00:23:34.099 "data_size": 65536 00:23:34.099 }, 00:23:34.099 { 00:23:34.099 "name": "BaseBdev4", 00:23:34.099 "uuid": "e19fe2d9-a3dc-4209-9649-4f873d40d625", 00:23:34.099 "is_configured": true, 00:23:34.099 "data_offset": 0, 00:23:34.099 "data_size": 65536 00:23:34.099 } 00:23:34.099 ] 00:23:34.099 }' 00:23:34.099 07:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:34.099 07:34:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:34.668 07:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:34.668 07:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.963 07:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:34.963 07:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:34.963 [2024-07-12 07:34:08.827237] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:35.229 07:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:35.229 07:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:35.229 07:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:35.229 07:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:35.229 07:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:35.229 07:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:35.229 07:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:35.229 07:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:35.229 07:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:35.229 07:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:35.230 07:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.230 07:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:35.488 07:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:35.488 "name": "Existed_Raid", 00:23:35.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.488 "strip_size_kb": 64, 00:23:35.488 "state": "configuring", 00:23:35.488 "raid_level": "concat", 00:23:35.488 "superblock": false, 00:23:35.488 "num_base_bdevs": 4, 00:23:35.488 "num_base_bdevs_discovered": 2, 00:23:35.488 "num_base_bdevs_operational": 4, 00:23:35.488 "base_bdevs_list": [ 00:23:35.488 { 00:23:35.488 "name": "BaseBdev1", 00:23:35.488 "uuid": "f745021a-ea8e-48b2-88f2-0c6ded46f8ab", 00:23:35.488 "is_configured": true, 00:23:35.488 "data_offset": 0, 00:23:35.488 "data_size": 65536 00:23:35.488 }, 00:23:35.488 { 00:23:35.488 "name": null, 00:23:35.488 "uuid": "de8e3b18-023d-441a-85d9-49011fe04fb5", 00:23:35.488 "is_configured": false, 00:23:35.488 "data_offset": 0, 00:23:35.488 "data_size": 65536 00:23:35.488 }, 00:23:35.488 { 00:23:35.488 "name": null, 00:23:35.488 "uuid": "c5dbac5b-49c8-4542-898a-96780c16507e", 00:23:35.488 "is_configured": false, 00:23:35.488 "data_offset": 0, 00:23:35.488 "data_size": 65536 00:23:35.488 }, 00:23:35.488 { 00:23:35.488 "name": "BaseBdev4", 00:23:35.488 "uuid": "e19fe2d9-a3dc-4209-9649-4f873d40d625", 00:23:35.488 "is_configured": true, 
00:23:35.488 "data_offset": 0, 00:23:35.488 "data_size": 65536 00:23:35.488 } 00:23:35.488 ] 00:23:35.488 }' 00:23:35.488 07:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:35.489 07:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.058 07:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:36.058 07:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.317 07:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:36.317 07:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:36.577 [2024-07-12 07:34:10.279535] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:36.577 07:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:36.577 07:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:36.577 07:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:36.577 07:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:36.577 07:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:36.577 07:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:36.577 07:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:36.577 07:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:36.577 07:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:36.577 07:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:36.577 07:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.577 07:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:36.836 07:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:36.836 "name": "Existed_Raid", 00:23:36.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.836 "strip_size_kb": 64, 00:23:36.836 "state": "configuring", 00:23:36.836 "raid_level": "concat", 00:23:36.836 "superblock": false, 00:23:36.836 "num_base_bdevs": 4, 00:23:36.836 "num_base_bdevs_discovered": 3, 00:23:36.836 "num_base_bdevs_operational": 4, 00:23:36.836 "base_bdevs_list": [ 00:23:36.836 { 00:23:36.836 "name": "BaseBdev1", 00:23:36.836 "uuid": "f745021a-ea8e-48b2-88f2-0c6ded46f8ab", 00:23:36.836 "is_configured": true, 00:23:36.836 "data_offset": 0, 00:23:36.836 "data_size": 65536 00:23:36.836 }, 00:23:36.836 { 00:23:36.836 "name": null, 00:23:36.836 "uuid": "de8e3b18-023d-441a-85d9-49011fe04fb5", 00:23:36.836 "is_configured": false, 00:23:36.836 "data_offset": 0, 00:23:36.836 "data_size": 65536 00:23:36.836 }, 00:23:36.836 { 00:23:36.836 "name": "BaseBdev3", 00:23:36.836 
"uuid": "c5dbac5b-49c8-4542-898a-96780c16507e", 00:23:36.836 "is_configured": true, 00:23:36.836 "data_offset": 0, 00:23:36.836 "data_size": 65536 00:23:36.836 }, 00:23:36.836 { 00:23:36.836 "name": "BaseBdev4", 00:23:36.836 "uuid": "e19fe2d9-a3dc-4209-9649-4f873d40d625", 00:23:36.836 "is_configured": true, 00:23:36.836 "data_offset": 0, 00:23:36.836 "data_size": 65536 00:23:36.836 } 00:23:36.836 ] 00:23:36.836 }' 00:23:36.836 07:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:36.836 07:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.403 07:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.403 07:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:37.662 07:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:37.662 07:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:37.662 [2024-07-12 07:34:11.543800] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:37.921 07:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:37.921 07:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:37.921 07:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:37.921 07:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:37.921 07:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:37.921 07:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:37.921 07:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:37.921 07:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:37.921 07:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:37.921 07:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:37.921 07:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.921 07:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:38.180 07:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:38.180 "name": "Existed_Raid", 00:23:38.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.180 "strip_size_kb": 64, 00:23:38.180 "state": "configuring", 00:23:38.180 "raid_level": "concat", 00:23:38.180 "superblock": false, 00:23:38.180 "num_base_bdevs": 4, 00:23:38.180 "num_base_bdevs_discovered": 2, 00:23:38.180 "num_base_bdevs_operational": 4, 00:23:38.180 "base_bdevs_list": [ 00:23:38.180 { 00:23:38.180 "name": null, 00:23:38.180 "uuid": "f745021a-ea8e-48b2-88f2-0c6ded46f8ab", 00:23:38.180 "is_configured": false, 00:23:38.180 "data_offset": 0, 00:23:38.180 "data_size": 65536 00:23:38.180 }, 00:23:38.180 { 
00:23:38.180 "name": null, 00:23:38.180 "uuid": "de8e3b18-023d-441a-85d9-49011fe04fb5", 00:23:38.180 "is_configured": false, 00:23:38.180 "data_offset": 0, 00:23:38.180 "data_size": 65536 00:23:38.180 }, 00:23:38.180 { 00:23:38.180 "name": "BaseBdev3", 00:23:38.180 "uuid": "c5dbac5b-49c8-4542-898a-96780c16507e", 00:23:38.180 "is_configured": true, 00:23:38.180 "data_offset": 0, 00:23:38.180 "data_size": 65536 00:23:38.180 }, 00:23:38.180 { 00:23:38.180 "name": "BaseBdev4", 00:23:38.180 "uuid": "e19fe2d9-a3dc-4209-9649-4f873d40d625", 00:23:38.180 "is_configured": true, 00:23:38.180 "data_offset": 0, 00:23:38.180 "data_size": 65536 00:23:38.180 } 00:23:38.180 ] 00:23:38.180 }' 00:23:38.180 07:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:38.180 07:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.748 07:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.748 07:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:39.008 07:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:23:39.008 07:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:39.266 [2024-07-12 07:34:12.964329] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:39.266 07:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:39.266 07:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:39.266 07:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:39.266 07:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:39.266 07:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:39.266 07:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:39.266 07:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:39.266 07:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:39.266 07:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:39.266 07:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:39.266 07:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.266 07:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:39.524 07:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:39.524 "name": "Existed_Raid", 00:23:39.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.524 "strip_size_kb": 64, 00:23:39.524 "state": "configuring", 00:23:39.524 "raid_level": "concat", 00:23:39.524 "superblock": false, 00:23:39.524 "num_base_bdevs": 4, 00:23:39.524 "num_base_bdevs_discovered": 3, 00:23:39.524 
"num_base_bdevs_operational": 4, 00:23:39.524 "base_bdevs_list": [ 00:23:39.524 { 00:23:39.524 "name": null, 00:23:39.524 "uuid": "f745021a-ea8e-48b2-88f2-0c6ded46f8ab", 00:23:39.524 "is_configured": false, 00:23:39.524 "data_offset": 0, 00:23:39.524 "data_size": 65536 00:23:39.524 }, 00:23:39.524 { 00:23:39.524 "name": "BaseBdev2", 00:23:39.524 "uuid": "de8e3b18-023d-441a-85d9-49011fe04fb5", 00:23:39.524 "is_configured": true, 00:23:39.524 "data_offset": 0, 00:23:39.524 "data_size": 65536 00:23:39.524 }, 00:23:39.524 { 00:23:39.524 "name": "BaseBdev3", 00:23:39.524 "uuid": "c5dbac5b-49c8-4542-898a-96780c16507e", 00:23:39.524 "is_configured": true, 00:23:39.524 "data_offset": 0, 00:23:39.524 "data_size": 65536 00:23:39.524 }, 00:23:39.524 { 00:23:39.524 "name": "BaseBdev4", 00:23:39.524 "uuid": "e19fe2d9-a3dc-4209-9649-4f873d40d625", 00:23:39.524 "is_configured": true, 00:23:39.524 "data_offset": 0, 00:23:39.524 "data_size": 65536 00:23:39.524 } 00:23:39.524 ] 00:23:39.524 }' 00:23:39.524 07:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:39.524 07:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.457 07:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.457 07:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:40.457 07:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:23:40.457 07:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:40.457 07:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.715 07:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f745021a-ea8e-48b2-88f2-0c6ded46f8ab 00:23:40.974 [2024-07-12 07:34:14.670021] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:40.974 [2024-07-12 07:34:14.670305] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:23:40.974 [2024-07-12 07:34:14.670347] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:23:40.974 [2024-07-12 07:34:14.670540] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:23:40.974 [2024-07-12 07:34:14.671030] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:23:40.974 [2024-07-12 07:34:14.671146] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008180 00:23:40.974 [2024-07-12 07:34:14.671445] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:40.974 NewBaseBdev 00:23:40.974 07:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:23:40.974 07:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:23:40.974 07:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:40.974 07:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 
00:23:40.974 07:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:40.974 07:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:40.974 07:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:41.233 07:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:41.491 [ 00:23:41.491 { 00:23:41.491 "name": "NewBaseBdev", 00:23:41.491 "aliases": [ 00:23:41.491 "f745021a-ea8e-48b2-88f2-0c6ded46f8ab" 00:23:41.491 ], 00:23:41.491 "product_name": "Malloc disk", 00:23:41.491 "block_size": 512, 00:23:41.491 "num_blocks": 65536, 00:23:41.491 "uuid": "f745021a-ea8e-48b2-88f2-0c6ded46f8ab", 00:23:41.491 "assigned_rate_limits": { 00:23:41.491 "rw_ios_per_sec": 0, 00:23:41.491 "rw_mbytes_per_sec": 0, 00:23:41.491 "r_mbytes_per_sec": 0, 00:23:41.491 "w_mbytes_per_sec": 0 00:23:41.491 }, 00:23:41.491 "claimed": true, 00:23:41.491 "claim_type": "exclusive_write", 00:23:41.491 "zoned": false, 00:23:41.491 "supported_io_types": { 00:23:41.491 "read": true, 00:23:41.491 "write": true, 00:23:41.491 "unmap": true, 00:23:41.491 "write_zeroes": true, 00:23:41.491 "flush": true, 00:23:41.491 "reset": true, 00:23:41.491 "compare": false, 00:23:41.491 "compare_and_write": false, 00:23:41.491 "abort": true, 00:23:41.491 "nvme_admin": false, 00:23:41.491 "nvme_io": false 00:23:41.491 }, 00:23:41.491 "memory_domains": [ 00:23:41.491 { 00:23:41.491 "dma_device_id": "system", 00:23:41.491 "dma_device_type": 1 00:23:41.491 }, 00:23:41.491 { 00:23:41.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:41.491 "dma_device_type": 2 00:23:41.491 } 00:23:41.491 ], 00:23:41.491 "driver_specific": {} 00:23:41.491 } 00:23:41.491 ] 00:23:41.491 07:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:41.491 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:23:41.491 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:41.491 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:41.491 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:41.491 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:41.491 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:41.491 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:41.491 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:41.491 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:41.491 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:41.491 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.491 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:23:41.749 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:41.749 "name": "Existed_Raid", 00:23:41.749 "uuid": "1014cb5b-e2c4-46a5-85dc-a9ed96f5d261", 00:23:41.749 "strip_size_kb": 64, 00:23:41.749 "state": "online", 00:23:41.749 "raid_level": "concat", 00:23:41.749 "superblock": false, 00:23:41.749 "num_base_bdevs": 4, 00:23:41.749 "num_base_bdevs_discovered": 4, 00:23:41.749 "num_base_bdevs_operational": 4, 00:23:41.749 "base_bdevs_list": [ 00:23:41.749 { 00:23:41.749 "name": "NewBaseBdev", 00:23:41.749 "uuid": "f745021a-ea8e-48b2-88f2-0c6ded46f8ab", 00:23:41.749 "is_configured": true, 00:23:41.749 "data_offset": 0, 00:23:41.749 "data_size": 65536 00:23:41.749 }, 00:23:41.749 { 00:23:41.749 "name": "BaseBdev2", 00:23:41.749 "uuid": "de8e3b18-023d-441a-85d9-49011fe04fb5", 00:23:41.749 "is_configured": true, 00:23:41.749 "data_offset": 0, 00:23:41.749 "data_size": 65536 00:23:41.749 }, 00:23:41.749 { 00:23:41.749 "name": "BaseBdev3", 00:23:41.749 "uuid": "c5dbac5b-49c8-4542-898a-96780c16507e", 00:23:41.749 "is_configured": true, 00:23:41.749 "data_offset": 0, 00:23:41.749 "data_size": 65536 00:23:41.749 }, 00:23:41.749 { 00:23:41.749 "name": "BaseBdev4", 00:23:41.749 "uuid": "e19fe2d9-a3dc-4209-9649-4f873d40d625", 00:23:41.749 "is_configured": true, 00:23:41.749 "data_offset": 0, 00:23:41.749 "data_size": 65536 00:23:41.749 } 00:23:41.749 ] 00:23:41.749 }' 00:23:41.749 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:41.749 07:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.316 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:23:42.316 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:42.316 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:42.316 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:42.316 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:42.316 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:42.316 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:42.316 07:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:42.574 [2024-07-12 07:34:16.262706] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:42.574 07:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:42.574 "name": "Existed_Raid", 00:23:42.574 "aliases": [ 00:23:42.574 "1014cb5b-e2c4-46a5-85dc-a9ed96f5d261" 00:23:42.574 ], 00:23:42.574 "product_name": "Raid Volume", 00:23:42.574 "block_size": 512, 00:23:42.574 "num_blocks": 262144, 00:23:42.574 "uuid": "1014cb5b-e2c4-46a5-85dc-a9ed96f5d261", 00:23:42.574 "assigned_rate_limits": { 00:23:42.574 "rw_ios_per_sec": 0, 00:23:42.574 "rw_mbytes_per_sec": 0, 00:23:42.574 "r_mbytes_per_sec": 0, 00:23:42.574 "w_mbytes_per_sec": 0 00:23:42.574 }, 00:23:42.574 "claimed": false, 00:23:42.574 "zoned": false, 00:23:42.574 "supported_io_types": { 00:23:42.574 "read": true, 00:23:42.574 "write": true, 00:23:42.574 "unmap": true, 00:23:42.574 "write_zeroes": true, 00:23:42.574 "flush": 
true, 00:23:42.574 "reset": true, 00:23:42.574 "compare": false, 00:23:42.574 "compare_and_write": false, 00:23:42.574 "abort": false, 00:23:42.574 "nvme_admin": false, 00:23:42.574 "nvme_io": false 00:23:42.574 }, 00:23:42.574 "memory_domains": [ 00:23:42.574 { 00:23:42.574 "dma_device_id": "system", 00:23:42.574 "dma_device_type": 1 00:23:42.574 }, 00:23:42.574 { 00:23:42.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:42.574 "dma_device_type": 2 00:23:42.574 }, 00:23:42.574 { 00:23:42.574 "dma_device_id": "system", 00:23:42.574 "dma_device_type": 1 00:23:42.574 }, 00:23:42.574 { 00:23:42.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:42.574 "dma_device_type": 2 00:23:42.574 }, 00:23:42.574 { 00:23:42.574 "dma_device_id": "system", 00:23:42.574 "dma_device_type": 1 00:23:42.574 }, 00:23:42.574 { 00:23:42.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:42.574 "dma_device_type": 2 00:23:42.574 }, 00:23:42.574 { 00:23:42.574 "dma_device_id": "system", 00:23:42.575 "dma_device_type": 1 00:23:42.575 }, 00:23:42.575 { 00:23:42.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:42.575 "dma_device_type": 2 00:23:42.575 } 00:23:42.575 ], 00:23:42.575 "driver_specific": { 00:23:42.575 "raid": { 00:23:42.575 "uuid": "1014cb5b-e2c4-46a5-85dc-a9ed96f5d261", 00:23:42.575 "strip_size_kb": 64, 00:23:42.575 "state": "online", 00:23:42.575 "raid_level": "concat", 00:23:42.575 "superblock": false, 00:23:42.575 "num_base_bdevs": 4, 00:23:42.575 "num_base_bdevs_discovered": 4, 00:23:42.575 "num_base_bdevs_operational": 4, 00:23:42.575 "base_bdevs_list": [ 00:23:42.575 { 00:23:42.575 "name": "NewBaseBdev", 00:23:42.575 "uuid": "f745021a-ea8e-48b2-88f2-0c6ded46f8ab", 00:23:42.575 "is_configured": true, 00:23:42.575 "data_offset": 0, 00:23:42.575 "data_size": 65536 00:23:42.575 }, 00:23:42.575 { 00:23:42.575 "name": "BaseBdev2", 00:23:42.575 "uuid": "de8e3b18-023d-441a-85d9-49011fe04fb5", 00:23:42.575 "is_configured": true, 00:23:42.575 "data_offset": 0, 00:23:42.575 "data_size": 65536 00:23:42.575 }, 00:23:42.575 { 00:23:42.575 "name": "BaseBdev3", 00:23:42.575 "uuid": "c5dbac5b-49c8-4542-898a-96780c16507e", 00:23:42.575 "is_configured": true, 00:23:42.575 "data_offset": 0, 00:23:42.575 "data_size": 65536 00:23:42.575 }, 00:23:42.575 { 00:23:42.575 "name": "BaseBdev4", 00:23:42.575 "uuid": "e19fe2d9-a3dc-4209-9649-4f873d40d625", 00:23:42.575 "is_configured": true, 00:23:42.575 "data_offset": 0, 00:23:42.575 "data_size": 65536 00:23:42.575 } 00:23:42.575 ] 00:23:42.575 } 00:23:42.575 } 00:23:42.575 }' 00:23:42.575 07:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:42.575 07:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:23:42.575 BaseBdev2 00:23:42.575 BaseBdev3 00:23:42.575 BaseBdev4' 00:23:42.575 07:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:42.575 07:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:42.575 07:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:42.834 07:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:42.834 "name": "NewBaseBdev", 00:23:42.834 "aliases": [ 00:23:42.834 "f745021a-ea8e-48b2-88f2-0c6ded46f8ab" 00:23:42.834 ], 00:23:42.834 
"product_name": "Malloc disk", 00:23:42.834 "block_size": 512, 00:23:42.834 "num_blocks": 65536, 00:23:42.834 "uuid": "f745021a-ea8e-48b2-88f2-0c6ded46f8ab", 00:23:42.834 "assigned_rate_limits": { 00:23:42.834 "rw_ios_per_sec": 0, 00:23:42.834 "rw_mbytes_per_sec": 0, 00:23:42.834 "r_mbytes_per_sec": 0, 00:23:42.834 "w_mbytes_per_sec": 0 00:23:42.834 }, 00:23:42.834 "claimed": true, 00:23:42.834 "claim_type": "exclusive_write", 00:23:42.834 "zoned": false, 00:23:42.834 "supported_io_types": { 00:23:42.834 "read": true, 00:23:42.834 "write": true, 00:23:42.834 "unmap": true, 00:23:42.834 "write_zeroes": true, 00:23:42.834 "flush": true, 00:23:42.834 "reset": true, 00:23:42.834 "compare": false, 00:23:42.834 "compare_and_write": false, 00:23:42.834 "abort": true, 00:23:42.834 "nvme_admin": false, 00:23:42.834 "nvme_io": false 00:23:42.834 }, 00:23:42.834 "memory_domains": [ 00:23:42.834 { 00:23:42.834 "dma_device_id": "system", 00:23:42.834 "dma_device_type": 1 00:23:42.834 }, 00:23:42.834 { 00:23:42.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:42.834 "dma_device_type": 2 00:23:42.834 } 00:23:42.834 ], 00:23:42.834 "driver_specific": {} 00:23:42.834 }' 00:23:42.834 07:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:42.834 07:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:42.834 07:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:42.834 07:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:43.093 07:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:43.093 07:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:43.093 07:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:43.093 07:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:43.093 07:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:43.093 07:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:43.093 07:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:43.352 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:43.352 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:43.352 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:43.352 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:43.610 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:43.610 "name": "BaseBdev2", 00:23:43.610 "aliases": [ 00:23:43.610 "de8e3b18-023d-441a-85d9-49011fe04fb5" 00:23:43.610 ], 00:23:43.610 "product_name": "Malloc disk", 00:23:43.610 "block_size": 512, 00:23:43.610 "num_blocks": 65536, 00:23:43.610 "uuid": "de8e3b18-023d-441a-85d9-49011fe04fb5", 00:23:43.610 "assigned_rate_limits": { 00:23:43.610 "rw_ios_per_sec": 0, 00:23:43.610 "rw_mbytes_per_sec": 0, 00:23:43.610 "r_mbytes_per_sec": 0, 00:23:43.610 "w_mbytes_per_sec": 0 00:23:43.610 }, 00:23:43.610 "claimed": true, 00:23:43.610 "claim_type": "exclusive_write", 00:23:43.610 "zoned": false, 00:23:43.610 "supported_io_types": { 
00:23:43.610 "read": true, 00:23:43.610 "write": true, 00:23:43.610 "unmap": true, 00:23:43.610 "write_zeroes": true, 00:23:43.610 "flush": true, 00:23:43.610 "reset": true, 00:23:43.610 "compare": false, 00:23:43.610 "compare_and_write": false, 00:23:43.610 "abort": true, 00:23:43.610 "nvme_admin": false, 00:23:43.610 "nvme_io": false 00:23:43.610 }, 00:23:43.610 "memory_domains": [ 00:23:43.610 { 00:23:43.610 "dma_device_id": "system", 00:23:43.610 "dma_device_type": 1 00:23:43.611 }, 00:23:43.611 { 00:23:43.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:43.611 "dma_device_type": 2 00:23:43.611 } 00:23:43.611 ], 00:23:43.611 "driver_specific": {} 00:23:43.611 }' 00:23:43.611 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:43.611 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:43.611 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:43.611 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:43.611 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:43.611 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:43.611 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:43.611 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:43.869 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:43.869 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:43.869 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:43.869 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:43.869 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:43.869 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:43.869 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:44.128 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:44.128 "name": "BaseBdev3", 00:23:44.128 "aliases": [ 00:23:44.128 "c5dbac5b-49c8-4542-898a-96780c16507e" 00:23:44.128 ], 00:23:44.128 "product_name": "Malloc disk", 00:23:44.128 "block_size": 512, 00:23:44.128 "num_blocks": 65536, 00:23:44.128 "uuid": "c5dbac5b-49c8-4542-898a-96780c16507e", 00:23:44.128 "assigned_rate_limits": { 00:23:44.128 "rw_ios_per_sec": 0, 00:23:44.128 "rw_mbytes_per_sec": 0, 00:23:44.128 "r_mbytes_per_sec": 0, 00:23:44.128 "w_mbytes_per_sec": 0 00:23:44.128 }, 00:23:44.128 "claimed": true, 00:23:44.128 "claim_type": "exclusive_write", 00:23:44.128 "zoned": false, 00:23:44.128 "supported_io_types": { 00:23:44.128 "read": true, 00:23:44.128 "write": true, 00:23:44.128 "unmap": true, 00:23:44.128 "write_zeroes": true, 00:23:44.128 "flush": true, 00:23:44.128 "reset": true, 00:23:44.128 "compare": false, 00:23:44.128 "compare_and_write": false, 00:23:44.128 "abort": true, 00:23:44.128 "nvme_admin": false, 00:23:44.128 "nvme_io": false 00:23:44.128 }, 00:23:44.128 "memory_domains": [ 00:23:44.128 { 00:23:44.128 "dma_device_id": "system", 00:23:44.128 "dma_device_type": 1 00:23:44.128 }, 
00:23:44.128 { 00:23:44.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.128 "dma_device_type": 2 00:23:44.128 } 00:23:44.128 ], 00:23:44.128 "driver_specific": {} 00:23:44.128 }' 00:23:44.128 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:44.128 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:44.128 07:34:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:44.128 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:44.386 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:44.386 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:44.386 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:44.386 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:44.386 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:44.386 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:44.386 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:44.645 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:44.645 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:44.645 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:44.645 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:44.903 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:44.904 "name": "BaseBdev4", 00:23:44.904 "aliases": [ 00:23:44.904 "e19fe2d9-a3dc-4209-9649-4f873d40d625" 00:23:44.904 ], 00:23:44.904 "product_name": "Malloc disk", 00:23:44.904 "block_size": 512, 00:23:44.904 "num_blocks": 65536, 00:23:44.904 "uuid": "e19fe2d9-a3dc-4209-9649-4f873d40d625", 00:23:44.904 "assigned_rate_limits": { 00:23:44.904 "rw_ios_per_sec": 0, 00:23:44.904 "rw_mbytes_per_sec": 0, 00:23:44.904 "r_mbytes_per_sec": 0, 00:23:44.904 "w_mbytes_per_sec": 0 00:23:44.904 }, 00:23:44.904 "claimed": true, 00:23:44.904 "claim_type": "exclusive_write", 00:23:44.904 "zoned": false, 00:23:44.904 "supported_io_types": { 00:23:44.904 "read": true, 00:23:44.904 "write": true, 00:23:44.904 "unmap": true, 00:23:44.904 "write_zeroes": true, 00:23:44.904 "flush": true, 00:23:44.904 "reset": true, 00:23:44.904 "compare": false, 00:23:44.904 "compare_and_write": false, 00:23:44.904 "abort": true, 00:23:44.904 "nvme_admin": false, 00:23:44.904 "nvme_io": false 00:23:44.904 }, 00:23:44.904 "memory_domains": [ 00:23:44.904 { 00:23:44.904 "dma_device_id": "system", 00:23:44.904 "dma_device_type": 1 00:23:44.904 }, 00:23:44.904 { 00:23:44.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.904 "dma_device_type": 2 00:23:44.904 } 00:23:44.904 ], 00:23:44.904 "driver_specific": {} 00:23:44.904 }' 00:23:44.904 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:44.904 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:44.904 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
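(Annotation: the trace around this point cycles through $base_bdev_names and applies the same four assertions to every base bdev of Existed_Raid: a 512-byte block size, and null md_size, md_interleave and dif_type, i.e. plain Malloc disks with no metadata or DIF configured. Condensed into one loop, the pattern looks roughly like the sketch below. This is reconstructed from the trace, not the literal test source; the rpc_py shorthand and the here-string plumbing are assumptions.)

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for name in $base_bdev_names; do
        # Dump one bdev's descriptor and keep the single JSON object.
        base_bdev_info=$($rpc_py bdev_get_bdevs -b "$name" | jq '.[]')
        # Malloc base bdevs are expected to expose 512-byte blocks and
        # carry no metadata, interleave or DIF configuration.
        [[ $(jq .block_size <<< "$base_bdev_info") == 512 ]]
        [[ $(jq .md_size <<< "$base_bdev_info") == null ]]
        [[ $(jq .md_interleave <<< "$base_bdev_info") == null ]]
        [[ $(jq .dif_type <<< "$base_bdev_info") == null ]]
    done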
00:23:44.904 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:44.904 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:44.904 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:44.904 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:45.195 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:45.195 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:45.195 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:45.195 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:45.195 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:45.195 07:34:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:45.501 [2024-07-12 07:34:19.226291] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:45.501 [2024-07-12 07:34:19.226567] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:45.501 [2024-07-12 07:34:19.226755] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:45.501 [2024-07-12 07:34:19.226938] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:45.501 [2024-07-12 07:34:19.227019] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name Existed_Raid, state offline 00:23:45.501 07:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 147279 00:23:45.501 07:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 147279 ']' 00:23:45.501 07:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 147279 00:23:45.501 07:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:23:45.501 07:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:45.501 07:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 147279 00:23:45.501 07:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:45.501 07:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:45.501 07:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 147279' 00:23:45.501 killing process with pid 147279 00:23:45.501 07:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 147279 00:23:45.501 [2024-07-12 07:34:19.279634] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:45.501 07:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 147279 00:23:45.501 [2024-07-12 07:34:19.357133] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:46.066 07:34:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:23:46.066 00:23:46.066 real 0m33.098s 00:23:46.066 user 1m0.694s 00:23:46.066 sys 0m5.777s 00:23:46.066 07:34:19 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:46.066 07:34:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.066 ************************************ 00:23:46.066 END TEST raid_state_function_test 00:23:46.066 ************************************ 00:23:46.066 07:34:19 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:23:46.066 07:34:19 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:23:46.067 07:34:19 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:46.067 07:34:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:46.067 ************************************ 00:23:46.067 START TEST raid_state_function_test_sb 00:23:46.067 ************************************ 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 4 true 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:46.067 07:34:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=148368 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 148368' 00:23:46.067 Process raid pid: 148368 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 148368 /var/tmp/spdk-raid.sock 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 148368 ']' 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:46.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:46.067 07:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.067 [2024-07-12 07:34:19.945497] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
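(Annotation: as the startup sequence here shows, the test does not talk to a system-wide SPDK instance. It spawns its own bdev_svc app on a private RPC socket (-r /var/tmp/spdk-raid.sock) with bdev_raid debug logging enabled (-L bdev_raid), blocks on waitforlisten until that socket accepts connections, and tears the app down with killprocess at the end, as the earlier raid_state_function_test run did with pid 147279. A minimal sketch of that harness pattern follows; the backgrounding and pid capture are assumed, while waitforlisten and killprocess are the autotest_common.sh helpers visible in the trace.)

    # Start a private SPDK bdev service for this test only.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Block until the app's RPC socket accepts connections.
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock

    # ... drive the test through rpc.py -s /var/tmp/spdk-raid.sock ...

    killprocess "$raid_pid"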
00:23:46.067 [2024-07-12 07:34:19.945971] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.324 [2024-07-12 07:34:20.102352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.324 [2024-07-12 07:34:20.191919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.582 [2024-07-12 07:34:20.278441] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:47.150 07:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:47.150 07:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:23:47.150 07:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:47.150 [2024-07-12 07:34:20.989667] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:47.150 [2024-07-12 07:34:20.989970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:47.150 [2024-07-12 07:34:20.990056] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:47.150 [2024-07-12 07:34:20.990109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:47.150 [2024-07-12 07:34:20.990136] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:47.150 [2024-07-12 07:34:20.990205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:47.150 [2024-07-12 07:34:20.990364] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:47.150 [2024-07-12 07:34:20.990455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:47.150 07:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:47.150 07:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:47.150 07:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:47.150 07:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:47.150 07:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:47.150 07:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:47.150 07:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:47.150 07:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:47.150 07:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:47.150 07:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:47.150 07:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.150 07:34:21 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:47.407 07:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:47.407 "name": "Existed_Raid", 00:23:47.407 "uuid": "8cf63840-bf7e-4e1c-b3c0-3ecef407c6a8", 00:23:47.407 "strip_size_kb": 64, 00:23:47.407 "state": "configuring", 00:23:47.407 "raid_level": "concat", 00:23:47.407 "superblock": true, 00:23:47.407 "num_base_bdevs": 4, 00:23:47.407 "num_base_bdevs_discovered": 0, 00:23:47.407 "num_base_bdevs_operational": 4, 00:23:47.407 "base_bdevs_list": [ 00:23:47.407 { 00:23:47.407 "name": "BaseBdev1", 00:23:47.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.407 "is_configured": false, 00:23:47.407 "data_offset": 0, 00:23:47.407 "data_size": 0 00:23:47.407 }, 00:23:47.407 { 00:23:47.407 "name": "BaseBdev2", 00:23:47.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.407 "is_configured": false, 00:23:47.407 "data_offset": 0, 00:23:47.407 "data_size": 0 00:23:47.407 }, 00:23:47.407 { 00:23:47.407 "name": "BaseBdev3", 00:23:47.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.407 "is_configured": false, 00:23:47.407 "data_offset": 0, 00:23:47.407 "data_size": 0 00:23:47.407 }, 00:23:47.407 { 00:23:47.407 "name": "BaseBdev4", 00:23:47.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.407 "is_configured": false, 00:23:47.407 "data_offset": 0, 00:23:47.407 "data_size": 0 00:23:47.407 } 00:23:47.407 ] 00:23:47.407 }' 00:23:47.407 07:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:47.407 07:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:47.970 07:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:48.226 [2024-07-12 07:34:21.954151] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:48.226 [2024-07-12 07:34:21.954457] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:23:48.226 07:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:48.483 [2024-07-12 07:34:22.151118] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:48.483 [2024-07-12 07:34:22.151473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:48.483 [2024-07-12 07:34:22.151635] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:48.483 [2024-07-12 07:34:22.151863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:48.483 [2024-07-12 07:34:22.152008] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:48.483 [2024-07-12 07:34:22.152104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:48.483 [2024-07-12 07:34:22.152338] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:48.483 [2024-07-12 07:34:22.152462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:48.483 07:34:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:48.740 [2024-07-12 07:34:22.412615] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:48.740 BaseBdev1 00:23:48.740 07:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:48.740 07:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:23:48.740 07:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:48.740 07:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:23:48.740 07:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:48.740 07:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:48.740 07:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:48.997 07:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:48.997 [ 00:23:48.997 { 00:23:48.997 "name": "BaseBdev1", 00:23:48.997 "aliases": [ 00:23:48.997 "e70bb7ef-64c6-4911-bd8c-1d329bb75462" 00:23:48.997 ], 00:23:48.997 "product_name": "Malloc disk", 00:23:48.997 "block_size": 512, 00:23:48.997 "num_blocks": 65536, 00:23:48.997 "uuid": "e70bb7ef-64c6-4911-bd8c-1d329bb75462", 00:23:48.997 "assigned_rate_limits": { 00:23:48.997 "rw_ios_per_sec": 0, 00:23:48.997 "rw_mbytes_per_sec": 0, 00:23:48.997 "r_mbytes_per_sec": 0, 00:23:48.997 "w_mbytes_per_sec": 0 00:23:48.997 }, 00:23:48.997 "claimed": true, 00:23:48.997 "claim_type": "exclusive_write", 00:23:48.997 "zoned": false, 00:23:48.997 "supported_io_types": { 00:23:48.997 "read": true, 00:23:48.997 "write": true, 00:23:48.997 "unmap": true, 00:23:48.997 "write_zeroes": true, 00:23:48.997 "flush": true, 00:23:48.997 "reset": true, 00:23:48.997 "compare": false, 00:23:48.997 "compare_and_write": false, 00:23:48.997 "abort": true, 00:23:48.997 "nvme_admin": false, 00:23:48.997 "nvme_io": false 00:23:48.997 }, 00:23:48.997 "memory_domains": [ 00:23:48.997 { 00:23:48.997 "dma_device_id": "system", 00:23:48.997 "dma_device_type": 1 00:23:48.997 }, 00:23:48.997 { 00:23:48.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:48.997 "dma_device_type": 2 00:23:48.997 } 00:23:48.997 ], 00:23:48.997 "driver_specific": {} 00:23:48.997 } 00:23:48.997 ] 00:23:48.997 07:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:23:48.997 07:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:48.997 07:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:48.997 07:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:48.997 07:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:48.997 07:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:48.997 07:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:23:48.997 07:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:48.997 07:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:48.997 07:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:48.997 07:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:48.997 07:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.997 07:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:49.255 07:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:49.255 "name": "Existed_Raid", 00:23:49.255 "uuid": "0084df37-2f36-4e9a-8248-ee2ccaa09185", 00:23:49.255 "strip_size_kb": 64, 00:23:49.255 "state": "configuring", 00:23:49.255 "raid_level": "concat", 00:23:49.255 "superblock": true, 00:23:49.255 "num_base_bdevs": 4, 00:23:49.255 "num_base_bdevs_discovered": 1, 00:23:49.255 "num_base_bdevs_operational": 4, 00:23:49.255 "base_bdevs_list": [ 00:23:49.255 { 00:23:49.255 "name": "BaseBdev1", 00:23:49.256 "uuid": "e70bb7ef-64c6-4911-bd8c-1d329bb75462", 00:23:49.256 "is_configured": true, 00:23:49.256 "data_offset": 2048, 00:23:49.256 "data_size": 63488 00:23:49.256 }, 00:23:49.256 { 00:23:49.256 "name": "BaseBdev2", 00:23:49.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.256 "is_configured": false, 00:23:49.256 "data_offset": 0, 00:23:49.256 "data_size": 0 00:23:49.256 }, 00:23:49.256 { 00:23:49.256 "name": "BaseBdev3", 00:23:49.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.256 "is_configured": false, 00:23:49.256 "data_offset": 0, 00:23:49.256 "data_size": 0 00:23:49.256 }, 00:23:49.256 { 00:23:49.256 "name": "BaseBdev4", 00:23:49.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.256 "is_configured": false, 00:23:49.256 "data_offset": 0, 00:23:49.256 "data_size": 0 00:23:49.256 } 00:23:49.256 ] 00:23:49.256 }' 00:23:49.256 07:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:49.256 07:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:49.819 07:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:50.077 [2024-07-12 07:34:23.749293] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:50.077 [2024-07-12 07:34:23.749561] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:23:50.077 07:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:50.077 [2024-07-12 07:34:23.945407] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:50.077 [2024-07-12 07:34:23.948104] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:50.077 [2024-07-12 07:34:23.948307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
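The verify_raid_bdev_state checks above all follow the same pattern: fetch every raid bdev over the RPC socket with bdev_raid_get_bdevs and narrow the result to the bdev under test with jq. A minimal standalone sketch of that query, assuming the same rpc.py client and /var/tmp/spdk-raid.sock socket shown in the trace (the shell variable names here are illustrative):

    # Fetch all raid bdevs from the target, keep only the one under test.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "Existed_Raid")')

    # The two fields the test asserts on at this stage.
    jq -r '"state=\(.state) discovered=\(.num_base_bdevs_discovered)"' <<< "$info"

At this point in the trace the query reports state "configuring" with num_base_bdevs_discovered 1, since only BaseBdev1 has been created so far.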
00:23:50.077 [2024-07-12 07:34:23.948395] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:50.077 [2024-07-12 07:34:23.948492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:50.077 [2024-07-12 07:34:23.948560] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:50.077 [2024-07-12 07:34:23.948611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:50.336 07:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:50.336 07:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:50.336 07:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:50.336 07:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:50.336 07:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:50.336 07:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:50.336 07:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:50.336 07:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:50.336 07:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:50.336 07:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:50.336 07:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:50.336 07:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:50.336 07:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.336 07:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:50.336 07:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:50.336 "name": "Existed_Raid", 00:23:50.336 "uuid": "a8a2b11e-4f90-4786-aa94-c5a94738f506", 00:23:50.336 "strip_size_kb": 64, 00:23:50.336 "state": "configuring", 00:23:50.336 "raid_level": "concat", 00:23:50.336 "superblock": true, 00:23:50.336 "num_base_bdevs": 4, 00:23:50.336 "num_base_bdevs_discovered": 1, 00:23:50.336 "num_base_bdevs_operational": 4, 00:23:50.336 "base_bdevs_list": [ 00:23:50.336 { 00:23:50.336 "name": "BaseBdev1", 00:23:50.336 "uuid": "e70bb7ef-64c6-4911-bd8c-1d329bb75462", 00:23:50.336 "is_configured": true, 00:23:50.336 "data_offset": 2048, 00:23:50.336 "data_size": 63488 00:23:50.336 }, 00:23:50.336 { 00:23:50.336 "name": "BaseBdev2", 00:23:50.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.336 "is_configured": false, 00:23:50.336 "data_offset": 0, 00:23:50.336 "data_size": 0 00:23:50.336 }, 00:23:50.336 { 00:23:50.336 "name": "BaseBdev3", 00:23:50.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.336 "is_configured": false, 00:23:50.336 "data_offset": 0, 00:23:50.336 "data_size": 0 00:23:50.336 }, 00:23:50.336 { 00:23:50.336 "name": "BaseBdev4", 00:23:50.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.336 
"is_configured": false, 00:23:50.336 "data_offset": 0, 00:23:50.336 "data_size": 0 00:23:50.336 } 00:23:50.336 ] 00:23:50.336 }' 00:23:50.336 07:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:50.336 07:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.921 07:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:51.180 [2024-07-12 07:34:25.011456] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:51.180 BaseBdev2 00:23:51.180 07:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:51.180 07:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:23:51.180 07:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:51.180 07:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:23:51.180 07:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:51.180 07:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:51.180 07:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:51.438 07:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:51.697 [ 00:23:51.697 { 00:23:51.697 "name": "BaseBdev2", 00:23:51.697 "aliases": [ 00:23:51.697 "c6824c68-dcd0-4291-87fd-5a3cb25f9c00" 00:23:51.697 ], 00:23:51.697 "product_name": "Malloc disk", 00:23:51.697 "block_size": 512, 00:23:51.697 "num_blocks": 65536, 00:23:51.697 "uuid": "c6824c68-dcd0-4291-87fd-5a3cb25f9c00", 00:23:51.697 "assigned_rate_limits": { 00:23:51.697 "rw_ios_per_sec": 0, 00:23:51.697 "rw_mbytes_per_sec": 0, 00:23:51.697 "r_mbytes_per_sec": 0, 00:23:51.697 "w_mbytes_per_sec": 0 00:23:51.697 }, 00:23:51.697 "claimed": true, 00:23:51.697 "claim_type": "exclusive_write", 00:23:51.697 "zoned": false, 00:23:51.697 "supported_io_types": { 00:23:51.697 "read": true, 00:23:51.697 "write": true, 00:23:51.697 "unmap": true, 00:23:51.697 "write_zeroes": true, 00:23:51.697 "flush": true, 00:23:51.697 "reset": true, 00:23:51.697 "compare": false, 00:23:51.697 "compare_and_write": false, 00:23:51.697 "abort": true, 00:23:51.697 "nvme_admin": false, 00:23:51.697 "nvme_io": false 00:23:51.697 }, 00:23:51.697 "memory_domains": [ 00:23:51.697 { 00:23:51.697 "dma_device_id": "system", 00:23:51.697 "dma_device_type": 1 00:23:51.697 }, 00:23:51.697 { 00:23:51.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:51.697 "dma_device_type": 2 00:23:51.697 } 00:23:51.697 ], 00:23:51.697 "driver_specific": {} 00:23:51.697 } 00:23:51.697 ] 00:23:51.697 07:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:23:51.697 07:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:51.697 07:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:51.697 07:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:51.697 07:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:51.697 07:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:51.697 07:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:51.697 07:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:51.697 07:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:51.697 07:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:51.697 07:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:51.697 07:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:51.697 07:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:51.697 07:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.697 07:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:51.955 07:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:51.955 "name": "Existed_Raid", 00:23:51.955 "uuid": "a8a2b11e-4f90-4786-aa94-c5a94738f506", 00:23:51.955 "strip_size_kb": 64, 00:23:51.955 "state": "configuring", 00:23:51.955 "raid_level": "concat", 00:23:51.955 "superblock": true, 00:23:51.955 "num_base_bdevs": 4, 00:23:51.955 "num_base_bdevs_discovered": 2, 00:23:51.955 "num_base_bdevs_operational": 4, 00:23:51.955 "base_bdevs_list": [ 00:23:51.955 { 00:23:51.955 "name": "BaseBdev1", 00:23:51.955 "uuid": "e70bb7ef-64c6-4911-bd8c-1d329bb75462", 00:23:51.955 "is_configured": true, 00:23:51.955 "data_offset": 2048, 00:23:51.955 "data_size": 63488 00:23:51.955 }, 00:23:51.955 { 00:23:51.955 "name": "BaseBdev2", 00:23:51.955 "uuid": "c6824c68-dcd0-4291-87fd-5a3cb25f9c00", 00:23:51.955 "is_configured": true, 00:23:51.955 "data_offset": 2048, 00:23:51.955 "data_size": 63488 00:23:51.955 }, 00:23:51.955 { 00:23:51.955 "name": "BaseBdev3", 00:23:51.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.955 "is_configured": false, 00:23:51.955 "data_offset": 0, 00:23:51.955 "data_size": 0 00:23:51.955 }, 00:23:51.955 { 00:23:51.955 "name": "BaseBdev4", 00:23:51.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.955 "is_configured": false, 00:23:51.955 "data_offset": 0, 00:23:51.955 "data_size": 0 00:23:51.955 } 00:23:51.955 ] 00:23:51.955 }' 00:23:51.955 07:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:51.955 07:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:52.520 07:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:52.778 [2024-07-12 07:34:26.549475] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:52.778 BaseBdev3 00:23:52.778 07:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:52.778 07:34:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:23:52.778 07:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:52.778 07:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:23:52.778 07:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:52.778 07:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:52.778 07:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:53.036 07:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:53.293 [ 00:23:53.293 { 00:23:53.293 "name": "BaseBdev3", 00:23:53.293 "aliases": [ 00:23:53.293 "2147f562-244c-40bf-b01d-9cd6636bd0b6" 00:23:53.293 ], 00:23:53.293 "product_name": "Malloc disk", 00:23:53.293 "block_size": 512, 00:23:53.294 "num_blocks": 65536, 00:23:53.294 "uuid": "2147f562-244c-40bf-b01d-9cd6636bd0b6", 00:23:53.294 "assigned_rate_limits": { 00:23:53.294 "rw_ios_per_sec": 0, 00:23:53.294 "rw_mbytes_per_sec": 0, 00:23:53.294 "r_mbytes_per_sec": 0, 00:23:53.294 "w_mbytes_per_sec": 0 00:23:53.294 }, 00:23:53.294 "claimed": true, 00:23:53.294 "claim_type": "exclusive_write", 00:23:53.294 "zoned": false, 00:23:53.294 "supported_io_types": { 00:23:53.294 "read": true, 00:23:53.294 "write": true, 00:23:53.294 "unmap": true, 00:23:53.294 "write_zeroes": true, 00:23:53.294 "flush": true, 00:23:53.294 "reset": true, 00:23:53.294 "compare": false, 00:23:53.294 "compare_and_write": false, 00:23:53.294 "abort": true, 00:23:53.294 "nvme_admin": false, 00:23:53.294 "nvme_io": false 00:23:53.294 }, 00:23:53.294 "memory_domains": [ 00:23:53.294 { 00:23:53.294 "dma_device_id": "system", 00:23:53.294 "dma_device_type": 1 00:23:53.294 }, 00:23:53.294 { 00:23:53.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:53.294 "dma_device_type": 2 00:23:53.294 } 00:23:53.294 ], 00:23:53.294 "driver_specific": {} 00:23:53.294 } 00:23:53.294 ] 00:23:53.294 07:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:23:53.294 07:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:53.294 07:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:53.294 07:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:53.294 07:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:53.294 07:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:53.294 07:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:53.294 07:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:53.294 07:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:53.294 07:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:53.294 07:34:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:53.294 07:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:53.294 07:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:53.294 07:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.294 07:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:53.551 07:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:53.551 "name": "Existed_Raid", 00:23:53.551 "uuid": "a8a2b11e-4f90-4786-aa94-c5a94738f506", 00:23:53.551 "strip_size_kb": 64, 00:23:53.551 "state": "configuring", 00:23:53.551 "raid_level": "concat", 00:23:53.551 "superblock": true, 00:23:53.551 "num_base_bdevs": 4, 00:23:53.551 "num_base_bdevs_discovered": 3, 00:23:53.551 "num_base_bdevs_operational": 4, 00:23:53.551 "base_bdevs_list": [ 00:23:53.551 { 00:23:53.551 "name": "BaseBdev1", 00:23:53.551 "uuid": "e70bb7ef-64c6-4911-bd8c-1d329bb75462", 00:23:53.551 "is_configured": true, 00:23:53.551 "data_offset": 2048, 00:23:53.551 "data_size": 63488 00:23:53.551 }, 00:23:53.551 { 00:23:53.551 "name": "BaseBdev2", 00:23:53.551 "uuid": "c6824c68-dcd0-4291-87fd-5a3cb25f9c00", 00:23:53.551 "is_configured": true, 00:23:53.551 "data_offset": 2048, 00:23:53.552 "data_size": 63488 00:23:53.552 }, 00:23:53.552 { 00:23:53.552 "name": "BaseBdev3", 00:23:53.552 "uuid": "2147f562-244c-40bf-b01d-9cd6636bd0b6", 00:23:53.552 "is_configured": true, 00:23:53.552 "data_offset": 2048, 00:23:53.552 "data_size": 63488 00:23:53.552 }, 00:23:53.552 { 00:23:53.552 "name": "BaseBdev4", 00:23:53.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.552 "is_configured": false, 00:23:53.552 "data_offset": 0, 00:23:53.552 "data_size": 0 00:23:53.552 } 00:23:53.552 ] 00:23:53.552 }' 00:23:53.552 07:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:53.552 07:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:54.119 07:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:54.377 [2024-07-12 07:34:28.027342] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:54.377 [2024-07-12 07:34:28.027897] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:23:54.377 [2024-07-12 07:34:28.028019] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:54.377 [2024-07-12 07:34:28.028206] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:23:54.377 [2024-07-12 07:34:28.028667] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:23:54.377 [2024-07-12 07:34:28.028815] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:23:54.377 [2024-07-12 07:34:28.029057] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:54.377 BaseBdev4 00:23:54.377 07:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:23:54.377 07:34:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:23:54.377 07:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:54.377 07:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:23:54.377 07:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:54.377 07:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:54.377 07:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:54.635 07:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:54.893 [ 00:23:54.893 { 00:23:54.893 "name": "BaseBdev4", 00:23:54.893 "aliases": [ 00:23:54.893 "ed88ade7-0b80-49b7-80fc-7fb51e795da4" 00:23:54.893 ], 00:23:54.893 "product_name": "Malloc disk", 00:23:54.893 "block_size": 512, 00:23:54.893 "num_blocks": 65536, 00:23:54.893 "uuid": "ed88ade7-0b80-49b7-80fc-7fb51e795da4", 00:23:54.893 "assigned_rate_limits": { 00:23:54.893 "rw_ios_per_sec": 0, 00:23:54.893 "rw_mbytes_per_sec": 0, 00:23:54.893 "r_mbytes_per_sec": 0, 00:23:54.893 "w_mbytes_per_sec": 0 00:23:54.893 }, 00:23:54.893 "claimed": true, 00:23:54.893 "claim_type": "exclusive_write", 00:23:54.893 "zoned": false, 00:23:54.893 "supported_io_types": { 00:23:54.893 "read": true, 00:23:54.893 "write": true, 00:23:54.893 "unmap": true, 00:23:54.893 "write_zeroes": true, 00:23:54.893 "flush": true, 00:23:54.893 "reset": true, 00:23:54.893 "compare": false, 00:23:54.893 "compare_and_write": false, 00:23:54.893 "abort": true, 00:23:54.893 "nvme_admin": false, 00:23:54.893 "nvme_io": false 00:23:54.893 }, 00:23:54.893 "memory_domains": [ 00:23:54.893 { 00:23:54.893 "dma_device_id": "system", 00:23:54.893 "dma_device_type": 1 00:23:54.893 }, 00:23:54.893 { 00:23:54.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:54.893 "dma_device_type": 2 00:23:54.893 } 00:23:54.893 ], 00:23:54.893 "driver_specific": {} 00:23:54.893 } 00:23:54.893 ] 00:23:54.893 07:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:23:54.893 07:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:54.893 07:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:54.893 07:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:23:54.893 07:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:54.893 07:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:54.893 07:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:54.893 07:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:54.893 07:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:54.893 07:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:54.893 07:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
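The configure log above is where the array first comes online: with BaseBdev4 claimed, the raid module registers the io device and reports blockcnt 253952 with blocklen 512. A sketch of the full construction sequence, reusing the rpc/sock variables from the earlier sketch and the exact commands from the trace:

    # Four 32 MiB malloc bdevs with 512-byte blocks (65536 blocks each),
    # concatenated with a superblock (-s) and a 64 KiB strip size (-z 64).
    for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "$b"
    done
    "$rpc" -s "$sock" bdev_raid_create -z 64 -s -r concat \
           -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

The capacity reconciles with the log: the superblock accounts for the data_offset of 2048 blocks reported for each member, leaving data_size 65536 - 2048 = 63488 blocks, and concat sums its members, so the volume exposes 4 * 63488 = 253952 blocks, matching the blockcnt logged at configuration time.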
00:23:54.893 07:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:54.893 07:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:54.893 07:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.893 07:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:55.151 07:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:55.151 "name": "Existed_Raid", 00:23:55.151 "uuid": "a8a2b11e-4f90-4786-aa94-c5a94738f506", 00:23:55.151 "strip_size_kb": 64, 00:23:55.151 "state": "online", 00:23:55.151 "raid_level": "concat", 00:23:55.151 "superblock": true, 00:23:55.151 "num_base_bdevs": 4, 00:23:55.151 "num_base_bdevs_discovered": 4, 00:23:55.151 "num_base_bdevs_operational": 4, 00:23:55.151 "base_bdevs_list": [ 00:23:55.151 { 00:23:55.151 "name": "BaseBdev1", 00:23:55.151 "uuid": "e70bb7ef-64c6-4911-bd8c-1d329bb75462", 00:23:55.151 "is_configured": true, 00:23:55.151 "data_offset": 2048, 00:23:55.151 "data_size": 63488 00:23:55.151 }, 00:23:55.151 { 00:23:55.151 "name": "BaseBdev2", 00:23:55.151 "uuid": "c6824c68-dcd0-4291-87fd-5a3cb25f9c00", 00:23:55.151 "is_configured": true, 00:23:55.151 "data_offset": 2048, 00:23:55.151 "data_size": 63488 00:23:55.151 }, 00:23:55.151 { 00:23:55.151 "name": "BaseBdev3", 00:23:55.151 "uuid": "2147f562-244c-40bf-b01d-9cd6636bd0b6", 00:23:55.151 "is_configured": true, 00:23:55.151 "data_offset": 2048, 00:23:55.151 "data_size": 63488 00:23:55.151 }, 00:23:55.151 { 00:23:55.151 "name": "BaseBdev4", 00:23:55.151 "uuid": "ed88ade7-0b80-49b7-80fc-7fb51e795da4", 00:23:55.151 "is_configured": true, 00:23:55.151 "data_offset": 2048, 00:23:55.151 "data_size": 63488 00:23:55.151 } 00:23:55.151 ] 00:23:55.151 }' 00:23:55.151 07:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:55.151 07:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:55.717 07:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:55.717 07:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:55.717 07:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:55.717 07:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:55.717 07:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:55.717 07:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:23:55.717 07:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:55.717 07:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:55.975 [2024-07-12 07:34:29.632009] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:55.975 07:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:55.975 "name": "Existed_Raid", 00:23:55.975 "aliases": [ 00:23:55.975 "a8a2b11e-4f90-4786-aa94-c5a94738f506" 00:23:55.975 ], 00:23:55.975 
"product_name": "Raid Volume", 00:23:55.975 "block_size": 512, 00:23:55.975 "num_blocks": 253952, 00:23:55.975 "uuid": "a8a2b11e-4f90-4786-aa94-c5a94738f506", 00:23:55.975 "assigned_rate_limits": { 00:23:55.975 "rw_ios_per_sec": 0, 00:23:55.975 "rw_mbytes_per_sec": 0, 00:23:55.975 "r_mbytes_per_sec": 0, 00:23:55.975 "w_mbytes_per_sec": 0 00:23:55.975 }, 00:23:55.975 "claimed": false, 00:23:55.975 "zoned": false, 00:23:55.975 "supported_io_types": { 00:23:55.975 "read": true, 00:23:55.975 "write": true, 00:23:55.975 "unmap": true, 00:23:55.975 "write_zeroes": true, 00:23:55.975 "flush": true, 00:23:55.975 "reset": true, 00:23:55.975 "compare": false, 00:23:55.975 "compare_and_write": false, 00:23:55.975 "abort": false, 00:23:55.975 "nvme_admin": false, 00:23:55.975 "nvme_io": false 00:23:55.975 }, 00:23:55.975 "memory_domains": [ 00:23:55.975 { 00:23:55.975 "dma_device_id": "system", 00:23:55.975 "dma_device_type": 1 00:23:55.975 }, 00:23:55.975 { 00:23:55.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:55.975 "dma_device_type": 2 00:23:55.975 }, 00:23:55.975 { 00:23:55.975 "dma_device_id": "system", 00:23:55.975 "dma_device_type": 1 00:23:55.975 }, 00:23:55.975 { 00:23:55.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:55.975 "dma_device_type": 2 00:23:55.975 }, 00:23:55.975 { 00:23:55.975 "dma_device_id": "system", 00:23:55.975 "dma_device_type": 1 00:23:55.975 }, 00:23:55.975 { 00:23:55.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:55.975 "dma_device_type": 2 00:23:55.975 }, 00:23:55.975 { 00:23:55.975 "dma_device_id": "system", 00:23:55.975 "dma_device_type": 1 00:23:55.975 }, 00:23:55.975 { 00:23:55.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:55.975 "dma_device_type": 2 00:23:55.975 } 00:23:55.975 ], 00:23:55.975 "driver_specific": { 00:23:55.975 "raid": { 00:23:55.975 "uuid": "a8a2b11e-4f90-4786-aa94-c5a94738f506", 00:23:55.975 "strip_size_kb": 64, 00:23:55.975 "state": "online", 00:23:55.975 "raid_level": "concat", 00:23:55.975 "superblock": true, 00:23:55.975 "num_base_bdevs": 4, 00:23:55.975 "num_base_bdevs_discovered": 4, 00:23:55.975 "num_base_bdevs_operational": 4, 00:23:55.975 "base_bdevs_list": [ 00:23:55.975 { 00:23:55.975 "name": "BaseBdev1", 00:23:55.975 "uuid": "e70bb7ef-64c6-4911-bd8c-1d329bb75462", 00:23:55.975 "is_configured": true, 00:23:55.975 "data_offset": 2048, 00:23:55.975 "data_size": 63488 00:23:55.975 }, 00:23:55.975 { 00:23:55.975 "name": "BaseBdev2", 00:23:55.975 "uuid": "c6824c68-dcd0-4291-87fd-5a3cb25f9c00", 00:23:55.975 "is_configured": true, 00:23:55.975 "data_offset": 2048, 00:23:55.975 "data_size": 63488 00:23:55.975 }, 00:23:55.975 { 00:23:55.975 "name": "BaseBdev3", 00:23:55.975 "uuid": "2147f562-244c-40bf-b01d-9cd6636bd0b6", 00:23:55.975 "is_configured": true, 00:23:55.975 "data_offset": 2048, 00:23:55.975 "data_size": 63488 00:23:55.975 }, 00:23:55.975 { 00:23:55.975 "name": "BaseBdev4", 00:23:55.975 "uuid": "ed88ade7-0b80-49b7-80fc-7fb51e795da4", 00:23:55.975 "is_configured": true, 00:23:55.975 "data_offset": 2048, 00:23:55.975 "data_size": 63488 00:23:55.975 } 00:23:55.975 ] 00:23:55.975 } 00:23:55.975 } 00:23:55.975 }' 00:23:55.975 07:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:55.975 07:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:55.975 BaseBdev2 00:23:55.975 BaseBdev3 00:23:55.975 BaseBdev4' 00:23:55.975 07:34:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:55.975 07:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:55.976 07:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:56.233 07:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:56.233 "name": "BaseBdev1", 00:23:56.233 "aliases": [ 00:23:56.233 "e70bb7ef-64c6-4911-bd8c-1d329bb75462" 00:23:56.233 ], 00:23:56.233 "product_name": "Malloc disk", 00:23:56.233 "block_size": 512, 00:23:56.233 "num_blocks": 65536, 00:23:56.233 "uuid": "e70bb7ef-64c6-4911-bd8c-1d329bb75462", 00:23:56.233 "assigned_rate_limits": { 00:23:56.233 "rw_ios_per_sec": 0, 00:23:56.233 "rw_mbytes_per_sec": 0, 00:23:56.233 "r_mbytes_per_sec": 0, 00:23:56.233 "w_mbytes_per_sec": 0 00:23:56.233 }, 00:23:56.233 "claimed": true, 00:23:56.233 "claim_type": "exclusive_write", 00:23:56.233 "zoned": false, 00:23:56.233 "supported_io_types": { 00:23:56.233 "read": true, 00:23:56.233 "write": true, 00:23:56.233 "unmap": true, 00:23:56.233 "write_zeroes": true, 00:23:56.233 "flush": true, 00:23:56.233 "reset": true, 00:23:56.233 "compare": false, 00:23:56.233 "compare_and_write": false, 00:23:56.233 "abort": true, 00:23:56.233 "nvme_admin": false, 00:23:56.233 "nvme_io": false 00:23:56.233 }, 00:23:56.233 "memory_domains": [ 00:23:56.233 { 00:23:56.233 "dma_device_id": "system", 00:23:56.233 "dma_device_type": 1 00:23:56.233 }, 00:23:56.233 { 00:23:56.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.233 "dma_device_type": 2 00:23:56.233 } 00:23:56.233 ], 00:23:56.233 "driver_specific": {} 00:23:56.233 }' 00:23:56.233 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:56.233 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:56.233 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:56.233 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:56.491 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:56.491 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:56.491 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:56.491 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:56.491 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:56.491 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:56.491 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:56.760 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:56.760 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:56.760 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:56.760 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:57.020 07:34:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:57.020 "name": "BaseBdev2", 00:23:57.020 "aliases": [ 00:23:57.020 "c6824c68-dcd0-4291-87fd-5a3cb25f9c00" 00:23:57.020 ], 00:23:57.020 "product_name": "Malloc disk", 00:23:57.020 "block_size": 512, 00:23:57.020 "num_blocks": 65536, 00:23:57.020 "uuid": "c6824c68-dcd0-4291-87fd-5a3cb25f9c00", 00:23:57.020 "assigned_rate_limits": { 00:23:57.020 "rw_ios_per_sec": 0, 00:23:57.020 "rw_mbytes_per_sec": 0, 00:23:57.020 "r_mbytes_per_sec": 0, 00:23:57.020 "w_mbytes_per_sec": 0 00:23:57.020 }, 00:23:57.020 "claimed": true, 00:23:57.020 "claim_type": "exclusive_write", 00:23:57.020 "zoned": false, 00:23:57.020 "supported_io_types": { 00:23:57.020 "read": true, 00:23:57.020 "write": true, 00:23:57.020 "unmap": true, 00:23:57.020 "write_zeroes": true, 00:23:57.020 "flush": true, 00:23:57.020 "reset": true, 00:23:57.020 "compare": false, 00:23:57.020 "compare_and_write": false, 00:23:57.020 "abort": true, 00:23:57.020 "nvme_admin": false, 00:23:57.020 "nvme_io": false 00:23:57.020 }, 00:23:57.020 "memory_domains": [ 00:23:57.020 { 00:23:57.020 "dma_device_id": "system", 00:23:57.020 "dma_device_type": 1 00:23:57.020 }, 00:23:57.020 { 00:23:57.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.020 "dma_device_type": 2 00:23:57.020 } 00:23:57.020 ], 00:23:57.020 "driver_specific": {} 00:23:57.020 }' 00:23:57.020 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.020 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.020 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:57.020 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.020 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.020 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:57.021 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.021 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.277 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:57.277 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.277 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.277 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:57.277 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:57.277 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:57.277 07:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:57.535 07:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:57.535 "name": "BaseBdev3", 00:23:57.535 "aliases": [ 00:23:57.535 "2147f562-244c-40bf-b01d-9cd6636bd0b6" 00:23:57.535 ], 00:23:57.535 "product_name": "Malloc disk", 00:23:57.535 "block_size": 512, 00:23:57.535 "num_blocks": 65536, 00:23:57.535 "uuid": "2147f562-244c-40bf-b01d-9cd6636bd0b6", 00:23:57.535 "assigned_rate_limits": { 00:23:57.535 "rw_ios_per_sec": 0, 00:23:57.535 "rw_mbytes_per_sec": 0, 
00:23:57.535 "r_mbytes_per_sec": 0, 00:23:57.535 "w_mbytes_per_sec": 0 00:23:57.535 }, 00:23:57.535 "claimed": true, 00:23:57.535 "claim_type": "exclusive_write", 00:23:57.535 "zoned": false, 00:23:57.535 "supported_io_types": { 00:23:57.535 "read": true, 00:23:57.535 "write": true, 00:23:57.535 "unmap": true, 00:23:57.535 "write_zeroes": true, 00:23:57.535 "flush": true, 00:23:57.535 "reset": true, 00:23:57.535 "compare": false, 00:23:57.535 "compare_and_write": false, 00:23:57.535 "abort": true, 00:23:57.535 "nvme_admin": false, 00:23:57.535 "nvme_io": false 00:23:57.535 }, 00:23:57.535 "memory_domains": [ 00:23:57.535 { 00:23:57.535 "dma_device_id": "system", 00:23:57.535 "dma_device_type": 1 00:23:57.535 }, 00:23:57.535 { 00:23:57.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.535 "dma_device_type": 2 00:23:57.535 } 00:23:57.535 ], 00:23:57.535 "driver_specific": {} 00:23:57.535 }' 00:23:57.535 07:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.535 07:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.535 07:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:57.535 07:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.535 07:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.793 07:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:57.793 07:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.793 07:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.793 07:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:57.793 07:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.793 07:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.793 07:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:57.793 07:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:57.793 07:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:57.793 07:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:58.051 07:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:58.051 "name": "BaseBdev4", 00:23:58.051 "aliases": [ 00:23:58.051 "ed88ade7-0b80-49b7-80fc-7fb51e795da4" 00:23:58.051 ], 00:23:58.051 "product_name": "Malloc disk", 00:23:58.051 "block_size": 512, 00:23:58.051 "num_blocks": 65536, 00:23:58.051 "uuid": "ed88ade7-0b80-49b7-80fc-7fb51e795da4", 00:23:58.051 "assigned_rate_limits": { 00:23:58.051 "rw_ios_per_sec": 0, 00:23:58.051 "rw_mbytes_per_sec": 0, 00:23:58.051 "r_mbytes_per_sec": 0, 00:23:58.051 "w_mbytes_per_sec": 0 00:23:58.051 }, 00:23:58.051 "claimed": true, 00:23:58.051 "claim_type": "exclusive_write", 00:23:58.051 "zoned": false, 00:23:58.051 "supported_io_types": { 00:23:58.051 "read": true, 00:23:58.051 "write": true, 00:23:58.051 "unmap": true, 00:23:58.051 "write_zeroes": true, 00:23:58.051 "flush": true, 00:23:58.051 "reset": true, 00:23:58.051 "compare": false, 00:23:58.051 
"compare_and_write": false, 00:23:58.051 "abort": true, 00:23:58.051 "nvme_admin": false, 00:23:58.051 "nvme_io": false 00:23:58.051 }, 00:23:58.051 "memory_domains": [ 00:23:58.051 { 00:23:58.051 "dma_device_id": "system", 00:23:58.051 "dma_device_type": 1 00:23:58.051 }, 00:23:58.051 { 00:23:58.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:58.051 "dma_device_type": 2 00:23:58.051 } 00:23:58.051 ], 00:23:58.051 "driver_specific": {} 00:23:58.051 }' 00:23:58.051 07:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:58.309 07:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:58.309 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:58.309 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:58.309 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:58.309 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:58.309 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:58.309 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:58.309 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:58.309 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:58.567 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:58.567 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:58.567 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:58.825 [2024-07-12 07:34:32.456450] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:58.825 [2024-07-12 07:34:32.456731] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:58.825 [2024-07-12 07:34:32.456974] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:58.825 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:58.825 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:23:58.825 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:58.825 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:23:58.825 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:23:58.825 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:23:58.825 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:58.825 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:23:58.825 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:58.825 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:58.825 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:23:58.825 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:58.825 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:58.825 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:58.825 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:58.825 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.825 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:59.099 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:59.099 "name": "Existed_Raid", 00:23:59.099 "uuid": "a8a2b11e-4f90-4786-aa94-c5a94738f506", 00:23:59.099 "strip_size_kb": 64, 00:23:59.099 "state": "offline", 00:23:59.099 "raid_level": "concat", 00:23:59.099 "superblock": true, 00:23:59.099 "num_base_bdevs": 4, 00:23:59.099 "num_base_bdevs_discovered": 3, 00:23:59.099 "num_base_bdevs_operational": 3, 00:23:59.099 "base_bdevs_list": [ 00:23:59.099 { 00:23:59.099 "name": null, 00:23:59.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.099 "is_configured": false, 00:23:59.099 "data_offset": 2048, 00:23:59.099 "data_size": 63488 00:23:59.099 }, 00:23:59.099 { 00:23:59.099 "name": "BaseBdev2", 00:23:59.099 "uuid": "c6824c68-dcd0-4291-87fd-5a3cb25f9c00", 00:23:59.099 "is_configured": true, 00:23:59.099 "data_offset": 2048, 00:23:59.099 "data_size": 63488 00:23:59.099 }, 00:23:59.099 { 00:23:59.099 "name": "BaseBdev3", 00:23:59.099 "uuid": "2147f562-244c-40bf-b01d-9cd6636bd0b6", 00:23:59.099 "is_configured": true, 00:23:59.099 "data_offset": 2048, 00:23:59.099 "data_size": 63488 00:23:59.099 }, 00:23:59.099 { 00:23:59.099 "name": "BaseBdev4", 00:23:59.099 "uuid": "ed88ade7-0b80-49b7-80fc-7fb51e795da4", 00:23:59.099 "is_configured": true, 00:23:59.099 "data_offset": 2048, 00:23:59.099 "data_size": 63488 00:23:59.099 } 00:23:59.099 ] 00:23:59.099 }' 00:23:59.099 07:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:59.099 07:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:59.665 07:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:59.665 07:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:59.665 07:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:59.665 07:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.924 07:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:59.924 07:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:59.924 07:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:00.181 [2024-07-12 07:34:33.966768] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:00.181 07:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # 
(( i++ )) 00:24:00.181 07:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:00.181 07:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.181 07:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:00.439 07:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:00.439 07:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:00.439 07:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:00.698 [2024-07-12 07:34:34.436280] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:00.698 07:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:00.698 07:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:00.698 07:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.698 07:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:00.956 07:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:00.956 07:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:00.956 07:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:01.215 [2024-07-12 07:34:34.897673] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:01.215 [2024-07-12 07:34:34.897989] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:24:01.215 07:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:01.215 07:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:01.215 07:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.215 07:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:01.473 07:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:01.473 07:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:01.473 07:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:24:01.473 07:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:24:01.473 07:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:01.473 07:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:01.731 BaseBdev2 00:24:01.731 07:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 
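The waitforbdev call above, whose body traces out below and repeats for each re-created base bdev, wraps two RPCs: it first lets module examination finish, then asks bdev_get_bdevs to block until the named bdev appears or the timeout expires. A condensed sketch of the helper as it appears in the trace (the 2000 ms default matches the -t value logged):

    waitforbdev() {
        local name=$1 timeout_ms=${2:-2000}
        "$rpc" -s "$sock" bdev_wait_for_examine
        # -t makes bdev_get_bdevs wait up to timeout_ms for the bdev to appear.
        "$rpc" -s "$sock" bdev_get_bdevs -b "$name" -t "$timeout_ms" >/dev/null
    }
    waitforbdev BaseBdev2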
00:24:01.731 07:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:24:01.731 07:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:01.731 07:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:01.731 07:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:01.731 07:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:01.732 07:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:01.990 07:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:02.249 [ 00:24:02.249 { 00:24:02.249 "name": "BaseBdev2", 00:24:02.249 "aliases": [ 00:24:02.249 "c19b06a9-ebad-44f3-a02c-3dd4dc9739e4" 00:24:02.249 ], 00:24:02.249 "product_name": "Malloc disk", 00:24:02.249 "block_size": 512, 00:24:02.249 "num_blocks": 65536, 00:24:02.249 "uuid": "c19b06a9-ebad-44f3-a02c-3dd4dc9739e4", 00:24:02.249 "assigned_rate_limits": { 00:24:02.249 "rw_ios_per_sec": 0, 00:24:02.249 "rw_mbytes_per_sec": 0, 00:24:02.249 "r_mbytes_per_sec": 0, 00:24:02.249 "w_mbytes_per_sec": 0 00:24:02.249 }, 00:24:02.249 "claimed": false, 00:24:02.249 "zoned": false, 00:24:02.249 "supported_io_types": { 00:24:02.249 "read": true, 00:24:02.249 "write": true, 00:24:02.249 "unmap": true, 00:24:02.249 "write_zeroes": true, 00:24:02.249 "flush": true, 00:24:02.249 "reset": true, 00:24:02.249 "compare": false, 00:24:02.249 "compare_and_write": false, 00:24:02.249 "abort": true, 00:24:02.249 "nvme_admin": false, 00:24:02.249 "nvme_io": false 00:24:02.249 }, 00:24:02.249 "memory_domains": [ 00:24:02.249 { 00:24:02.249 "dma_device_id": "system", 00:24:02.249 "dma_device_type": 1 00:24:02.249 }, 00:24:02.249 { 00:24:02.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:02.249 "dma_device_type": 2 00:24:02.249 } 00:24:02.249 ], 00:24:02.249 "driver_specific": {} 00:24:02.249 } 00:24:02.249 ] 00:24:02.249 07:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:02.249 07:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:02.249 07:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:02.249 07:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:02.507 BaseBdev3 00:24:02.507 07:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:24:02.507 07:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:24:02.507 07:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:02.507 07:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:02.507 07:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:02.507 07:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:02.507 07:34:36 
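Each re-created malloc bdev is gated on waitforbdev before the test touches it; the trace above shows its body for BaseBdev2: defaulting the timeout to 2000 ms, draining the examine path, then fetching the bdev with that timeout. A minimal sketch of the pattern visible here (the real helper in autotest_common.sh does more, including failure reporting; RPC is the same local shorthand as above):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}  # ms; the default taken when no timeout is passed
    # Let pending examine callbacks finish before looking for the bdev.
    $RPC bdev_wait_for_examine
    # Then block until the bdev is reported (or the timeout expires).
    $RPC bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
}
waitforbdev BaseBdev2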
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:02.507 07:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:02.765 [ 00:24:02.765 { 00:24:02.765 "name": "BaseBdev3", 00:24:02.765 "aliases": [ 00:24:02.765 "5fffbf93-8083-457b-bb00-916b06616a81" 00:24:02.765 ], 00:24:02.765 "product_name": "Malloc disk", 00:24:02.765 "block_size": 512, 00:24:02.765 "num_blocks": 65536, 00:24:02.765 "uuid": "5fffbf93-8083-457b-bb00-916b06616a81", 00:24:02.765 "assigned_rate_limits": { 00:24:02.765 "rw_ios_per_sec": 0, 00:24:02.765 "rw_mbytes_per_sec": 0, 00:24:02.765 "r_mbytes_per_sec": 0, 00:24:02.765 "w_mbytes_per_sec": 0 00:24:02.765 }, 00:24:02.765 "claimed": false, 00:24:02.765 "zoned": false, 00:24:02.765 "supported_io_types": { 00:24:02.765 "read": true, 00:24:02.765 "write": true, 00:24:02.765 "unmap": true, 00:24:02.765 "write_zeroes": true, 00:24:02.765 "flush": true, 00:24:02.765 "reset": true, 00:24:02.765 "compare": false, 00:24:02.765 "compare_and_write": false, 00:24:02.765 "abort": true, 00:24:02.765 "nvme_admin": false, 00:24:02.765 "nvme_io": false 00:24:02.765 }, 00:24:02.765 "memory_domains": [ 00:24:02.765 { 00:24:02.765 "dma_device_id": "system", 00:24:02.765 "dma_device_type": 1 00:24:02.765 }, 00:24:02.765 { 00:24:02.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:02.765 "dma_device_type": 2 00:24:02.765 } 00:24:02.765 ], 00:24:02.765 "driver_specific": {} 00:24:02.765 } 00:24:02.765 ] 00:24:02.765 07:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:02.765 07:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:02.765 07:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:02.765 07:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:03.023 BaseBdev4 00:24:03.023 07:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:24:03.023 07:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:24:03.023 07:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:03.023 07:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:03.023 07:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:03.023 07:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:03.023 07:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:03.281 07:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:03.539 [ 00:24:03.539 { 00:24:03.539 "name": "BaseBdev4", 00:24:03.539 "aliases": [ 00:24:03.539 "7ac152e2-fbf0-41b9-8a08-3454ced90335" 00:24:03.539 ], 00:24:03.539 "product_name": "Malloc disk", 00:24:03.539 "block_size": 512, 
00:24:03.539 "num_blocks": 65536, 00:24:03.539 "uuid": "7ac152e2-fbf0-41b9-8a08-3454ced90335", 00:24:03.539 "assigned_rate_limits": { 00:24:03.539 "rw_ios_per_sec": 0, 00:24:03.539 "rw_mbytes_per_sec": 0, 00:24:03.539 "r_mbytes_per_sec": 0, 00:24:03.539 "w_mbytes_per_sec": 0 00:24:03.539 }, 00:24:03.539 "claimed": false, 00:24:03.539 "zoned": false, 00:24:03.539 "supported_io_types": { 00:24:03.539 "read": true, 00:24:03.539 "write": true, 00:24:03.539 "unmap": true, 00:24:03.539 "write_zeroes": true, 00:24:03.539 "flush": true, 00:24:03.539 "reset": true, 00:24:03.539 "compare": false, 00:24:03.539 "compare_and_write": false, 00:24:03.539 "abort": true, 00:24:03.539 "nvme_admin": false, 00:24:03.539 "nvme_io": false 00:24:03.539 }, 00:24:03.539 "memory_domains": [ 00:24:03.539 { 00:24:03.539 "dma_device_id": "system", 00:24:03.539 "dma_device_type": 1 00:24:03.539 }, 00:24:03.539 { 00:24:03.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:03.539 "dma_device_type": 2 00:24:03.539 } 00:24:03.539 ], 00:24:03.539 "driver_specific": {} 00:24:03.539 } 00:24:03.539 ] 00:24:03.539 07:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:03.539 07:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:03.539 07:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:03.539 07:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:03.797 [2024-07-12 07:34:37.499144] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:03.797 [2024-07-12 07:34:37.499509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:03.797 [2024-07-12 07:34:37.499633] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:03.797 [2024-07-12 07:34:37.502118] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:03.797 [2024-07-12 07:34:37.502282] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:03.797 07:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:03.797 07:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:03.797 07:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:03.797 07:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:03.797 07:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:03.797 07:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:03.797 07:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:03.797 07:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:03.797 07:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:03.797 07:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:03.797 07:34:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.797 07:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:04.056 07:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:04.056 "name": "Existed_Raid", 00:24:04.056 "uuid": "9cc1862e-1671-4c4b-84e5-a74b751bbce2", 00:24:04.056 "strip_size_kb": 64, 00:24:04.056 "state": "configuring", 00:24:04.056 "raid_level": "concat", 00:24:04.056 "superblock": true, 00:24:04.056 "num_base_bdevs": 4, 00:24:04.056 "num_base_bdevs_discovered": 3, 00:24:04.056 "num_base_bdevs_operational": 4, 00:24:04.056 "base_bdevs_list": [ 00:24:04.056 { 00:24:04.056 "name": "BaseBdev1", 00:24:04.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.056 "is_configured": false, 00:24:04.056 "data_offset": 0, 00:24:04.056 "data_size": 0 00:24:04.056 }, 00:24:04.056 { 00:24:04.056 "name": "BaseBdev2", 00:24:04.056 "uuid": "c19b06a9-ebad-44f3-a02c-3dd4dc9739e4", 00:24:04.056 "is_configured": true, 00:24:04.056 "data_offset": 2048, 00:24:04.056 "data_size": 63488 00:24:04.056 }, 00:24:04.056 { 00:24:04.056 "name": "BaseBdev3", 00:24:04.056 "uuid": "5fffbf93-8083-457b-bb00-916b06616a81", 00:24:04.056 "is_configured": true, 00:24:04.056 "data_offset": 2048, 00:24:04.056 "data_size": 63488 00:24:04.056 }, 00:24:04.056 { 00:24:04.056 "name": "BaseBdev4", 00:24:04.056 "uuid": "7ac152e2-fbf0-41b9-8a08-3454ced90335", 00:24:04.056 "is_configured": true, 00:24:04.056 "data_offset": 2048, 00:24:04.056 "data_size": 63488 00:24:04.056 } 00:24:04.056 ] 00:24:04.056 }' 00:24:04.056 07:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:04.056 07:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:04.623 07:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:04.882 [2024-07-12 07:34:38.614159] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:04.882 07:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:04.882 07:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:04.882 07:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:04.882 07:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:04.882 07:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:04.882 07:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:04.882 07:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:04.882 07:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:04.882 07:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:04.882 07:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:04.882 07:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.882 07:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:05.140 07:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:05.140 "name": "Existed_Raid", 00:24:05.140 "uuid": "9cc1862e-1671-4c4b-84e5-a74b751bbce2", 00:24:05.140 "strip_size_kb": 64, 00:24:05.140 "state": "configuring", 00:24:05.140 "raid_level": "concat", 00:24:05.140 "superblock": true, 00:24:05.140 "num_base_bdevs": 4, 00:24:05.140 "num_base_bdevs_discovered": 2, 00:24:05.140 "num_base_bdevs_operational": 4, 00:24:05.140 "base_bdevs_list": [ 00:24:05.140 { 00:24:05.140 "name": "BaseBdev1", 00:24:05.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.140 "is_configured": false, 00:24:05.140 "data_offset": 0, 00:24:05.140 "data_size": 0 00:24:05.140 }, 00:24:05.140 { 00:24:05.140 "name": null, 00:24:05.140 "uuid": "c19b06a9-ebad-44f3-a02c-3dd4dc9739e4", 00:24:05.141 "is_configured": false, 00:24:05.141 "data_offset": 2048, 00:24:05.141 "data_size": 63488 00:24:05.141 }, 00:24:05.141 { 00:24:05.141 "name": "BaseBdev3", 00:24:05.141 "uuid": "5fffbf93-8083-457b-bb00-916b06616a81", 00:24:05.141 "is_configured": true, 00:24:05.141 "data_offset": 2048, 00:24:05.141 "data_size": 63488 00:24:05.141 }, 00:24:05.141 { 00:24:05.141 "name": "BaseBdev4", 00:24:05.141 "uuid": "7ac152e2-fbf0-41b9-8a08-3454ced90335", 00:24:05.141 "is_configured": true, 00:24:05.141 "data_offset": 2048, 00:24:05.141 "data_size": 63488 00:24:05.141 } 00:24:05.141 ] 00:24:05.141 }' 00:24:05.141 07:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:05.141 07:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:05.706 07:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.706 07:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:05.963 07:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:05.963 07:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:06.221 [2024-07-12 07:34:39.963860] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:06.221 BaseBdev1 00:24:06.222 07:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:06.222 07:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:24:06.222 07:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:06.222 07:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:06.222 07:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:06.222 07:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:06.222 07:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:06.480 07:34:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:06.739 [ 00:24:06.739 { 00:24:06.739 "name": "BaseBdev1", 00:24:06.739 "aliases": [ 00:24:06.739 "9192fd89-b687-42aa-bfa6-ba289195fa88" 00:24:06.739 ], 00:24:06.739 "product_name": "Malloc disk", 00:24:06.739 "block_size": 512, 00:24:06.739 "num_blocks": 65536, 00:24:06.739 "uuid": "9192fd89-b687-42aa-bfa6-ba289195fa88", 00:24:06.739 "assigned_rate_limits": { 00:24:06.739 "rw_ios_per_sec": 0, 00:24:06.739 "rw_mbytes_per_sec": 0, 00:24:06.739 "r_mbytes_per_sec": 0, 00:24:06.739 "w_mbytes_per_sec": 0 00:24:06.739 }, 00:24:06.739 "claimed": true, 00:24:06.739 "claim_type": "exclusive_write", 00:24:06.739 "zoned": false, 00:24:06.739 "supported_io_types": { 00:24:06.739 "read": true, 00:24:06.739 "write": true, 00:24:06.739 "unmap": true, 00:24:06.739 "write_zeroes": true, 00:24:06.739 "flush": true, 00:24:06.739 "reset": true, 00:24:06.739 "compare": false, 00:24:06.739 "compare_and_write": false, 00:24:06.739 "abort": true, 00:24:06.739 "nvme_admin": false, 00:24:06.739 "nvme_io": false 00:24:06.739 }, 00:24:06.739 "memory_domains": [ 00:24:06.739 { 00:24:06.739 "dma_device_id": "system", 00:24:06.739 "dma_device_type": 1 00:24:06.739 }, 00:24:06.739 { 00:24:06.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.739 "dma_device_type": 2 00:24:06.739 } 00:24:06.739 ], 00:24:06.739 "driver_specific": {} 00:24:06.739 } 00:24:06.739 ] 00:24:06.739 07:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:24:06.739 07:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:06.739 07:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:06.739 07:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:06.739 07:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:06.739 07:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:06.739 07:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:06.739 07:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:06.739 07:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:06.739 07:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:06.739 07:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:06.739 07:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.739 07:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:06.998 07:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:06.998 "name": "Existed_Raid", 00:24:06.998 "uuid": "9cc1862e-1671-4c4b-84e5-a74b751bbce2", 00:24:06.998 "strip_size_kb": 64, 00:24:06.998 "state": "configuring", 00:24:06.998 "raid_level": "concat", 00:24:06.998 "superblock": true, 00:24:06.998 "num_base_bdevs": 4, 00:24:06.998 "num_base_bdevs_discovered": 3, 
00:24:06.998 "num_base_bdevs_operational": 4, 00:24:06.998 "base_bdevs_list": [ 00:24:06.998 { 00:24:06.998 "name": "BaseBdev1", 00:24:06.998 "uuid": "9192fd89-b687-42aa-bfa6-ba289195fa88", 00:24:06.998 "is_configured": true, 00:24:06.998 "data_offset": 2048, 00:24:06.998 "data_size": 63488 00:24:06.998 }, 00:24:06.998 { 00:24:06.998 "name": null, 00:24:06.998 "uuid": "c19b06a9-ebad-44f3-a02c-3dd4dc9739e4", 00:24:06.998 "is_configured": false, 00:24:06.998 "data_offset": 2048, 00:24:06.998 "data_size": 63488 00:24:06.998 }, 00:24:06.998 { 00:24:06.998 "name": "BaseBdev3", 00:24:06.998 "uuid": "5fffbf93-8083-457b-bb00-916b06616a81", 00:24:06.998 "is_configured": true, 00:24:06.998 "data_offset": 2048, 00:24:06.998 "data_size": 63488 00:24:06.998 }, 00:24:06.998 { 00:24:06.998 "name": "BaseBdev4", 00:24:06.998 "uuid": "7ac152e2-fbf0-41b9-8a08-3454ced90335", 00:24:06.998 "is_configured": true, 00:24:06.998 "data_offset": 2048, 00:24:06.998 "data_size": 63488 00:24:06.998 } 00:24:06.998 ] 00:24:06.998 }' 00:24:06.998 07:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:06.998 07:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:07.565 07:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.565 07:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:07.823 07:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:07.823 07:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:08.081 [2024-07-12 07:34:41.841704] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:08.081 07:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:08.081 07:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:08.081 07:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:08.081 07:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:08.081 07:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:08.081 07:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:08.081 07:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:08.081 07:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:08.081 07:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:08.081 07:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:08.081 07:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.081 07:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:08.339 07:34:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:08.339 "name": "Existed_Raid", 00:24:08.339 "uuid": "9cc1862e-1671-4c4b-84e5-a74b751bbce2", 00:24:08.339 "strip_size_kb": 64, 00:24:08.339 "state": "configuring", 00:24:08.339 "raid_level": "concat", 00:24:08.339 "superblock": true, 00:24:08.339 "num_base_bdevs": 4, 00:24:08.339 "num_base_bdevs_discovered": 2, 00:24:08.339 "num_base_bdevs_operational": 4, 00:24:08.339 "base_bdevs_list": [ 00:24:08.339 { 00:24:08.339 "name": "BaseBdev1", 00:24:08.339 "uuid": "9192fd89-b687-42aa-bfa6-ba289195fa88", 00:24:08.339 "is_configured": true, 00:24:08.339 "data_offset": 2048, 00:24:08.339 "data_size": 63488 00:24:08.339 }, 00:24:08.339 { 00:24:08.339 "name": null, 00:24:08.339 "uuid": "c19b06a9-ebad-44f3-a02c-3dd4dc9739e4", 00:24:08.339 "is_configured": false, 00:24:08.339 "data_offset": 2048, 00:24:08.339 "data_size": 63488 00:24:08.339 }, 00:24:08.339 { 00:24:08.339 "name": null, 00:24:08.339 "uuid": "5fffbf93-8083-457b-bb00-916b06616a81", 00:24:08.339 "is_configured": false, 00:24:08.339 "data_offset": 2048, 00:24:08.339 "data_size": 63488 00:24:08.339 }, 00:24:08.339 { 00:24:08.339 "name": "BaseBdev4", 00:24:08.339 "uuid": "7ac152e2-fbf0-41b9-8a08-3454ced90335", 00:24:08.339 "is_configured": true, 00:24:08.339 "data_offset": 2048, 00:24:08.339 "data_size": 63488 00:24:08.339 } 00:24:08.339 ] 00:24:08.339 }' 00:24:08.339 07:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:08.339 07:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:08.904 07:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.904 07:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:09.162 07:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:09.162 07:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:09.419 [2024-07-12 07:34:43.234378] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:09.419 07:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:09.419 07:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:09.419 07:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:09.419 07:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:09.419 07:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:09.419 07:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:09.419 07:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:09.419 07:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:09.419 07:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:09.419 07:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:09.419 
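The raid was assembled earlier with bdev_raid_create -z 64 -s (a 64 KiB strip and an on-disk superblock) while BaseBdev1 was deliberately absent, which is why it has been sitting in the "configuring" state; since then the test has toggled membership in both directions. bdev_raid_remove_base_bdev detaches a member, leaving its slot as "name": null with is_configured false, and bdev_raid_add_base_bdev, as just issued for BaseBdev3, claims it back. The command sequence, condensed from the trace (RPC is the same local shorthand as above):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Assemble a 4-way concat raid with a 64 KiB strip and a superblock (-s);
# BaseBdev1 does not exist yet, so the raid stays in "configuring".
$RPC bdev_raid_create -z 64 -s -r concat \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
# Detach a member: its slot flips to "name": null, is_configured false.
$RPC bdev_raid_remove_base_bdev BaseBdev3
$RPC bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'  # false
# Re-attach it: the raid claims the bdev and the slot is configured again.
$RPC bdev_raid_add_base_bdev Existed_Raid BaseBdev3
$RPC bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'  # true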
07:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.419 07:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:09.675 07:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:09.675 "name": "Existed_Raid", 00:24:09.675 "uuid": "9cc1862e-1671-4c4b-84e5-a74b751bbce2", 00:24:09.675 "strip_size_kb": 64, 00:24:09.675 "state": "configuring", 00:24:09.675 "raid_level": "concat", 00:24:09.675 "superblock": true, 00:24:09.675 "num_base_bdevs": 4, 00:24:09.675 "num_base_bdevs_discovered": 3, 00:24:09.675 "num_base_bdevs_operational": 4, 00:24:09.675 "base_bdevs_list": [ 00:24:09.675 { 00:24:09.675 "name": "BaseBdev1", 00:24:09.675 "uuid": "9192fd89-b687-42aa-bfa6-ba289195fa88", 00:24:09.675 "is_configured": true, 00:24:09.675 "data_offset": 2048, 00:24:09.675 "data_size": 63488 00:24:09.675 }, 00:24:09.675 { 00:24:09.675 "name": null, 00:24:09.675 "uuid": "c19b06a9-ebad-44f3-a02c-3dd4dc9739e4", 00:24:09.675 "is_configured": false, 00:24:09.675 "data_offset": 2048, 00:24:09.675 "data_size": 63488 00:24:09.675 }, 00:24:09.675 { 00:24:09.675 "name": "BaseBdev3", 00:24:09.675 "uuid": "5fffbf93-8083-457b-bb00-916b06616a81", 00:24:09.675 "is_configured": true, 00:24:09.675 "data_offset": 2048, 00:24:09.675 "data_size": 63488 00:24:09.675 }, 00:24:09.675 { 00:24:09.675 "name": "BaseBdev4", 00:24:09.675 "uuid": "7ac152e2-fbf0-41b9-8a08-3454ced90335", 00:24:09.676 "is_configured": true, 00:24:09.676 "data_offset": 2048, 00:24:09.676 "data_size": 63488 00:24:09.676 } 00:24:09.676 ] 00:24:09.676 }' 00:24:09.676 07:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:09.676 07:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:10.240 07:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.240 07:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:10.497 07:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:10.497 07:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:10.753 [2024-07-12 07:34:44.486775] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:10.753 07:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:10.753 07:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:10.753 07:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:10.753 07:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:10.753 07:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:10.753 07:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:10.753 07:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
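verify_raid_bdev_state runs after every one of these mutations, and its trace is the repeating block of local assignments followed by a bdev_raid_get_bdevs plus a jq select, as seen around this point. Reduced to its essential checks (a sketch only; the helper in bdev_raid.sh asserts more fields from the same JSON):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
verify_raid_bdev_state() {
    local raid_bdev_name=$1 expected_state=$2 raid_level=$3 strip_size=$4
    local num_base_bdevs_operational=$5 info
    info=$($RPC bdev_raid_get_bdevs all |
           jq -r ".[] | select(.name == \"$raid_bdev_name\")")
    # Each field of the reported raid bdev must match the expectation.
    [ "$(jq -r .state <<< "$info")" = "$expected_state" ] &&
    [ "$(jq -r .raid_level <<< "$info")" = "$raid_level" ] &&
    [ "$(jq -r .strip_size_kb <<< "$info")" = "$strip_size" ] &&
    [ "$(jq -r .num_base_bdevs_operational <<< "$info")" = "$num_base_bdevs_operational" ]
}
verify_raid_bdev_state Existed_Raid configuring concat 64 4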
00:24:10.753 07:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:10.753 07:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:10.753 07:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:10.753 07:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.753 07:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:11.009 07:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:11.009 "name": "Existed_Raid", 00:24:11.009 "uuid": "9cc1862e-1671-4c4b-84e5-a74b751bbce2", 00:24:11.009 "strip_size_kb": 64, 00:24:11.009 "state": "configuring", 00:24:11.009 "raid_level": "concat", 00:24:11.009 "superblock": true, 00:24:11.009 "num_base_bdevs": 4, 00:24:11.009 "num_base_bdevs_discovered": 2, 00:24:11.009 "num_base_bdevs_operational": 4, 00:24:11.009 "base_bdevs_list": [ 00:24:11.009 { 00:24:11.009 "name": null, 00:24:11.009 "uuid": "9192fd89-b687-42aa-bfa6-ba289195fa88", 00:24:11.009 "is_configured": false, 00:24:11.009 "data_offset": 2048, 00:24:11.009 "data_size": 63488 00:24:11.009 }, 00:24:11.009 { 00:24:11.009 "name": null, 00:24:11.009 "uuid": "c19b06a9-ebad-44f3-a02c-3dd4dc9739e4", 00:24:11.009 "is_configured": false, 00:24:11.009 "data_offset": 2048, 00:24:11.009 "data_size": 63488 00:24:11.009 }, 00:24:11.009 { 00:24:11.009 "name": "BaseBdev3", 00:24:11.009 "uuid": "5fffbf93-8083-457b-bb00-916b06616a81", 00:24:11.009 "is_configured": true, 00:24:11.009 "data_offset": 2048, 00:24:11.009 "data_size": 63488 00:24:11.009 }, 00:24:11.009 { 00:24:11.009 "name": "BaseBdev4", 00:24:11.009 "uuid": "7ac152e2-fbf0-41b9-8a08-3454ced90335", 00:24:11.009 "is_configured": true, 00:24:11.009 "data_offset": 2048, 00:24:11.009 "data_size": 63488 00:24:11.009 } 00:24:11.009 ] 00:24:11.009 }' 00:24:11.009 07:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:11.009 07:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:11.573 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.573 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:11.831 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:11.831 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:12.087 [2024-07-12 07:34:45.873863] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:12.087 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:12.087 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:12.087 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:12.087 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # 
local raid_level=concat 00:24:12.087 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:12.087 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:12.087 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:12.087 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:12.087 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:12.087 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:12.088 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.088 07:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:12.345 07:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:12.345 "name": "Existed_Raid", 00:24:12.345 "uuid": "9cc1862e-1671-4c4b-84e5-a74b751bbce2", 00:24:12.345 "strip_size_kb": 64, 00:24:12.345 "state": "configuring", 00:24:12.345 "raid_level": "concat", 00:24:12.345 "superblock": true, 00:24:12.345 "num_base_bdevs": 4, 00:24:12.345 "num_base_bdevs_discovered": 3, 00:24:12.345 "num_base_bdevs_operational": 4, 00:24:12.345 "base_bdevs_list": [ 00:24:12.345 { 00:24:12.345 "name": null, 00:24:12.345 "uuid": "9192fd89-b687-42aa-bfa6-ba289195fa88", 00:24:12.345 "is_configured": false, 00:24:12.345 "data_offset": 2048, 00:24:12.345 "data_size": 63488 00:24:12.345 }, 00:24:12.345 { 00:24:12.345 "name": "BaseBdev2", 00:24:12.345 "uuid": "c19b06a9-ebad-44f3-a02c-3dd4dc9739e4", 00:24:12.345 "is_configured": true, 00:24:12.345 "data_offset": 2048, 00:24:12.345 "data_size": 63488 00:24:12.345 }, 00:24:12.345 { 00:24:12.345 "name": "BaseBdev3", 00:24:12.345 "uuid": "5fffbf93-8083-457b-bb00-916b06616a81", 00:24:12.345 "is_configured": true, 00:24:12.345 "data_offset": 2048, 00:24:12.345 "data_size": 63488 00:24:12.345 }, 00:24:12.345 { 00:24:12.345 "name": "BaseBdev4", 00:24:12.345 "uuid": "7ac152e2-fbf0-41b9-8a08-3454ced90335", 00:24:12.345 "is_configured": true, 00:24:12.345 "data_offset": 2048, 00:24:12.345 "data_size": 63488 00:24:12.345 } 00:24:12.345 ] 00:24:12.345 }' 00:24:12.345 07:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:12.345 07:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.910 07:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.910 07:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:13.167 07:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:13.167 07:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:13.167 07:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.425 07:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 9192fd89-b687-42aa-bfa6-ba289195fa88 00:24:13.683 [2024-07-12 07:34:47.563137] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:13.683 [2024-07-12 07:34:47.563681] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:24:13.683 [2024-07-12 07:34:47.563828] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:13.683 [2024-07-12 07:34:47.563950] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:24:13.683 [2024-07-12 07:34:47.564385] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:24:13.683 [2024-07-12 07:34:47.564499] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008180 00:24:13.683 [2024-07-12 07:34:47.564729] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:13.683 NewBaseBdev 00:24:13.940 07:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:24:13.940 07:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:24:13.940 07:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:13.940 07:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:24:13.940 07:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:13.940 07:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:13.940 07:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:13.940 07:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:14.197 [ 00:24:14.197 { 00:24:14.197 "name": "NewBaseBdev", 00:24:14.197 "aliases": [ 00:24:14.197 "9192fd89-b687-42aa-bfa6-ba289195fa88" 00:24:14.197 ], 00:24:14.197 "product_name": "Malloc disk", 00:24:14.197 "block_size": 512, 00:24:14.197 "num_blocks": 65536, 00:24:14.197 "uuid": "9192fd89-b687-42aa-bfa6-ba289195fa88", 00:24:14.197 "assigned_rate_limits": { 00:24:14.197 "rw_ios_per_sec": 0, 00:24:14.197 "rw_mbytes_per_sec": 0, 00:24:14.197 "r_mbytes_per_sec": 0, 00:24:14.197 "w_mbytes_per_sec": 0 00:24:14.197 }, 00:24:14.197 "claimed": true, 00:24:14.197 "claim_type": "exclusive_write", 00:24:14.197 "zoned": false, 00:24:14.197 "supported_io_types": { 00:24:14.197 "read": true, 00:24:14.197 "write": true, 00:24:14.197 "unmap": true, 00:24:14.197 "write_zeroes": true, 00:24:14.197 "flush": true, 00:24:14.197 "reset": true, 00:24:14.197 "compare": false, 00:24:14.197 "compare_and_write": false, 00:24:14.197 "abort": true, 00:24:14.197 "nvme_admin": false, 00:24:14.197 "nvme_io": false 00:24:14.197 }, 00:24:14.197 "memory_domains": [ 00:24:14.197 { 00:24:14.197 "dma_device_id": "system", 00:24:14.197 "dma_device_type": 1 00:24:14.197 }, 00:24:14.197 { 00:24:14.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:14.197 "dma_device_type": 2 00:24:14.197 } 00:24:14.197 ], 00:24:14.197 "driver_specific": {} 00:24:14.197 } 00:24:14.197 ] 00:24:14.197 07:34:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # return 0 00:24:14.197 07:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:24:14.197 07:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:14.197 07:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:14.197 07:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:14.197 07:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:14.197 07:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:14.197 07:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:14.197 07:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:14.197 07:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:14.197 07:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:14.197 07:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.197 07:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:14.455 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:14.455 "name": "Existed_Raid", 00:24:14.455 "uuid": "9cc1862e-1671-4c4b-84e5-a74b751bbce2", 00:24:14.455 "strip_size_kb": 64, 00:24:14.455 "state": "online", 00:24:14.455 "raid_level": "concat", 00:24:14.455 "superblock": true, 00:24:14.455 "num_base_bdevs": 4, 00:24:14.455 "num_base_bdevs_discovered": 4, 00:24:14.455 "num_base_bdevs_operational": 4, 00:24:14.455 "base_bdevs_list": [ 00:24:14.455 { 00:24:14.455 "name": "NewBaseBdev", 00:24:14.455 "uuid": "9192fd89-b687-42aa-bfa6-ba289195fa88", 00:24:14.455 "is_configured": true, 00:24:14.455 "data_offset": 2048, 00:24:14.455 "data_size": 63488 00:24:14.455 }, 00:24:14.455 { 00:24:14.455 "name": "BaseBdev2", 00:24:14.455 "uuid": "c19b06a9-ebad-44f3-a02c-3dd4dc9739e4", 00:24:14.455 "is_configured": true, 00:24:14.455 "data_offset": 2048, 00:24:14.455 "data_size": 63488 00:24:14.455 }, 00:24:14.455 { 00:24:14.455 "name": "BaseBdev3", 00:24:14.455 "uuid": "5fffbf93-8083-457b-bb00-916b06616a81", 00:24:14.455 "is_configured": true, 00:24:14.455 "data_offset": 2048, 00:24:14.455 "data_size": 63488 00:24:14.455 }, 00:24:14.455 { 00:24:14.455 "name": "BaseBdev4", 00:24:14.455 "uuid": "7ac152e2-fbf0-41b9-8a08-3454ced90335", 00:24:14.455 "is_configured": true, 00:24:14.455 "data_offset": 2048, 00:24:14.455 "data_size": 63488 00:24:14.456 } 00:24:14.456 ] 00:24:14.456 }' 00:24:14.456 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:14.456 07:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.023 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:15.023 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:15.023 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:24:15.023 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:15.023 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:15.023 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:15.023 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:15.023 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:15.023 [2024-07-12 07:34:48.883737] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:15.023 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:15.023 "name": "Existed_Raid", 00:24:15.023 "aliases": [ 00:24:15.023 "9cc1862e-1671-4c4b-84e5-a74b751bbce2" 00:24:15.023 ], 00:24:15.023 "product_name": "Raid Volume", 00:24:15.023 "block_size": 512, 00:24:15.023 "num_blocks": 253952, 00:24:15.023 "uuid": "9cc1862e-1671-4c4b-84e5-a74b751bbce2", 00:24:15.023 "assigned_rate_limits": { 00:24:15.023 "rw_ios_per_sec": 0, 00:24:15.023 "rw_mbytes_per_sec": 0, 00:24:15.023 "r_mbytes_per_sec": 0, 00:24:15.023 "w_mbytes_per_sec": 0 00:24:15.023 }, 00:24:15.023 "claimed": false, 00:24:15.023 "zoned": false, 00:24:15.023 "supported_io_types": { 00:24:15.023 "read": true, 00:24:15.023 "write": true, 00:24:15.023 "unmap": true, 00:24:15.023 "write_zeroes": true, 00:24:15.023 "flush": true, 00:24:15.023 "reset": true, 00:24:15.023 "compare": false, 00:24:15.023 "compare_and_write": false, 00:24:15.023 "abort": false, 00:24:15.023 "nvme_admin": false, 00:24:15.023 "nvme_io": false 00:24:15.023 }, 00:24:15.023 "memory_domains": [ 00:24:15.023 { 00:24:15.023 "dma_device_id": "system", 00:24:15.023 "dma_device_type": 1 00:24:15.023 }, 00:24:15.023 { 00:24:15.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.023 "dma_device_type": 2 00:24:15.023 }, 00:24:15.023 { 00:24:15.023 "dma_device_id": "system", 00:24:15.023 "dma_device_type": 1 00:24:15.023 }, 00:24:15.023 { 00:24:15.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.023 "dma_device_type": 2 00:24:15.023 }, 00:24:15.023 { 00:24:15.023 "dma_device_id": "system", 00:24:15.023 "dma_device_type": 1 00:24:15.023 }, 00:24:15.023 { 00:24:15.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.023 "dma_device_type": 2 00:24:15.023 }, 00:24:15.023 { 00:24:15.023 "dma_device_id": "system", 00:24:15.023 "dma_device_type": 1 00:24:15.023 }, 00:24:15.023 { 00:24:15.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.023 "dma_device_type": 2 00:24:15.023 } 00:24:15.023 ], 00:24:15.023 "driver_specific": { 00:24:15.023 "raid": { 00:24:15.023 "uuid": "9cc1862e-1671-4c4b-84e5-a74b751bbce2", 00:24:15.023 "strip_size_kb": 64, 00:24:15.023 "state": "online", 00:24:15.023 "raid_level": "concat", 00:24:15.023 "superblock": true, 00:24:15.023 "num_base_bdevs": 4, 00:24:15.023 "num_base_bdevs_discovered": 4, 00:24:15.023 "num_base_bdevs_operational": 4, 00:24:15.023 "base_bdevs_list": [ 00:24:15.023 { 00:24:15.023 "name": "NewBaseBdev", 00:24:15.023 "uuid": "9192fd89-b687-42aa-bfa6-ba289195fa88", 00:24:15.023 "is_configured": true, 00:24:15.023 "data_offset": 2048, 00:24:15.023 "data_size": 63488 00:24:15.023 }, 00:24:15.023 { 00:24:15.023 "name": "BaseBdev2", 00:24:15.023 "uuid": "c19b06a9-ebad-44f3-a02c-3dd4dc9739e4", 00:24:15.023 "is_configured": true, 
00:24:15.023 "data_offset": 2048, 00:24:15.023 "data_size": 63488 00:24:15.023 }, 00:24:15.023 { 00:24:15.023 "name": "BaseBdev3", 00:24:15.023 "uuid": "5fffbf93-8083-457b-bb00-916b06616a81", 00:24:15.023 "is_configured": true, 00:24:15.023 "data_offset": 2048, 00:24:15.023 "data_size": 63488 00:24:15.023 }, 00:24:15.023 { 00:24:15.023 "name": "BaseBdev4", 00:24:15.023 "uuid": "7ac152e2-fbf0-41b9-8a08-3454ced90335", 00:24:15.023 "is_configured": true, 00:24:15.023 "data_offset": 2048, 00:24:15.023 "data_size": 63488 00:24:15.023 } 00:24:15.023 ] 00:24:15.023 } 00:24:15.023 } 00:24:15.023 }' 00:24:15.281 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:15.281 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:15.281 BaseBdev2 00:24:15.281 BaseBdev3 00:24:15.281 BaseBdev4' 00:24:15.281 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:15.281 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:24:15.281 07:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:15.539 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:15.539 "name": "NewBaseBdev", 00:24:15.539 "aliases": [ 00:24:15.539 "9192fd89-b687-42aa-bfa6-ba289195fa88" 00:24:15.539 ], 00:24:15.539 "product_name": "Malloc disk", 00:24:15.539 "block_size": 512, 00:24:15.539 "num_blocks": 65536, 00:24:15.539 "uuid": "9192fd89-b687-42aa-bfa6-ba289195fa88", 00:24:15.539 "assigned_rate_limits": { 00:24:15.539 "rw_ios_per_sec": 0, 00:24:15.539 "rw_mbytes_per_sec": 0, 00:24:15.539 "r_mbytes_per_sec": 0, 00:24:15.539 "w_mbytes_per_sec": 0 00:24:15.539 }, 00:24:15.539 "claimed": true, 00:24:15.539 "claim_type": "exclusive_write", 00:24:15.539 "zoned": false, 00:24:15.539 "supported_io_types": { 00:24:15.539 "read": true, 00:24:15.539 "write": true, 00:24:15.539 "unmap": true, 00:24:15.539 "write_zeroes": true, 00:24:15.539 "flush": true, 00:24:15.539 "reset": true, 00:24:15.539 "compare": false, 00:24:15.539 "compare_and_write": false, 00:24:15.539 "abort": true, 00:24:15.539 "nvme_admin": false, 00:24:15.539 "nvme_io": false 00:24:15.539 }, 00:24:15.539 "memory_domains": [ 00:24:15.539 { 00:24:15.539 "dma_device_id": "system", 00:24:15.539 "dma_device_type": 1 00:24:15.539 }, 00:24:15.539 { 00:24:15.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.539 "dma_device_type": 2 00:24:15.539 } 00:24:15.539 ], 00:24:15.539 "driver_specific": {} 00:24:15.539 }' 00:24:15.539 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:15.539 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:15.539 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:15.539 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:15.539 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:15.539 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:15.539 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:15.539 
07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:15.798 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:15.798 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:15.798 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:15.798 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:15.798 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:15.798 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:15.798 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:16.056 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:16.056 "name": "BaseBdev2", 00:24:16.056 "aliases": [ 00:24:16.056 "c19b06a9-ebad-44f3-a02c-3dd4dc9739e4" 00:24:16.056 ], 00:24:16.056 "product_name": "Malloc disk", 00:24:16.056 "block_size": 512, 00:24:16.056 "num_blocks": 65536, 00:24:16.056 "uuid": "c19b06a9-ebad-44f3-a02c-3dd4dc9739e4", 00:24:16.056 "assigned_rate_limits": { 00:24:16.056 "rw_ios_per_sec": 0, 00:24:16.056 "rw_mbytes_per_sec": 0, 00:24:16.056 "r_mbytes_per_sec": 0, 00:24:16.056 "w_mbytes_per_sec": 0 00:24:16.056 }, 00:24:16.056 "claimed": true, 00:24:16.056 "claim_type": "exclusive_write", 00:24:16.056 "zoned": false, 00:24:16.056 "supported_io_types": { 00:24:16.056 "read": true, 00:24:16.056 "write": true, 00:24:16.056 "unmap": true, 00:24:16.056 "write_zeroes": true, 00:24:16.056 "flush": true, 00:24:16.056 "reset": true, 00:24:16.056 "compare": false, 00:24:16.056 "compare_and_write": false, 00:24:16.056 "abort": true, 00:24:16.056 "nvme_admin": false, 00:24:16.056 "nvme_io": false 00:24:16.056 }, 00:24:16.056 "memory_domains": [ 00:24:16.056 { 00:24:16.056 "dma_device_id": "system", 00:24:16.056 "dma_device_type": 1 00:24:16.056 }, 00:24:16.056 { 00:24:16.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:16.056 "dma_device_type": 2 00:24:16.056 } 00:24:16.056 ], 00:24:16.056 "driver_specific": {} 00:24:16.056 }' 00:24:16.056 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:16.056 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:16.056 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:16.056 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:16.056 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:16.056 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:16.056 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:16.332 07:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:16.332 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:16.332 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:16.332 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:16.332 07:34:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:16.332 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:16.332 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:16.332 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:16.611 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:16.611 "name": "BaseBdev3", 00:24:16.611 "aliases": [ 00:24:16.611 "5fffbf93-8083-457b-bb00-916b06616a81" 00:24:16.611 ], 00:24:16.611 "product_name": "Malloc disk", 00:24:16.611 "block_size": 512, 00:24:16.611 "num_blocks": 65536, 00:24:16.611 "uuid": "5fffbf93-8083-457b-bb00-916b06616a81", 00:24:16.611 "assigned_rate_limits": { 00:24:16.611 "rw_ios_per_sec": 0, 00:24:16.611 "rw_mbytes_per_sec": 0, 00:24:16.611 "r_mbytes_per_sec": 0, 00:24:16.611 "w_mbytes_per_sec": 0 00:24:16.611 }, 00:24:16.611 "claimed": true, 00:24:16.611 "claim_type": "exclusive_write", 00:24:16.611 "zoned": false, 00:24:16.611 "supported_io_types": { 00:24:16.611 "read": true, 00:24:16.612 "write": true, 00:24:16.612 "unmap": true, 00:24:16.612 "write_zeroes": true, 00:24:16.612 "flush": true, 00:24:16.612 "reset": true, 00:24:16.612 "compare": false, 00:24:16.612 "compare_and_write": false, 00:24:16.612 "abort": true, 00:24:16.612 "nvme_admin": false, 00:24:16.612 "nvme_io": false 00:24:16.612 }, 00:24:16.612 "memory_domains": [ 00:24:16.612 { 00:24:16.612 "dma_device_id": "system", 00:24:16.612 "dma_device_type": 1 00:24:16.612 }, 00:24:16.612 { 00:24:16.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:16.612 "dma_device_type": 2 00:24:16.612 } 00:24:16.612 ], 00:24:16.612 "driver_specific": {} 00:24:16.612 }' 00:24:16.612 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:16.612 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:16.612 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:16.612 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:16.869 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:16.869 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:16.869 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:16.869 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:16.869 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:16.869 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:16.869 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:17.128 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:17.128 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:17.128 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:17.128 07:34:50 
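These paired jq queries come from verify_raid_bdev_properties: it reads the assembled Raid Volume once, then walks every configured member (NewBaseBdev, BaseBdev2, BaseBdev3, BaseBdev4) and checks that block_size, md_size, md_interleave and dif_type agree between member and volume. Roughly, as a sketch of that loop (RPC is the same local shorthand as above):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
raid_bdev_info=$($RPC bdev_get_bdevs -b Existed_Raid | jq '.[]')
base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
                         | select(.is_configured == true).name' <<< "$raid_bdev_info")
for name in $base_bdev_names; do
    base_bdev_info=$($RPC bdev_get_bdevs -b "$name" | jq '.[]')
    # Malloc disks carry no metadata, so md_size, md_interleave and dif_type
    # all compare as null == null, exactly as the trace shows.
    for prop in block_size md_size md_interleave dif_type; do
        [ "$(jq ".$prop" <<< "$base_bdev_info")" = "$(jq ".$prop" <<< "$raid_bdev_info")" ] ||
            echo "property $prop mismatch on $name"
    done
done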
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:17.128 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:17.128 "name": "BaseBdev4", 00:24:17.128 "aliases": [ 00:24:17.128 "7ac152e2-fbf0-41b9-8a08-3454ced90335" 00:24:17.128 ], 00:24:17.128 "product_name": "Malloc disk", 00:24:17.128 "block_size": 512, 00:24:17.128 "num_blocks": 65536, 00:24:17.128 "uuid": "7ac152e2-fbf0-41b9-8a08-3454ced90335", 00:24:17.128 "assigned_rate_limits": { 00:24:17.128 "rw_ios_per_sec": 0, 00:24:17.128 "rw_mbytes_per_sec": 0, 00:24:17.128 "r_mbytes_per_sec": 0, 00:24:17.128 "w_mbytes_per_sec": 0 00:24:17.128 }, 00:24:17.128 "claimed": true, 00:24:17.128 "claim_type": "exclusive_write", 00:24:17.128 "zoned": false, 00:24:17.128 "supported_io_types": { 00:24:17.128 "read": true, 00:24:17.128 "write": true, 00:24:17.128 "unmap": true, 00:24:17.129 "write_zeroes": true, 00:24:17.129 "flush": true, 00:24:17.129 "reset": true, 00:24:17.129 "compare": false, 00:24:17.129 "compare_and_write": false, 00:24:17.129 "abort": true, 00:24:17.129 "nvme_admin": false, 00:24:17.129 "nvme_io": false 00:24:17.129 }, 00:24:17.129 "memory_domains": [ 00:24:17.129 { 00:24:17.129 "dma_device_id": "system", 00:24:17.129 "dma_device_type": 1 00:24:17.129 }, 00:24:17.129 { 00:24:17.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:17.129 "dma_device_type": 2 00:24:17.129 } 00:24:17.129 ], 00:24:17.129 "driver_specific": {} 00:24:17.129 }' 00:24:17.129 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:17.129 07:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:17.395 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:17.395 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:17.395 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:17.395 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:17.395 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:17.395 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:17.395 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:17.395 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:17.395 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:17.652 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:17.652 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:17.652 [2024-07-12 07:34:51.473948] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:17.652 [2024-07-12 07:34:51.474227] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:17.652 [2024-07-12 07:34:51.474412] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:17.652 [2024-07-12 07:34:51.474534] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:17.652 [2024-07-12 07:34:51.474740] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name Existed_Raid, state offline 00:24:17.652 07:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 148368 00:24:17.652 07:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 148368 ']' 00:24:17.652 07:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 148368 00:24:17.652 07:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:24:17.652 07:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:17.652 07:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 148368 00:24:17.652 07:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:17.652 07:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:17.652 07:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 148368' 00:24:17.652 killing process with pid 148368 00:24:17.652 07:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 148368 00:24:17.652 [2024-07-12 07:34:51.527139] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:17.652 07:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 148368 00:24:17.910 [2024-07-12 07:34:51.605266] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:18.169 ************************************ 00:24:18.169 END TEST raid_state_function_test_sb 00:24:18.169 ************************************ 00:24:18.169 07:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:24:18.169 00:24:18.169 real 0m32.153s 00:24:18.169 user 0m58.985s 00:24:18.169 sys 0m5.620s 00:24:18.169 07:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:18.169 07:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.427 07:34:52 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:24:18.427 07:34:52 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:24:18.427 07:34:52 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:18.427 07:34:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:18.427 ************************************ 00:24:18.427 START TEST raid_superblock_test 00:24:18.427 ************************************ 00:24:18.427 07:34:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 4 00:24:18.427 07:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- 
# base_bdevs_pt_uuid=() 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=149450 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 149450 /var/tmp/spdk-raid.sock 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 149450 ']' 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:18.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:18.428 07:34:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.428 [2024-07-12 07:34:52.154457] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:24:18.428 [2024-07-12 07:34:52.154870] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149450 ] 00:24:18.428 [2024-07-12 07:34:52.303384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.686 [2024-07-12 07:34:52.397500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.686 [2024-07-12 07:34:52.483704] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:19.252 07:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:19.252 07:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:24:19.252 07:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:24:19.252 07:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:19.252 07:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:24:19.252 07:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:24:19.252 07:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:19.252 07:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:19.252 07:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:19.252 07:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:19.252 07:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:19.817 malloc1 00:24:19.817 07:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:19.817 [2024-07-12 07:34:53.593993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:19.817 [2024-07-12 07:34:53.594357] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:19.817 [2024-07-12 07:34:53.594441] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:24:19.817 [2024-07-12 07:34:53.594699] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:19.817 [2024-07-12 07:34:53.597670] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:19.817 [2024-07-12 07:34:53.597827] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:19.817 pt1 00:24:19.817 07:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:19.817 07:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:19.817 07:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:24:19.817 07:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:24:19.817 07:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:19.817 07:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:24:19.817 07:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:19.817 07:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:19.818 07:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:20.075 malloc2 00:24:20.075 07:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:20.331 [2024-07-12 07:34:54.054104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:20.331 [2024-07-12 07:34:54.054356] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:20.331 [2024-07-12 07:34:54.054490] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:24:20.331 [2024-07-12 07:34:54.054616] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:20.331 [2024-07-12 07:34:54.057472] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:20.331 [2024-07-12 07:34:54.057626] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:20.331 pt2 00:24:20.331 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:20.331 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:20.331 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:24:20.331 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:24:20.331 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:20.331 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:20.331 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:20.331 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:20.331 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:24:20.589 malloc3 00:24:20.589 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:20.847 [2024-07-12 07:34:54.479078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:20.847 [2024-07-12 07:34:54.479420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:20.847 [2024-07-12 07:34:54.479506] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:20.847 [2024-07-12 07:34:54.479619] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:20.847 [2024-07-12 07:34:54.482413] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:20.847 [2024-07-12 07:34:54.482571] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:20.847 pt3 00:24:20.847 07:34:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:20.847 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:20.847 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:24:20.847 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:24:20.847 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:24:20.847 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:20.847 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:20.847 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:20.847 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:24:20.847 malloc4 00:24:20.847 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:21.105 [2024-07-12 07:34:54.891085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:21.105 [2024-07-12 07:34:54.891459] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:21.105 [2024-07-12 07:34:54.891536] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:21.105 [2024-07-12 07:34:54.891650] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:21.105 [2024-07-12 07:34:54.894507] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:21.105 [2024-07-12 07:34:54.894669] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:21.105 pt4 00:24:21.105 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:21.105 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:21.105 07:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:24:21.363 [2024-07-12 07:34:55.087297] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:21.363 [2024-07-12 07:34:55.089937] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:21.363 [2024-07-12 07:34:55.090150] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:21.363 [2024-07-12 07:34:55.090225] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:21.363 [2024-07-12 07:34:55.090571] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:24:21.363 [2024-07-12 07:34:55.090678] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:21.363 [2024-07-12 07:34:55.090891] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:24:21.363 [2024-07-12 07:34:55.091350] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:24:21.363 [2024-07-12 07:34:55.091452] bdev_raid.c:1725:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:24:21.363 [2024-07-12 07:34:55.091724] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:21.363 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:21.363 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:21.363 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:21.363 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:21.363 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:21.363 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:21.363 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:21.363 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:21.363 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:21.363 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:21.363 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.363 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.621 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:21.621 "name": "raid_bdev1", 00:24:21.621 "uuid": "6a6ea8e3-7b7d-420e-b499-c07702b63b10", 00:24:21.621 "strip_size_kb": 64, 00:24:21.621 "state": "online", 00:24:21.621 "raid_level": "concat", 00:24:21.621 "superblock": true, 00:24:21.621 "num_base_bdevs": 4, 00:24:21.621 "num_base_bdevs_discovered": 4, 00:24:21.621 "num_base_bdevs_operational": 4, 00:24:21.621 "base_bdevs_list": [ 00:24:21.621 { 00:24:21.621 "name": "pt1", 00:24:21.621 "uuid": "778f0635-62dc-5b45-902a-d025bd0e68c2", 00:24:21.621 "is_configured": true, 00:24:21.621 "data_offset": 2048, 00:24:21.621 "data_size": 63488 00:24:21.621 }, 00:24:21.621 { 00:24:21.621 "name": "pt2", 00:24:21.621 "uuid": "b3089c9d-9705-565e-ad30-6344bfbdd8f6", 00:24:21.621 "is_configured": true, 00:24:21.621 "data_offset": 2048, 00:24:21.621 "data_size": 63488 00:24:21.621 }, 00:24:21.621 { 00:24:21.621 "name": "pt3", 00:24:21.621 "uuid": "74a1d042-ae7b-5f22-b1e3-96c45dc30f44", 00:24:21.621 "is_configured": true, 00:24:21.621 "data_offset": 2048, 00:24:21.621 "data_size": 63488 00:24:21.621 }, 00:24:21.621 { 00:24:21.621 "name": "pt4", 00:24:21.621 "uuid": "4745ebca-f866-57a8-a8c9-cb34bbf28f4c", 00:24:21.621 "is_configured": true, 00:24:21.621 "data_offset": 2048, 00:24:21.621 "data_size": 63488 00:24:21.621 } 00:24:21.621 ] 00:24:21.621 }' 00:24:21.621 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:21.621 07:34:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.187 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:24:22.187 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:22.187 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 
00:24:22.187 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:22.187 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:22.187 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:22.187 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:22.187 07:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:22.444 [2024-07-12 07:34:56.072216] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:22.444 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:22.444 "name": "raid_bdev1", 00:24:22.444 "aliases": [ 00:24:22.444 "6a6ea8e3-7b7d-420e-b499-c07702b63b10" 00:24:22.444 ], 00:24:22.444 "product_name": "Raid Volume", 00:24:22.444 "block_size": 512, 00:24:22.444 "num_blocks": 253952, 00:24:22.444 "uuid": "6a6ea8e3-7b7d-420e-b499-c07702b63b10", 00:24:22.444 "assigned_rate_limits": { 00:24:22.444 "rw_ios_per_sec": 0, 00:24:22.444 "rw_mbytes_per_sec": 0, 00:24:22.444 "r_mbytes_per_sec": 0, 00:24:22.444 "w_mbytes_per_sec": 0 00:24:22.444 }, 00:24:22.444 "claimed": false, 00:24:22.444 "zoned": false, 00:24:22.444 "supported_io_types": { 00:24:22.444 "read": true, 00:24:22.444 "write": true, 00:24:22.444 "unmap": true, 00:24:22.444 "write_zeroes": true, 00:24:22.444 "flush": true, 00:24:22.444 "reset": true, 00:24:22.444 "compare": false, 00:24:22.444 "compare_and_write": false, 00:24:22.444 "abort": false, 00:24:22.444 "nvme_admin": false, 00:24:22.444 "nvme_io": false 00:24:22.444 }, 00:24:22.444 "memory_domains": [ 00:24:22.444 { 00:24:22.444 "dma_device_id": "system", 00:24:22.444 "dma_device_type": 1 00:24:22.444 }, 00:24:22.444 { 00:24:22.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.444 "dma_device_type": 2 00:24:22.444 }, 00:24:22.444 { 00:24:22.444 "dma_device_id": "system", 00:24:22.444 "dma_device_type": 1 00:24:22.444 }, 00:24:22.444 { 00:24:22.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.444 "dma_device_type": 2 00:24:22.444 }, 00:24:22.444 { 00:24:22.444 "dma_device_id": "system", 00:24:22.444 "dma_device_type": 1 00:24:22.444 }, 00:24:22.444 { 00:24:22.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.444 "dma_device_type": 2 00:24:22.444 }, 00:24:22.444 { 00:24:22.444 "dma_device_id": "system", 00:24:22.444 "dma_device_type": 1 00:24:22.444 }, 00:24:22.444 { 00:24:22.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.444 "dma_device_type": 2 00:24:22.444 } 00:24:22.444 ], 00:24:22.444 "driver_specific": { 00:24:22.444 "raid": { 00:24:22.444 "uuid": "6a6ea8e3-7b7d-420e-b499-c07702b63b10", 00:24:22.444 "strip_size_kb": 64, 00:24:22.444 "state": "online", 00:24:22.444 "raid_level": "concat", 00:24:22.444 "superblock": true, 00:24:22.444 "num_base_bdevs": 4, 00:24:22.444 "num_base_bdevs_discovered": 4, 00:24:22.444 "num_base_bdevs_operational": 4, 00:24:22.444 "base_bdevs_list": [ 00:24:22.444 { 00:24:22.444 "name": "pt1", 00:24:22.444 "uuid": "778f0635-62dc-5b45-902a-d025bd0e68c2", 00:24:22.444 "is_configured": true, 00:24:22.444 "data_offset": 2048, 00:24:22.444 "data_size": 63488 00:24:22.444 }, 00:24:22.444 { 00:24:22.444 "name": "pt2", 00:24:22.444 "uuid": "b3089c9d-9705-565e-ad30-6344bfbdd8f6", 00:24:22.444 "is_configured": true, 00:24:22.444 "data_offset": 2048, 00:24:22.444 "data_size": 63488 00:24:22.444 }, 
00:24:22.444 { 00:24:22.444 "name": "pt3", 00:24:22.444 "uuid": "74a1d042-ae7b-5f22-b1e3-96c45dc30f44", 00:24:22.444 "is_configured": true, 00:24:22.444 "data_offset": 2048, 00:24:22.444 "data_size": 63488 00:24:22.444 }, 00:24:22.444 { 00:24:22.444 "name": "pt4", 00:24:22.444 "uuid": "4745ebca-f866-57a8-a8c9-cb34bbf28f4c", 00:24:22.444 "is_configured": true, 00:24:22.444 "data_offset": 2048, 00:24:22.444 "data_size": 63488 00:24:22.444 } 00:24:22.444 ] 00:24:22.444 } 00:24:22.444 } 00:24:22.444 }' 00:24:22.444 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:22.444 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:22.444 pt2 00:24:22.444 pt3 00:24:22.444 pt4' 00:24:22.444 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:22.444 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:22.444 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:22.701 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:22.701 "name": "pt1", 00:24:22.701 "aliases": [ 00:24:22.701 "778f0635-62dc-5b45-902a-d025bd0e68c2" 00:24:22.701 ], 00:24:22.701 "product_name": "passthru", 00:24:22.701 "block_size": 512, 00:24:22.701 "num_blocks": 65536, 00:24:22.701 "uuid": "778f0635-62dc-5b45-902a-d025bd0e68c2", 00:24:22.701 "assigned_rate_limits": { 00:24:22.701 "rw_ios_per_sec": 0, 00:24:22.701 "rw_mbytes_per_sec": 0, 00:24:22.701 "r_mbytes_per_sec": 0, 00:24:22.701 "w_mbytes_per_sec": 0 00:24:22.701 }, 00:24:22.701 "claimed": true, 00:24:22.701 "claim_type": "exclusive_write", 00:24:22.701 "zoned": false, 00:24:22.701 "supported_io_types": { 00:24:22.701 "read": true, 00:24:22.701 "write": true, 00:24:22.701 "unmap": true, 00:24:22.701 "write_zeroes": true, 00:24:22.701 "flush": true, 00:24:22.701 "reset": true, 00:24:22.701 "compare": false, 00:24:22.701 "compare_and_write": false, 00:24:22.701 "abort": true, 00:24:22.701 "nvme_admin": false, 00:24:22.701 "nvme_io": false 00:24:22.701 }, 00:24:22.701 "memory_domains": [ 00:24:22.701 { 00:24:22.701 "dma_device_id": "system", 00:24:22.701 "dma_device_type": 1 00:24:22.701 }, 00:24:22.701 { 00:24:22.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.701 "dma_device_type": 2 00:24:22.701 } 00:24:22.701 ], 00:24:22.701 "driver_specific": { 00:24:22.701 "passthru": { 00:24:22.701 "name": "pt1", 00:24:22.701 "base_bdev_name": "malloc1" 00:24:22.701 } 00:24:22.701 } 00:24:22.701 }' 00:24:22.701 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:22.701 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:22.701 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:22.701 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:22.701 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:22.701 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:22.701 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:22.701 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:22.701 07:34:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:22.701 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:22.958 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:22.958 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:22.958 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:22.958 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:22.958 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:23.216 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:23.216 "name": "pt2", 00:24:23.216 "aliases": [ 00:24:23.216 "b3089c9d-9705-565e-ad30-6344bfbdd8f6" 00:24:23.216 ], 00:24:23.216 "product_name": "passthru", 00:24:23.216 "block_size": 512, 00:24:23.216 "num_blocks": 65536, 00:24:23.216 "uuid": "b3089c9d-9705-565e-ad30-6344bfbdd8f6", 00:24:23.216 "assigned_rate_limits": { 00:24:23.216 "rw_ios_per_sec": 0, 00:24:23.216 "rw_mbytes_per_sec": 0, 00:24:23.216 "r_mbytes_per_sec": 0, 00:24:23.216 "w_mbytes_per_sec": 0 00:24:23.216 }, 00:24:23.216 "claimed": true, 00:24:23.216 "claim_type": "exclusive_write", 00:24:23.216 "zoned": false, 00:24:23.216 "supported_io_types": { 00:24:23.216 "read": true, 00:24:23.216 "write": true, 00:24:23.216 "unmap": true, 00:24:23.216 "write_zeroes": true, 00:24:23.216 "flush": true, 00:24:23.216 "reset": true, 00:24:23.216 "compare": false, 00:24:23.216 "compare_and_write": false, 00:24:23.216 "abort": true, 00:24:23.216 "nvme_admin": false, 00:24:23.216 "nvme_io": false 00:24:23.216 }, 00:24:23.216 "memory_domains": [ 00:24:23.216 { 00:24:23.216 "dma_device_id": "system", 00:24:23.216 "dma_device_type": 1 00:24:23.216 }, 00:24:23.216 { 00:24:23.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:23.216 "dma_device_type": 2 00:24:23.216 } 00:24:23.216 ], 00:24:23.216 "driver_specific": { 00:24:23.216 "passthru": { 00:24:23.216 "name": "pt2", 00:24:23.216 "base_bdev_name": "malloc2" 00:24:23.216 } 00:24:23.216 } 00:24:23.216 }' 00:24:23.216 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:23.216 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:23.216 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:23.216 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:23.216 07:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:23.216 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:23.216 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:23.216 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:23.216 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:23.216 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:23.474 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:23.474 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:23.474 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:24:23.474 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:23.474 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:23.474 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:23.474 "name": "pt3", 00:24:23.474 "aliases": [ 00:24:23.474 "74a1d042-ae7b-5f22-b1e3-96c45dc30f44" 00:24:23.474 ], 00:24:23.474 "product_name": "passthru", 00:24:23.474 "block_size": 512, 00:24:23.474 "num_blocks": 65536, 00:24:23.474 "uuid": "74a1d042-ae7b-5f22-b1e3-96c45dc30f44", 00:24:23.474 "assigned_rate_limits": { 00:24:23.474 "rw_ios_per_sec": 0, 00:24:23.474 "rw_mbytes_per_sec": 0, 00:24:23.474 "r_mbytes_per_sec": 0, 00:24:23.474 "w_mbytes_per_sec": 0 00:24:23.474 }, 00:24:23.474 "claimed": true, 00:24:23.474 "claim_type": "exclusive_write", 00:24:23.474 "zoned": false, 00:24:23.474 "supported_io_types": { 00:24:23.474 "read": true, 00:24:23.474 "write": true, 00:24:23.474 "unmap": true, 00:24:23.474 "write_zeroes": true, 00:24:23.474 "flush": true, 00:24:23.474 "reset": true, 00:24:23.474 "compare": false, 00:24:23.474 "compare_and_write": false, 00:24:23.474 "abort": true, 00:24:23.474 "nvme_admin": false, 00:24:23.474 "nvme_io": false 00:24:23.474 }, 00:24:23.474 "memory_domains": [ 00:24:23.474 { 00:24:23.474 "dma_device_id": "system", 00:24:23.474 "dma_device_type": 1 00:24:23.474 }, 00:24:23.474 { 00:24:23.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:23.474 "dma_device_type": 2 00:24:23.474 } 00:24:23.474 ], 00:24:23.474 "driver_specific": { 00:24:23.474 "passthru": { 00:24:23.474 "name": "pt3", 00:24:23.474 "base_bdev_name": "malloc3" 00:24:23.474 } 00:24:23.474 } 00:24:23.474 }' 00:24:23.474 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:23.759 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:23.759 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:23.759 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:23.759 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:23.759 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:23.759 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:23.759 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:23.759 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:23.759 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:23.759 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:23.759 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:23.759 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:24.017 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:24.017 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:24.275 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:24.275 "name": "pt4", 00:24:24.275 "aliases": [ 
00:24:24.275 "4745ebca-f866-57a8-a8c9-cb34bbf28f4c" 00:24:24.275 ], 00:24:24.275 "product_name": "passthru", 00:24:24.275 "block_size": 512, 00:24:24.275 "num_blocks": 65536, 00:24:24.275 "uuid": "4745ebca-f866-57a8-a8c9-cb34bbf28f4c", 00:24:24.275 "assigned_rate_limits": { 00:24:24.275 "rw_ios_per_sec": 0, 00:24:24.275 "rw_mbytes_per_sec": 0, 00:24:24.275 "r_mbytes_per_sec": 0, 00:24:24.275 "w_mbytes_per_sec": 0 00:24:24.275 }, 00:24:24.275 "claimed": true, 00:24:24.275 "claim_type": "exclusive_write", 00:24:24.275 "zoned": false, 00:24:24.275 "supported_io_types": { 00:24:24.275 "read": true, 00:24:24.275 "write": true, 00:24:24.275 "unmap": true, 00:24:24.275 "write_zeroes": true, 00:24:24.275 "flush": true, 00:24:24.275 "reset": true, 00:24:24.275 "compare": false, 00:24:24.275 "compare_and_write": false, 00:24:24.275 "abort": true, 00:24:24.275 "nvme_admin": false, 00:24:24.275 "nvme_io": false 00:24:24.275 }, 00:24:24.275 "memory_domains": [ 00:24:24.275 { 00:24:24.275 "dma_device_id": "system", 00:24:24.275 "dma_device_type": 1 00:24:24.275 }, 00:24:24.275 { 00:24:24.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:24.275 "dma_device_type": 2 00:24:24.275 } 00:24:24.275 ], 00:24:24.275 "driver_specific": { 00:24:24.275 "passthru": { 00:24:24.275 "name": "pt4", 00:24:24.275 "base_bdev_name": "malloc4" 00:24:24.275 } 00:24:24.275 } 00:24:24.275 }' 00:24:24.275 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:24.275 07:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:24.275 07:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:24.275 07:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:24.275 07:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:24.275 07:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:24.275 07:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:24.275 07:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:24.533 07:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:24.533 07:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:24.533 07:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:24.533 07:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:24.533 07:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:24.533 07:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:24:24.791 [2024-07-12 07:34:58.536638] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:24.791 07:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=6a6ea8e3-7b7d-420e-b499-c07702b63b10 00:24:24.791 07:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 6a6ea8e3-7b7d-420e-b499-c07702b63b10 ']' 00:24:24.791 07:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:25.050 [2024-07-12 07:34:58.812470] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:25.050 
[2024-07-12 07:34:58.812764] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:25.050 [2024-07-12 07:34:58.813067] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:25.050 [2024-07-12 07:34:58.813264] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:25.050 [2024-07-12 07:34:58.813357] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:24:25.050 07:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.050 07:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:24:25.308 07:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:24:25.308 07:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:24:25.308 07:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:25.308 07:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:25.566 07:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:25.566 07:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:25.824 07:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:25.824 07:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:25.824 07:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:25.824 07:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:26.082 07:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:26.082 07:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:26.341 07:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:24:26.341 07:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:26.341 07:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:24:26.341 07:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:26.341 07:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:26.341 07:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:26.341 07:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- 
# type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:26.341 07:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:26.341 07:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:26.341 07:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:26.341 07:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:26.341 07:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:26.341 07:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:26.600 [2024-07-12 07:35:00.360720] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:26.600 [2024-07-12 07:35:00.363363] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:26.600 [2024-07-12 07:35:00.363527] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:26.600 [2024-07-12 07:35:00.363592] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:24:26.600 [2024-07-12 07:35:00.363725] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:26.600 [2024-07-12 07:35:00.363853] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:26.600 [2024-07-12 07:35:00.364042] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:24:26.600 [2024-07-12 07:35:00.364203] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:24:26.600 [2024-07-12 07:35:00.364257] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:26.600 [2024-07-12 07:35:00.364334] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:24:26.600 request: 00:24:26.600 { 00:24:26.600 "name": "raid_bdev1", 00:24:26.600 "raid_level": "concat", 00:24:26.600 "base_bdevs": [ 00:24:26.600 "malloc1", 00:24:26.600 "malloc2", 00:24:26.600 "malloc3", 00:24:26.600 "malloc4" 00:24:26.600 ], 00:24:26.600 "superblock": false, 00:24:26.600 "strip_size_kb": 64, 00:24:26.600 "method": "bdev_raid_create", 00:24:26.600 "req_id": 1 00:24:26.600 } 00:24:26.600 Got JSON-RPC error response 00:24:26.600 response: 00:24:26.600 { 00:24:26.600 "code": -17, 00:24:26.600 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:26.600 } 00:24:26.600 07:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:24:26.600 07:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:26.600 07:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:26.600 07:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:26.600 07:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:24:26.600 07:35:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.859 07:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:24:26.859 07:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:24:26.859 07:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:27.117 [2024-07-12 07:35:00.920787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:27.117 [2024-07-12 07:35:00.921097] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:27.117 [2024-07-12 07:35:00.921170] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:27.117 [2024-07-12 07:35:00.921308] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:27.117 [2024-07-12 07:35:00.924089] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:27.117 [2024-07-12 07:35:00.924278] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:27.117 [2024-07-12 07:35:00.924469] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:27.117 [2024-07-12 07:35:00.924631] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:27.117 pt1 00:24:27.117 07:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:24:27.117 07:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:27.117 07:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:27.117 07:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:27.117 07:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:27.117 07:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:27.117 07:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:27.117 07:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:27.117 07:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:27.117 07:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:27.117 07:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.117 07:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:27.376 07:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:27.376 "name": "raid_bdev1", 00:24:27.376 "uuid": "6a6ea8e3-7b7d-420e-b499-c07702b63b10", 00:24:27.376 "strip_size_kb": 64, 00:24:27.376 "state": "configuring", 00:24:27.376 "raid_level": "concat", 00:24:27.376 "superblock": true, 00:24:27.376 "num_base_bdevs": 4, 00:24:27.376 "num_base_bdevs_discovered": 1, 00:24:27.376 "num_base_bdevs_operational": 4, 00:24:27.376 "base_bdevs_list": [ 00:24:27.376 { 00:24:27.376 "name": "pt1", 00:24:27.376 "uuid": 
"778f0635-62dc-5b45-902a-d025bd0e68c2", 00:24:27.376 "is_configured": true, 00:24:27.376 "data_offset": 2048, 00:24:27.376 "data_size": 63488 00:24:27.376 }, 00:24:27.376 { 00:24:27.376 "name": null, 00:24:27.376 "uuid": "b3089c9d-9705-565e-ad30-6344bfbdd8f6", 00:24:27.376 "is_configured": false, 00:24:27.376 "data_offset": 2048, 00:24:27.376 "data_size": 63488 00:24:27.376 }, 00:24:27.376 { 00:24:27.376 "name": null, 00:24:27.376 "uuid": "74a1d042-ae7b-5f22-b1e3-96c45dc30f44", 00:24:27.376 "is_configured": false, 00:24:27.376 "data_offset": 2048, 00:24:27.376 "data_size": 63488 00:24:27.376 }, 00:24:27.376 { 00:24:27.376 "name": null, 00:24:27.376 "uuid": "4745ebca-f866-57a8-a8c9-cb34bbf28f4c", 00:24:27.376 "is_configured": false, 00:24:27.376 "data_offset": 2048, 00:24:27.376 "data_size": 63488 00:24:27.376 } 00:24:27.376 ] 00:24:27.376 }' 00:24:27.376 07:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:27.376 07:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:27.942 07:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:24:27.942 07:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:28.201 [2024-07-12 07:35:02.021170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:28.201 [2024-07-12 07:35:02.021506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:28.201 [2024-07-12 07:35:02.021589] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:28.201 [2024-07-12 07:35:02.021690] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:28.201 [2024-07-12 07:35:02.022213] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:28.201 [2024-07-12 07:35:02.022374] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:28.201 [2024-07-12 07:35:02.022561] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:28.201 [2024-07-12 07:35:02.022614] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:28.201 pt2 00:24:28.201 07:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:28.459 [2024-07-12 07:35:02.213303] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:28.459 07:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:24:28.459 07:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:28.459 07:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:28.459 07:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:28.459 07:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:28.459 07:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:28.459 07:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:28.459 07:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:24:28.459 07:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:28.459 07:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:28.459 07:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.459 07:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:28.717 07:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:28.717 "name": "raid_bdev1", 00:24:28.717 "uuid": "6a6ea8e3-7b7d-420e-b499-c07702b63b10", 00:24:28.717 "strip_size_kb": 64, 00:24:28.717 "state": "configuring", 00:24:28.717 "raid_level": "concat", 00:24:28.717 "superblock": true, 00:24:28.717 "num_base_bdevs": 4, 00:24:28.717 "num_base_bdevs_discovered": 1, 00:24:28.717 "num_base_bdevs_operational": 4, 00:24:28.717 "base_bdevs_list": [ 00:24:28.717 { 00:24:28.717 "name": "pt1", 00:24:28.717 "uuid": "778f0635-62dc-5b45-902a-d025bd0e68c2", 00:24:28.717 "is_configured": true, 00:24:28.717 "data_offset": 2048, 00:24:28.717 "data_size": 63488 00:24:28.717 }, 00:24:28.717 { 00:24:28.717 "name": null, 00:24:28.718 "uuid": "b3089c9d-9705-565e-ad30-6344bfbdd8f6", 00:24:28.718 "is_configured": false, 00:24:28.718 "data_offset": 2048, 00:24:28.718 "data_size": 63488 00:24:28.718 }, 00:24:28.718 { 00:24:28.718 "name": null, 00:24:28.718 "uuid": "74a1d042-ae7b-5f22-b1e3-96c45dc30f44", 00:24:28.718 "is_configured": false, 00:24:28.718 "data_offset": 2048, 00:24:28.718 "data_size": 63488 00:24:28.718 }, 00:24:28.718 { 00:24:28.718 "name": null, 00:24:28.718 "uuid": "4745ebca-f866-57a8-a8c9-cb34bbf28f4c", 00:24:28.718 "is_configured": false, 00:24:28.718 "data_offset": 2048, 00:24:28.718 "data_size": 63488 00:24:28.718 } 00:24:28.718 ] 00:24:28.718 }' 00:24:28.718 07:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:28.718 07:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:29.287 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:24:29.287 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:29.287 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:29.543 [2024-07-12 07:35:03.289437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:29.543 [2024-07-12 07:35:03.289699] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:29.543 [2024-07-12 07:35:03.289779] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:24:29.543 [2024-07-12 07:35:03.289876] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:29.543 [2024-07-12 07:35:03.290425] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:29.543 [2024-07-12 07:35:03.290601] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:29.543 [2024-07-12 07:35:03.290784] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:29.543 [2024-07-12 07:35:03.290904] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:29.543 pt2 
00:24:29.543 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:29.543 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:29.543 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:29.800 [2024-07-12 07:35:03.561474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:29.800 [2024-07-12 07:35:03.561785] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:29.800 [2024-07-12 07:35:03.561856] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:24:29.800 [2024-07-12 07:35:03.561950] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:29.800 [2024-07-12 07:35:03.562463] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:29.800 [2024-07-12 07:35:03.562645] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:29.800 [2024-07-12 07:35:03.562833] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:24:29.800 [2024-07-12 07:35:03.562973] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:29.800 pt3 00:24:29.800 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:29.800 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:29.800 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:30.057 [2024-07-12 07:35:03.749536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:30.057 [2024-07-12 07:35:03.749826] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:30.057 [2024-07-12 07:35:03.749899] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:30.057 [2024-07-12 07:35:03.749996] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:30.057 [2024-07-12 07:35:03.750484] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:30.057 [2024-07-12 07:35:03.750643] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:30.057 [2024-07-12 07:35:03.750825] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:24:30.057 [2024-07-12 07:35:03.750927] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:30.057 [2024-07-12 07:35:03.751112] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:24:30.057 [2024-07-12 07:35:03.751215] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:30.057 [2024-07-12 07:35:03.751350] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:24:30.057 [2024-07-12 07:35:03.751869] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:24:30.057 [2024-07-12 07:35:03.751965] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:24:30.057 [2024-07-12 07:35:03.752142] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:30.057 pt4 00:24:30.057 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:30.057 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:30.057 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:30.057 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:30.057 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:30.057 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:30.057 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:30.057 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:30.057 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:30.057 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:30.057 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:30.057 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:30.057 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:30.057 07:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.326 07:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:30.326 "name": "raid_bdev1", 00:24:30.326 "uuid": "6a6ea8e3-7b7d-420e-b499-c07702b63b10", 00:24:30.326 "strip_size_kb": 64, 00:24:30.326 "state": "online", 00:24:30.326 "raid_level": "concat", 00:24:30.326 "superblock": true, 00:24:30.326 "num_base_bdevs": 4, 00:24:30.326 "num_base_bdevs_discovered": 4, 00:24:30.326 "num_base_bdevs_operational": 4, 00:24:30.326 "base_bdevs_list": [ 00:24:30.326 { 00:24:30.326 "name": "pt1", 00:24:30.326 "uuid": "778f0635-62dc-5b45-902a-d025bd0e68c2", 00:24:30.326 "is_configured": true, 00:24:30.326 "data_offset": 2048, 00:24:30.326 "data_size": 63488 00:24:30.326 }, 00:24:30.326 { 00:24:30.326 "name": "pt2", 00:24:30.326 "uuid": "b3089c9d-9705-565e-ad30-6344bfbdd8f6", 00:24:30.326 "is_configured": true, 00:24:30.326 "data_offset": 2048, 00:24:30.326 "data_size": 63488 00:24:30.326 }, 00:24:30.326 { 00:24:30.326 "name": "pt3", 00:24:30.326 "uuid": "74a1d042-ae7b-5f22-b1e3-96c45dc30f44", 00:24:30.326 "is_configured": true, 00:24:30.326 "data_offset": 2048, 00:24:30.326 "data_size": 63488 00:24:30.326 }, 00:24:30.326 { 00:24:30.326 "name": "pt4", 00:24:30.326 "uuid": "4745ebca-f866-57a8-a8c9-cb34bbf28f4c", 00:24:30.326 "is_configured": true, 00:24:30.326 "data_offset": 2048, 00:24:30.326 "data_size": 63488 00:24:30.326 } 00:24:30.326 ] 00:24:30.326 }' 00:24:30.326 07:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:30.326 07:35:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:30.960 07:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:24:30.960 07:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:30.960 07:35:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:30.960 07:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:30.960 07:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:30.960 07:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:30.960 07:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:30.960 07:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:30.960 [2024-07-12 07:35:04.781995] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:30.960 07:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:30.960 "name": "raid_bdev1", 00:24:30.960 "aliases": [ 00:24:30.960 "6a6ea8e3-7b7d-420e-b499-c07702b63b10" 00:24:30.960 ], 00:24:30.960 "product_name": "Raid Volume", 00:24:30.960 "block_size": 512, 00:24:30.960 "num_blocks": 253952, 00:24:30.961 "uuid": "6a6ea8e3-7b7d-420e-b499-c07702b63b10", 00:24:30.961 "assigned_rate_limits": { 00:24:30.961 "rw_ios_per_sec": 0, 00:24:30.961 "rw_mbytes_per_sec": 0, 00:24:30.961 "r_mbytes_per_sec": 0, 00:24:30.961 "w_mbytes_per_sec": 0 00:24:30.961 }, 00:24:30.961 "claimed": false, 00:24:30.961 "zoned": false, 00:24:30.961 "supported_io_types": { 00:24:30.961 "read": true, 00:24:30.961 "write": true, 00:24:30.961 "unmap": true, 00:24:30.961 "write_zeroes": true, 00:24:30.961 "flush": true, 00:24:30.961 "reset": true, 00:24:30.961 "compare": false, 00:24:30.961 "compare_and_write": false, 00:24:30.961 "abort": false, 00:24:30.961 "nvme_admin": false, 00:24:30.961 "nvme_io": false 00:24:30.961 }, 00:24:30.961 "memory_domains": [ 00:24:30.961 { 00:24:30.961 "dma_device_id": "system", 00:24:30.961 "dma_device_type": 1 00:24:30.961 }, 00:24:30.961 { 00:24:30.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.961 "dma_device_type": 2 00:24:30.961 }, 00:24:30.961 { 00:24:30.961 "dma_device_id": "system", 00:24:30.961 "dma_device_type": 1 00:24:30.961 }, 00:24:30.961 { 00:24:30.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.961 "dma_device_type": 2 00:24:30.961 }, 00:24:30.961 { 00:24:30.961 "dma_device_id": "system", 00:24:30.961 "dma_device_type": 1 00:24:30.961 }, 00:24:30.961 { 00:24:30.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.961 "dma_device_type": 2 00:24:30.961 }, 00:24:30.961 { 00:24:30.961 "dma_device_id": "system", 00:24:30.961 "dma_device_type": 1 00:24:30.961 }, 00:24:30.961 { 00:24:30.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.961 "dma_device_type": 2 00:24:30.961 } 00:24:30.961 ], 00:24:30.961 "driver_specific": { 00:24:30.961 "raid": { 00:24:30.961 "uuid": "6a6ea8e3-7b7d-420e-b499-c07702b63b10", 00:24:30.961 "strip_size_kb": 64, 00:24:30.961 "state": "online", 00:24:30.961 "raid_level": "concat", 00:24:30.961 "superblock": true, 00:24:30.961 "num_base_bdevs": 4, 00:24:30.961 "num_base_bdevs_discovered": 4, 00:24:30.961 "num_base_bdevs_operational": 4, 00:24:30.961 "base_bdevs_list": [ 00:24:30.961 { 00:24:30.961 "name": "pt1", 00:24:30.961 "uuid": "778f0635-62dc-5b45-902a-d025bd0e68c2", 00:24:30.961 "is_configured": true, 00:24:30.961 "data_offset": 2048, 00:24:30.961 "data_size": 63488 00:24:30.961 }, 00:24:30.961 { 00:24:30.961 "name": "pt2", 00:24:30.961 "uuid": "b3089c9d-9705-565e-ad30-6344bfbdd8f6", 00:24:30.961 "is_configured": true, 
00:24:30.961 "data_offset": 2048, 00:24:30.961 "data_size": 63488 00:24:30.961 }, 00:24:30.961 { 00:24:30.961 "name": "pt3", 00:24:30.961 "uuid": "74a1d042-ae7b-5f22-b1e3-96c45dc30f44", 00:24:30.961 "is_configured": true, 00:24:30.961 "data_offset": 2048, 00:24:30.961 "data_size": 63488 00:24:30.961 }, 00:24:30.961 { 00:24:30.961 "name": "pt4", 00:24:30.961 "uuid": "4745ebca-f866-57a8-a8c9-cb34bbf28f4c", 00:24:30.961 "is_configured": true, 00:24:30.961 "data_offset": 2048, 00:24:30.961 "data_size": 63488 00:24:30.961 } 00:24:30.961 ] 00:24:30.961 } 00:24:30.961 } 00:24:30.961 }' 00:24:30.961 07:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:31.231 07:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:31.231 pt2 00:24:31.231 pt3 00:24:31.231 pt4' 00:24:31.231 07:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:31.231 07:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:31.231 07:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:31.231 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:31.231 "name": "pt1", 00:24:31.231 "aliases": [ 00:24:31.231 "778f0635-62dc-5b45-902a-d025bd0e68c2" 00:24:31.231 ], 00:24:31.231 "product_name": "passthru", 00:24:31.231 "block_size": 512, 00:24:31.231 "num_blocks": 65536, 00:24:31.231 "uuid": "778f0635-62dc-5b45-902a-d025bd0e68c2", 00:24:31.231 "assigned_rate_limits": { 00:24:31.231 "rw_ios_per_sec": 0, 00:24:31.231 "rw_mbytes_per_sec": 0, 00:24:31.231 "r_mbytes_per_sec": 0, 00:24:31.231 "w_mbytes_per_sec": 0 00:24:31.231 }, 00:24:31.231 "claimed": true, 00:24:31.231 "claim_type": "exclusive_write", 00:24:31.231 "zoned": false, 00:24:31.231 "supported_io_types": { 00:24:31.231 "read": true, 00:24:31.231 "write": true, 00:24:31.231 "unmap": true, 00:24:31.231 "write_zeroes": true, 00:24:31.231 "flush": true, 00:24:31.231 "reset": true, 00:24:31.231 "compare": false, 00:24:31.231 "compare_and_write": false, 00:24:31.231 "abort": true, 00:24:31.231 "nvme_admin": false, 00:24:31.231 "nvme_io": false 00:24:31.231 }, 00:24:31.231 "memory_domains": [ 00:24:31.231 { 00:24:31.231 "dma_device_id": "system", 00:24:31.231 "dma_device_type": 1 00:24:31.231 }, 00:24:31.231 { 00:24:31.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.231 "dma_device_type": 2 00:24:31.231 } 00:24:31.231 ], 00:24:31.231 "driver_specific": { 00:24:31.231 "passthru": { 00:24:31.231 "name": "pt1", 00:24:31.231 "base_bdev_name": "malloc1" 00:24:31.231 } 00:24:31.231 } 00:24:31.231 }' 00:24:31.231 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:31.490 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:31.490 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:31.490 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:31.490 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:31.490 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:31.490 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:31.490 07:35:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:31.748 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:31.748 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.748 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.748 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:31.748 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:31.748 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:31.748 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:32.007 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:32.007 "name": "pt2", 00:24:32.007 "aliases": [ 00:24:32.007 "b3089c9d-9705-565e-ad30-6344bfbdd8f6" 00:24:32.007 ], 00:24:32.007 "product_name": "passthru", 00:24:32.007 "block_size": 512, 00:24:32.007 "num_blocks": 65536, 00:24:32.007 "uuid": "b3089c9d-9705-565e-ad30-6344bfbdd8f6", 00:24:32.007 "assigned_rate_limits": { 00:24:32.007 "rw_ios_per_sec": 0, 00:24:32.007 "rw_mbytes_per_sec": 0, 00:24:32.007 "r_mbytes_per_sec": 0, 00:24:32.007 "w_mbytes_per_sec": 0 00:24:32.007 }, 00:24:32.007 "claimed": true, 00:24:32.007 "claim_type": "exclusive_write", 00:24:32.007 "zoned": false, 00:24:32.007 "supported_io_types": { 00:24:32.007 "read": true, 00:24:32.007 "write": true, 00:24:32.007 "unmap": true, 00:24:32.007 "write_zeroes": true, 00:24:32.007 "flush": true, 00:24:32.007 "reset": true, 00:24:32.007 "compare": false, 00:24:32.007 "compare_and_write": false, 00:24:32.007 "abort": true, 00:24:32.007 "nvme_admin": false, 00:24:32.007 "nvme_io": false 00:24:32.007 }, 00:24:32.007 "memory_domains": [ 00:24:32.007 { 00:24:32.007 "dma_device_id": "system", 00:24:32.007 "dma_device_type": 1 00:24:32.007 }, 00:24:32.007 { 00:24:32.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:32.007 "dma_device_type": 2 00:24:32.007 } 00:24:32.007 ], 00:24:32.007 "driver_specific": { 00:24:32.007 "passthru": { 00:24:32.007 "name": "pt2", 00:24:32.007 "base_bdev_name": "malloc2" 00:24:32.007 } 00:24:32.007 } 00:24:32.007 }' 00:24:32.007 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:32.007 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:32.007 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:32.007 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:32.266 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:32.266 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:32.266 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:32.267 07:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:32.267 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:32.267 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:32.267 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:32.267 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # 
[[ null == null ]] 00:24:32.267 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:32.267 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:32.267 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:32.835 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:32.836 "name": "pt3", 00:24:32.836 "aliases": [ 00:24:32.836 "74a1d042-ae7b-5f22-b1e3-96c45dc30f44" 00:24:32.836 ], 00:24:32.836 "product_name": "passthru", 00:24:32.836 "block_size": 512, 00:24:32.836 "num_blocks": 65536, 00:24:32.836 "uuid": "74a1d042-ae7b-5f22-b1e3-96c45dc30f44", 00:24:32.836 "assigned_rate_limits": { 00:24:32.836 "rw_ios_per_sec": 0, 00:24:32.836 "rw_mbytes_per_sec": 0, 00:24:32.836 "r_mbytes_per_sec": 0, 00:24:32.836 "w_mbytes_per_sec": 0 00:24:32.836 }, 00:24:32.836 "claimed": true, 00:24:32.836 "claim_type": "exclusive_write", 00:24:32.836 "zoned": false, 00:24:32.836 "supported_io_types": { 00:24:32.836 "read": true, 00:24:32.836 "write": true, 00:24:32.836 "unmap": true, 00:24:32.836 "write_zeroes": true, 00:24:32.836 "flush": true, 00:24:32.836 "reset": true, 00:24:32.836 "compare": false, 00:24:32.836 "compare_and_write": false, 00:24:32.836 "abort": true, 00:24:32.836 "nvme_admin": false, 00:24:32.836 "nvme_io": false 00:24:32.836 }, 00:24:32.836 "memory_domains": [ 00:24:32.836 { 00:24:32.836 "dma_device_id": "system", 00:24:32.836 "dma_device_type": 1 00:24:32.836 }, 00:24:32.836 { 00:24:32.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:32.836 "dma_device_type": 2 00:24:32.836 } 00:24:32.836 ], 00:24:32.836 "driver_specific": { 00:24:32.836 "passthru": { 00:24:32.836 "name": "pt3", 00:24:32.836 "base_bdev_name": "malloc3" 00:24:32.836 } 00:24:32.836 } 00:24:32.836 }' 00:24:32.836 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:32.836 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:32.836 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:32.836 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:32.836 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:32.836 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:32.836 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:32.836 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:33.094 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:33.094 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:33.094 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:33.094 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:33.094 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:33.095 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:33.095 07:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:33.354 07:35:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:33.354 "name": "pt4", 00:24:33.354 "aliases": [ 00:24:33.354 "4745ebca-f866-57a8-a8c9-cb34bbf28f4c" 00:24:33.354 ], 00:24:33.354 "product_name": "passthru", 00:24:33.354 "block_size": 512, 00:24:33.354 "num_blocks": 65536, 00:24:33.354 "uuid": "4745ebca-f866-57a8-a8c9-cb34bbf28f4c", 00:24:33.354 "assigned_rate_limits": { 00:24:33.354 "rw_ios_per_sec": 0, 00:24:33.354 "rw_mbytes_per_sec": 0, 00:24:33.354 "r_mbytes_per_sec": 0, 00:24:33.354 "w_mbytes_per_sec": 0 00:24:33.354 }, 00:24:33.354 "claimed": true, 00:24:33.354 "claim_type": "exclusive_write", 00:24:33.354 "zoned": false, 00:24:33.354 "supported_io_types": { 00:24:33.354 "read": true, 00:24:33.354 "write": true, 00:24:33.354 "unmap": true, 00:24:33.354 "write_zeroes": true, 00:24:33.354 "flush": true, 00:24:33.354 "reset": true, 00:24:33.354 "compare": false, 00:24:33.354 "compare_and_write": false, 00:24:33.354 "abort": true, 00:24:33.354 "nvme_admin": false, 00:24:33.354 "nvme_io": false 00:24:33.354 }, 00:24:33.354 "memory_domains": [ 00:24:33.354 { 00:24:33.354 "dma_device_id": "system", 00:24:33.354 "dma_device_type": 1 00:24:33.354 }, 00:24:33.354 { 00:24:33.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.354 "dma_device_type": 2 00:24:33.354 } 00:24:33.354 ], 00:24:33.354 "driver_specific": { 00:24:33.354 "passthru": { 00:24:33.354 "name": "pt4", 00:24:33.354 "base_bdev_name": "malloc4" 00:24:33.354 } 00:24:33.354 } 00:24:33.354 }' 00:24:33.354 07:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:33.354 07:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:33.354 07:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:33.354 07:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:33.354 07:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:33.613 07:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:33.613 07:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:33.613 07:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:33.613 07:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:33.613 07:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:33.613 07:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:33.613 07:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:33.613 07:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:33.613 07:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:24:33.872 [2024-07-12 07:35:07.678314] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:33.872 07:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 6a6ea8e3-7b7d-420e-b499-c07702b63b10 '!=' 6a6ea8e3-7b7d-420e-b499-c07702b63b10 ']' 00:24:33.872 07:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:24:33.872 07:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:33.872 07:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:33.872 07:35:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 149450 00:24:33.872 07:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 149450 ']' 00:24:33.872 07:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 149450 00:24:33.872 07:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:24:33.872 07:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:33.872 07:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 149450 00:24:33.872 07:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:33.872 07:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:33.872 07:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 149450' 00:24:33.872 killing process with pid 149450 00:24:33.872 07:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 149450 00:24:33.872 07:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 149450 00:24:33.872 [2024-07-12 07:35:07.731296] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:33.872 [2024-07-12 07:35:07.731403] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:33.872 [2024-07-12 07:35:07.731490] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:33.872 [2024-07-12 07:35:07.731501] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:24:34.131 [2024-07-12 07:35:07.812870] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:34.390 07:35:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:24:34.390 00:24:34.390 real 0m16.116s 00:24:34.390 user 0m28.715s 00:24:34.390 sys 0m2.931s 00:24:34.390 07:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:34.390 07:35:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.390 ************************************ 00:24:34.390 END TEST raid_superblock_test 00:24:34.390 ************************************ 00:24:34.390 07:35:08 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:24:34.390 07:35:08 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:24:34.390 07:35:08 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:34.390 07:35:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:34.648 ************************************ 00:24:34.648 START TEST raid_read_error_test 00:24:34.648 ************************************ 00:24:34.648 07:35:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 4 read 00:24:34.648 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:24:34.648 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:24:34.648 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:24:34.648 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:24:34.648 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= 
num_base_bdevs )) 00:24:34.648 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.3YJ46stEIF 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=149980 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 149980 /var/tmp/spdk-raid.sock 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 149980 ']' 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
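raid_io_error_test drives bdevperf against raid_bdev1 while injecting failures into the first base bdev; the malloc -> error -> passthru stacking traced below is what makes that possible. A condensed sketch of the injection flow, using only RPCs that appear verbatim in this run (socket path and bdev names as above):

    # Sketch, assuming an SPDK target on /var/tmp/spdk-raid.sock.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # One error-injectable base bdev: malloc -> error bdev -> passthru.
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1_malloc
    "$rpc" -s "$sock" bdev_error_create BaseBdev1_malloc
    "$rpc" -s "$sock" bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1

    # While bdevperf runs against raid_bdev1, fail every read that reaches
    # the injected bdev; concat has no redundancy, so the failures must
    # surface in bdevperf's per-bdev error counters.
    "$rpc" -s "$sock" bdev_error_inject_error EE_BaseBdev1_malloc read failure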
00:24:34.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:34.649 07:35:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.649 [2024-07-12 07:35:08.359849] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:34.649 [2024-07-12 07:35:08.360263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149980 ] 00:24:34.649 [2024-07-12 07:35:08.511412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.908 [2024-07-12 07:35:08.596944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.908 [2024-07-12 07:35:08.682551] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:35.476 07:35:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:35.476 07:35:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:24:35.476 07:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:35.476 07:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:35.736 BaseBdev1_malloc 00:24:35.736 07:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:35.995 true 00:24:35.995 07:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:35.995 [2024-07-12 07:35:09.876827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:35.995 [2024-07-12 07:35:09.877140] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:35.995 [2024-07-12 07:35:09.877234] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:24:35.995 [2024-07-12 07:35:09.877390] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:36.254 [2024-07-12 07:35:09.880454] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:36.254 [2024-07-12 07:35:09.880615] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:36.254 BaseBdev1 00:24:36.254 07:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:36.254 07:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:36.254 BaseBdev2_malloc 00:24:36.254 07:35:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:36.512 true 00:24:36.512 07:35:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:36.771 
[2024-07-12 07:35:10.472889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:36.771 [2024-07-12 07:35:10.473168] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:36.771 [2024-07-12 07:35:10.473260] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:24:36.771 [2024-07-12 07:35:10.473428] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:36.771 [2024-07-12 07:35:10.476480] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:36.771 [2024-07-12 07:35:10.476622] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:36.772 BaseBdev2 00:24:36.772 07:35:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:36.772 07:35:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:37.030 BaseBdev3_malloc 00:24:37.030 07:35:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:37.287 true 00:24:37.287 07:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:37.544 [2024-07-12 07:35:11.181364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:37.544 [2024-07-12 07:35:11.181704] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.544 [2024-07-12 07:35:11.181792] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:24:37.544 [2024-07-12 07:35:11.181910] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.544 [2024-07-12 07:35:11.184783] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.544 [2024-07-12 07:35:11.184960] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:37.544 BaseBdev3 00:24:37.544 07:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:37.544 07:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:37.544 BaseBdev4_malloc 00:24:37.544 07:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:24:37.802 true 00:24:37.802 07:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:38.058 [2024-07-12 07:35:11.845164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:38.058 [2024-07-12 07:35:11.845525] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.058 [2024-07-12 07:35:11.845613] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:38.058 [2024-07-12 07:35:11.845747] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.058 
[2024-07-12 07:35:11.848750] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.058 [2024-07-12 07:35:11.848931] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:38.058 BaseBdev4 00:24:38.058 07:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:24:38.315 [2024-07-12 07:35:12.045499] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:38.315 [2024-07-12 07:35:12.048489] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:38.315 [2024-07-12 07:35:12.048725] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:38.315 [2024-07-12 07:35:12.048836] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:38.315 [2024-07-12 07:35:12.049226] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:24:38.315 [2024-07-12 07:35:12.049291] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:38.315 [2024-07-12 07:35:12.049584] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:24:38.315 [2024-07-12 07:35:12.050139] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:24:38.315 [2024-07-12 07:35:12.050257] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:24:38.315 [2024-07-12 07:35:12.050593] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:38.315 07:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:38.315 07:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:38.315 07:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:38.315 07:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:38.315 07:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:38.315 07:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:38.315 07:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:38.315 07:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:38.315 07:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:38.315 07:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:38.315 07:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.315 07:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.572 07:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:38.572 "name": "raid_bdev1", 00:24:38.572 "uuid": "ebb2c554-c65a-44f8-a365-b23e87d4a67d", 00:24:38.572 "strip_size_kb": 64, 00:24:38.572 "state": "online", 00:24:38.572 "raid_level": "concat", 00:24:38.572 "superblock": true, 00:24:38.572 
"num_base_bdevs": 4, 00:24:38.572 "num_base_bdevs_discovered": 4, 00:24:38.572 "num_base_bdevs_operational": 4, 00:24:38.572 "base_bdevs_list": [ 00:24:38.572 { 00:24:38.572 "name": "BaseBdev1", 00:24:38.572 "uuid": "96f646c6-02fb-5606-87dd-7e08c7d997f0", 00:24:38.572 "is_configured": true, 00:24:38.572 "data_offset": 2048, 00:24:38.572 "data_size": 63488 00:24:38.572 }, 00:24:38.572 { 00:24:38.572 "name": "BaseBdev2", 00:24:38.572 "uuid": "751dd127-58df-5582-8894-3aca45855304", 00:24:38.572 "is_configured": true, 00:24:38.572 "data_offset": 2048, 00:24:38.572 "data_size": 63488 00:24:38.572 }, 00:24:38.572 { 00:24:38.572 "name": "BaseBdev3", 00:24:38.572 "uuid": "853adb04-41e5-5ad6-9b2d-ca4487a1e443", 00:24:38.572 "is_configured": true, 00:24:38.572 "data_offset": 2048, 00:24:38.572 "data_size": 63488 00:24:38.572 }, 00:24:38.572 { 00:24:38.572 "name": "BaseBdev4", 00:24:38.572 "uuid": "9a3aeb93-0248-5908-8d15-2e0ed734006a", 00:24:38.572 "is_configured": true, 00:24:38.572 "data_offset": 2048, 00:24:38.572 "data_size": 63488 00:24:38.572 } 00:24:38.572 ] 00:24:38.572 }' 00:24:38.572 07:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:38.572 07:35:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.136 07:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:24:39.136 07:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:39.136 [2024-07-12 07:35:12.978153] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:24:40.068 07:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:24:40.326 07:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:24:40.326 07:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:24:40.326 07:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:24:40.326 07:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:40.326 07:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:40.326 07:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:40.326 07:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:40.326 07:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:40.326 07:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:40.326 07:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:40.326 07:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:40.326 07:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:40.326 07:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:40.326 07:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.326 07:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.584 07:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:40.584 "name": "raid_bdev1", 00:24:40.584 "uuid": "ebb2c554-c65a-44f8-a365-b23e87d4a67d", 00:24:40.584 "strip_size_kb": 64, 00:24:40.584 "state": "online", 00:24:40.584 "raid_level": "concat", 00:24:40.584 "superblock": true, 00:24:40.584 "num_base_bdevs": 4, 00:24:40.584 "num_base_bdevs_discovered": 4, 00:24:40.584 "num_base_bdevs_operational": 4, 00:24:40.584 "base_bdevs_list": [ 00:24:40.584 { 00:24:40.584 "name": "BaseBdev1", 00:24:40.584 "uuid": "96f646c6-02fb-5606-87dd-7e08c7d997f0", 00:24:40.584 "is_configured": true, 00:24:40.584 "data_offset": 2048, 00:24:40.584 "data_size": 63488 00:24:40.584 }, 00:24:40.584 { 00:24:40.584 "name": "BaseBdev2", 00:24:40.584 "uuid": "751dd127-58df-5582-8894-3aca45855304", 00:24:40.584 "is_configured": true, 00:24:40.584 "data_offset": 2048, 00:24:40.584 "data_size": 63488 00:24:40.584 }, 00:24:40.584 { 00:24:40.584 "name": "BaseBdev3", 00:24:40.584 "uuid": "853adb04-41e5-5ad6-9b2d-ca4487a1e443", 00:24:40.584 "is_configured": true, 00:24:40.584 "data_offset": 2048, 00:24:40.584 "data_size": 63488 00:24:40.584 }, 00:24:40.584 { 00:24:40.584 "name": "BaseBdev4", 00:24:40.584 "uuid": "9a3aeb93-0248-5908-8d15-2e0ed734006a", 00:24:40.584 "is_configured": true, 00:24:40.584 "data_offset": 2048, 00:24:40.584 "data_size": 63488 00:24:40.584 } 00:24:40.584 ] 00:24:40.584 }' 00:24:40.584 07:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:40.584 07:35:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.520 07:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:41.520 [2024-07-12 07:35:15.325386] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:41.520 [2024-07-12 07:35:15.325649] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:41.520 [2024-07-12 07:35:15.328320] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:41.520 [2024-07-12 07:35:15.328491] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:41.520 [2024-07-12 07:35:15.328576] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:41.520 [2024-07-12 07:35:15.328662] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:24:41.520 0 00:24:41.520 07:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 149980 00:24:41.520 07:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 149980 ']' 00:24:41.520 07:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 149980 00:24:41.520 07:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:24:41.520 07:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:41.520 07:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 149980 00:24:41.520 07:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:41.520 07:35:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:41.520 07:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 149980' 00:24:41.520 killing process with pid 149980 00:24:41.520 07:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 149980 00:24:41.520 07:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 149980 00:24:41.520 [2024-07-12 07:35:15.382780] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:41.777 [2024-07-12 07:35:15.449140] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:42.034 07:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.3YJ46stEIF 00:24:42.035 07:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:24:42.035 07:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:24:42.035 07:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:24:42.035 07:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:24:42.035 ************************************ 00:24:42.035 END TEST raid_read_error_test 00:24:42.035 ************************************ 00:24:42.035 07:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:42.035 07:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:42.035 07:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:24:42.035 00:24:42.035 real 0m7.595s 00:24:42.035 user 0m11.872s 00:24:42.035 sys 0m1.303s 00:24:42.035 07:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:42.035 07:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.293 07:35:15 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:24:42.293 07:35:15 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:24:42.293 07:35:15 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:42.293 07:35:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:42.293 ************************************ 00:24:42.293 START TEST raid_write_error_test 00:24:42.293 ************************************ 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 4 write 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.cHRPd7bbQi 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=150183 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 150183 /var/tmp/spdk-raid.sock 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 150183 ']' 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:42.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:42.293 07:35:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.293 [2024-07-12 07:35:16.039756] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
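Earlier in the read test, fail_per_s=0.43 was parsed out of the bdevperf log; the test passes only because that value is non-zero, confirming the injected read errors propagated through the concat volume. The same check as a standalone sketch (the log path is simply the mktemp name from that run):

    # Pull the per-second failure column for raid_bdev1 out of the
    # bdevperf log and require it to be non-zero.
    log=/raidtest/tmp.3YJ46stEIF   # mktemp output from this run
    fail_per_s=$(grep -v Job "$log" | grep raid_bdev1 | awk '{print $6}')
    [[ "$fail_per_s" != "0.00" ]] || exit 1

The write variant now starting repeats the entire flow with "write failure" injected in place of "read".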
00:24:42.293 [2024-07-12 07:35:16.040363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150183 ] 00:24:42.551 [2024-07-12 07:35:16.193484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.551 [2024-07-12 07:35:16.279248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.551 [2024-07-12 07:35:16.358451] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:43.116 07:35:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:43.116 07:35:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:24:43.116 07:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:43.116 07:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:43.373 BaseBdev1_malloc 00:24:43.373 07:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:43.631 true 00:24:43.631 07:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:43.889 [2024-07-12 07:35:17.599459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:43.889 [2024-07-12 07:35:17.599774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:43.889 [2024-07-12 07:35:17.599874] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:24:43.889 [2024-07-12 07:35:17.600039] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:43.889 [2024-07-12 07:35:17.603365] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:43.889 [2024-07-12 07:35:17.603548] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:43.889 BaseBdev1 00:24:43.889 07:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:43.889 07:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:44.146 BaseBdev2_malloc 00:24:44.146 07:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:44.403 true 00:24:44.403 07:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:44.661 [2024-07-12 07:35:18.335867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:44.661 [2024-07-12 07:35:18.336196] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:44.661 [2024-07-12 07:35:18.336279] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:24:44.661 [2024-07-12 07:35:18.336422] 
vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.661 [2024-07-12 07:35:18.339387] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.661 [2024-07-12 07:35:18.339554] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:44.661 BaseBdev2 00:24:44.661 07:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:44.661 07:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:44.919 BaseBdev3_malloc 00:24:44.919 07:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:44.919 true 00:24:44.919 07:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:45.177 [2024-07-12 07:35:18.949588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:45.177 [2024-07-12 07:35:18.949873] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:45.177 [2024-07-12 07:35:18.949959] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:24:45.177 [2024-07-12 07:35:18.950125] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:45.177 [2024-07-12 07:35:18.952948] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:45.177 [2024-07-12 07:35:18.953108] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:45.177 BaseBdev3 00:24:45.177 07:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:45.177 07:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:45.434 BaseBdev4_malloc 00:24:45.434 07:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:24:45.692 true 00:24:45.692 07:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:45.950 [2024-07-12 07:35:19.589309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:45.950 [2024-07-12 07:35:19.589650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:45.950 [2024-07-12 07:35:19.589728] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:45.950 [2024-07-12 07:35:19.589883] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:45.950 [2024-07-12 07:35:19.592697] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:45.950 [2024-07-12 07:35:19.592864] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:45.950 BaseBdev4 00:24:45.950 07:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:24:45.950 [2024-07-12 07:35:19.781486] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:45.950 [2024-07-12 07:35:19.784245] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:45.950 [2024-07-12 07:35:19.784445] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:45.950 [2024-07-12 07:35:19.784659] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:45.950 [2024-07-12 07:35:19.785037] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:24:45.950 [2024-07-12 07:35:19.785142] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:45.950 [2024-07-12 07:35:19.785380] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:24:45.950 [2024-07-12 07:35:19.785923] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:24:45.950 [2024-07-12 07:35:19.786021] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:24:45.950 [2024-07-12 07:35:19.786327] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:45.950 07:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:45.950 07:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:45.950 07:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:45.950 07:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:45.950 07:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:45.950 07:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:45.950 07:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:45.950 07:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:45.950 07:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:45.950 07:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:45.950 07:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.950 07:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.208 07:35:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:46.208 "name": "raid_bdev1", 00:24:46.208 "uuid": "b1f518f8-79ed-4776-abfc-2323fc6b5f19", 00:24:46.208 "strip_size_kb": 64, 00:24:46.208 "state": "online", 00:24:46.208 "raid_level": "concat", 00:24:46.208 "superblock": true, 00:24:46.208 "num_base_bdevs": 4, 00:24:46.208 "num_base_bdevs_discovered": 4, 00:24:46.208 "num_base_bdevs_operational": 4, 00:24:46.208 "base_bdevs_list": [ 00:24:46.208 { 00:24:46.208 "name": "BaseBdev1", 00:24:46.208 "uuid": "8e1f5d5a-5714-5abd-bc00-b1b4d71d31bc", 00:24:46.208 "is_configured": true, 00:24:46.208 "data_offset": 2048, 00:24:46.208 "data_size": 63488 00:24:46.208 }, 00:24:46.208 { 
00:24:46.208 "name": "BaseBdev2", 00:24:46.208 "uuid": "e5dc8fbb-ca4a-5ae3-9ffe-7e1864ea7fc9", 00:24:46.208 "is_configured": true, 00:24:46.208 "data_offset": 2048, 00:24:46.208 "data_size": 63488 00:24:46.208 }, 00:24:46.208 { 00:24:46.208 "name": "BaseBdev3", 00:24:46.208 "uuid": "b8cf5010-e387-5dcf-a07c-d8f93d7a0141", 00:24:46.208 "is_configured": true, 00:24:46.208 "data_offset": 2048, 00:24:46.208 "data_size": 63488 00:24:46.208 }, 00:24:46.208 { 00:24:46.208 "name": "BaseBdev4", 00:24:46.208 "uuid": "9727d061-a080-5d3c-a88c-4ab95eaa8dd5", 00:24:46.208 "is_configured": true, 00:24:46.208 "data_offset": 2048, 00:24:46.208 "data_size": 63488 00:24:46.208 } 00:24:46.208 ] 00:24:46.208 }' 00:24:46.208 07:35:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:46.208 07:35:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:46.773 07:35:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:46.773 07:35:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:24:47.031 [2024-07-12 07:35:20.711083] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:24:47.969 07:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:24:48.227 07:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:24:48.227 07:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:24:48.227 07:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:24:48.227 07:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:48.227 07:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:48.227 07:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:48.227 07:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:48.227 07:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:48.227 07:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:48.227 07:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:48.227 07:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:48.227 07:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:48.227 07:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:48.227 07:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.227 07:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.227 07:35:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:48.227 "name": "raid_bdev1", 00:24:48.227 "uuid": "b1f518f8-79ed-4776-abfc-2323fc6b5f19", 00:24:48.227 "strip_size_kb": 64, 00:24:48.227 "state": "online", 00:24:48.227 
"raid_level": "concat", 00:24:48.227 "superblock": true, 00:24:48.227 "num_base_bdevs": 4, 00:24:48.227 "num_base_bdevs_discovered": 4, 00:24:48.227 "num_base_bdevs_operational": 4, 00:24:48.227 "base_bdevs_list": [ 00:24:48.227 { 00:24:48.227 "name": "BaseBdev1", 00:24:48.227 "uuid": "8e1f5d5a-5714-5abd-bc00-b1b4d71d31bc", 00:24:48.227 "is_configured": true, 00:24:48.227 "data_offset": 2048, 00:24:48.227 "data_size": 63488 00:24:48.227 }, 00:24:48.227 { 00:24:48.227 "name": "BaseBdev2", 00:24:48.227 "uuid": "e5dc8fbb-ca4a-5ae3-9ffe-7e1864ea7fc9", 00:24:48.227 "is_configured": true, 00:24:48.227 "data_offset": 2048, 00:24:48.227 "data_size": 63488 00:24:48.227 }, 00:24:48.227 { 00:24:48.227 "name": "BaseBdev3", 00:24:48.227 "uuid": "b8cf5010-e387-5dcf-a07c-d8f93d7a0141", 00:24:48.227 "is_configured": true, 00:24:48.227 "data_offset": 2048, 00:24:48.227 "data_size": 63488 00:24:48.227 }, 00:24:48.227 { 00:24:48.227 "name": "BaseBdev4", 00:24:48.227 "uuid": "9727d061-a080-5d3c-a88c-4ab95eaa8dd5", 00:24:48.227 "is_configured": true, 00:24:48.227 "data_offset": 2048, 00:24:48.227 "data_size": 63488 00:24:48.227 } 00:24:48.227 ] 00:24:48.227 }' 00:24:48.227 07:35:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:48.227 07:35:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:48.791 07:35:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:49.356 [2024-07-12 07:35:22.976931] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:49.356 [2024-07-12 07:35:22.977192] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:49.356 [2024-07-12 07:35:22.979873] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:49.356 [2024-07-12 07:35:22.980044] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:49.356 [2024-07-12 07:35:22.980125] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:49.356 [2024-07-12 07:35:22.980206] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:24:49.356 0 00:24:49.356 07:35:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 150183 00:24:49.356 07:35:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 150183 ']' 00:24:49.356 07:35:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 150183 00:24:49.356 07:35:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:24:49.356 07:35:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:49.356 07:35:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 150183 00:24:49.356 killing process with pid 150183 00:24:49.356 07:35:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:49.356 07:35:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:49.356 07:35:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 150183' 00:24:49.356 07:35:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 150183 00:24:49.356 07:35:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 150183 00:24:49.356 [2024-07-12 07:35:23.034120] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:49.356 [2024-07-12 07:35:23.098363] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:49.922 07:35:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.cHRPd7bbQi 00:24:49.922 07:35:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:24:49.922 07:35:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:24:49.922 07:35:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:24:49.922 07:35:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:24:49.922 07:35:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:49.922 07:35:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:49.922 07:35:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:24:49.922 00:24:49.922 real 0m7.578s 00:24:49.922 user 0m11.889s 00:24:49.922 sys 0m1.319s 00:24:49.922 07:35:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:49.922 07:35:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.922 ************************************ 00:24:49.922 END TEST raid_write_error_test 00:24:49.922 ************************************ 00:24:49.922 07:35:23 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:24:49.922 07:35:23 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:24:49.922 07:35:23 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:24:49.922 07:35:23 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:49.922 07:35:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:49.922 ************************************ 00:24:49.922 START TEST raid_state_function_test 00:24:49.922 ************************************ 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 4 false 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=150382 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 150382' 00:24:49.922 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:49.922 Process raid pid: 150382 00:24:49.923 07:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 150382 /var/tmp/spdk-raid.sock 00:24:49.923 07:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 150382 ']' 00:24:49.923 07:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:49.923 07:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:49.923 07:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:49.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:49.923 07:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:49.923 07:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.923 [2024-07-12 07:35:23.687980] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
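The state-function test that starts here exercises the raid state machine rather than I/O: a raid1 volume created over base bdevs that do not exist yet is held in the "configuring" state with zero discovered members, and each base bdev that later appears is claimed and counted until all four are discovered and the volume transitions to "online". A condensed sketch of that loop, using only RPCs visible in the trace (the $SPDK_DIR variable and the extended jq filter are assumptions):

# Sketch only: the real test re-creates the raid between steps and verifies
# the full JSON; this just shows the create -> discover -> online flow.
rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# No base bdev exists yet, so the raid is registered but left "configuring".
$rpc bdev_raid_create -r raid1 \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

for i in 1 2 3 4; do
    $rpc bdev_malloc_create 32 512 -b "BaseBdev${i}"   # claimed on creation
    $rpc bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid")
               | "\(.state) discovered=\(.num_base_bdevs_discovered)"'
done
# Prints "configuring" with discovered=1..3, then "online" with discovered=4,
# matching the verify_raid_bdev_state checks in the trace that follows.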
00:24:49.923 [2024-07-12 07:35:23.688347] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.181 [2024-07-12 07:35:23.845296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.181 [2024-07-12 07:35:23.924722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.181 [2024-07-12 07:35:24.003166] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:51.126 07:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:51.126 07:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:24:51.126 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:51.126 [2024-07-12 07:35:24.806990] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:51.126 [2024-07-12 07:35:24.807291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:51.126 [2024-07-12 07:35:24.807468] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:51.126 [2024-07-12 07:35:24.807530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:51.126 [2024-07-12 07:35:24.807619] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:51.126 [2024-07-12 07:35:24.807717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:51.126 [2024-07-12 07:35:24.807809] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:51.126 [2024-07-12 07:35:24.807865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:51.126 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:51.126 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:51.126 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:51.126 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:51.126 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:51.126 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:51.126 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:51.126 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:51.126 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:51.126 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:51.126 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:51.126 07:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:24:51.415 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:51.415 "name": "Existed_Raid", 00:24:51.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.415 "strip_size_kb": 0, 00:24:51.415 "state": "configuring", 00:24:51.415 "raid_level": "raid1", 00:24:51.415 "superblock": false, 00:24:51.415 "num_base_bdevs": 4, 00:24:51.415 "num_base_bdevs_discovered": 0, 00:24:51.415 "num_base_bdevs_operational": 4, 00:24:51.415 "base_bdevs_list": [ 00:24:51.415 { 00:24:51.415 "name": "BaseBdev1", 00:24:51.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.415 "is_configured": false, 00:24:51.415 "data_offset": 0, 00:24:51.415 "data_size": 0 00:24:51.415 }, 00:24:51.415 { 00:24:51.415 "name": "BaseBdev2", 00:24:51.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.415 "is_configured": false, 00:24:51.415 "data_offset": 0, 00:24:51.415 "data_size": 0 00:24:51.415 }, 00:24:51.415 { 00:24:51.415 "name": "BaseBdev3", 00:24:51.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.415 "is_configured": false, 00:24:51.415 "data_offset": 0, 00:24:51.415 "data_size": 0 00:24:51.415 }, 00:24:51.415 { 00:24:51.415 "name": "BaseBdev4", 00:24:51.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.415 "is_configured": false, 00:24:51.415 "data_offset": 0, 00:24:51.415 "data_size": 0 00:24:51.415 } 00:24:51.415 ] 00:24:51.415 }' 00:24:51.415 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:51.415 07:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:51.985 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:51.985 [2024-07-12 07:35:25.783016] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:51.985 [2024-07-12 07:35:25.783268] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:24:51.985 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:52.243 [2024-07-12 07:35:25.967046] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:52.243 [2024-07-12 07:35:25.967297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:52.243 [2024-07-12 07:35:25.967429] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:52.243 [2024-07-12 07:35:25.967493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:52.243 [2024-07-12 07:35:25.967575] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:52.243 [2024-07-12 07:35:25.967640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:52.243 [2024-07-12 07:35:25.967668] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:52.243 [2024-07-12 07:35:25.967766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:52.243 07:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:52.502 [2024-07-12 07:35:26.167349] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:52.502 BaseBdev1 00:24:52.502 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:24:52.502 07:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:24:52.502 07:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:52.502 07:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:52.502 07:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:52.502 07:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:52.502 07:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:52.502 07:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:52.761 [ 00:24:52.761 { 00:24:52.761 "name": "BaseBdev1", 00:24:52.761 "aliases": [ 00:24:52.761 "c035c11d-210a-4b1d-b23b-f0d8f117d385" 00:24:52.761 ], 00:24:52.761 "product_name": "Malloc disk", 00:24:52.761 "block_size": 512, 00:24:52.761 "num_blocks": 65536, 00:24:52.761 "uuid": "c035c11d-210a-4b1d-b23b-f0d8f117d385", 00:24:52.761 "assigned_rate_limits": { 00:24:52.761 "rw_ios_per_sec": 0, 00:24:52.761 "rw_mbytes_per_sec": 0, 00:24:52.761 "r_mbytes_per_sec": 0, 00:24:52.761 "w_mbytes_per_sec": 0 00:24:52.761 }, 00:24:52.761 "claimed": true, 00:24:52.761 "claim_type": "exclusive_write", 00:24:52.761 "zoned": false, 00:24:52.761 "supported_io_types": { 00:24:52.761 "read": true, 00:24:52.761 "write": true, 00:24:52.761 "unmap": true, 00:24:52.761 "write_zeroes": true, 00:24:52.761 "flush": true, 00:24:52.761 "reset": true, 00:24:52.761 "compare": false, 00:24:52.761 "compare_and_write": false, 00:24:52.761 "abort": true, 00:24:52.761 "nvme_admin": false, 00:24:52.761 "nvme_io": false 00:24:52.761 }, 00:24:52.761 "memory_domains": [ 00:24:52.761 { 00:24:52.761 "dma_device_id": "system", 00:24:52.761 "dma_device_type": 1 00:24:52.761 }, 00:24:52.761 { 00:24:52.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:52.761 "dma_device_type": 2 00:24:52.761 } 00:24:52.761 ], 00:24:52.761 "driver_specific": {} 00:24:52.761 } 00:24:52.761 ] 00:24:53.020 07:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:53.020 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:53.020 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:53.020 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:53.020 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:53.020 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:53.020 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:53.020 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:24:53.020 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:53.020 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:53.020 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:53.020 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.020 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:53.279 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:53.279 "name": "Existed_Raid", 00:24:53.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.279 "strip_size_kb": 0, 00:24:53.279 "state": "configuring", 00:24:53.279 "raid_level": "raid1", 00:24:53.279 "superblock": false, 00:24:53.279 "num_base_bdevs": 4, 00:24:53.279 "num_base_bdevs_discovered": 1, 00:24:53.279 "num_base_bdevs_operational": 4, 00:24:53.279 "base_bdevs_list": [ 00:24:53.279 { 00:24:53.279 "name": "BaseBdev1", 00:24:53.279 "uuid": "c035c11d-210a-4b1d-b23b-f0d8f117d385", 00:24:53.279 "is_configured": true, 00:24:53.279 "data_offset": 0, 00:24:53.279 "data_size": 65536 00:24:53.279 }, 00:24:53.279 { 00:24:53.279 "name": "BaseBdev2", 00:24:53.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.279 "is_configured": false, 00:24:53.279 "data_offset": 0, 00:24:53.279 "data_size": 0 00:24:53.279 }, 00:24:53.279 { 00:24:53.279 "name": "BaseBdev3", 00:24:53.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.279 "is_configured": false, 00:24:53.279 "data_offset": 0, 00:24:53.279 "data_size": 0 00:24:53.279 }, 00:24:53.279 { 00:24:53.279 "name": "BaseBdev4", 00:24:53.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.279 "is_configured": false, 00:24:53.279 "data_offset": 0, 00:24:53.279 "data_size": 0 00:24:53.279 } 00:24:53.279 ] 00:24:53.279 }' 00:24:53.279 07:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:53.279 07:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.847 07:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:54.106 [2024-07-12 07:35:27.755673] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:54.106 [2024-07-12 07:35:27.755906] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:24:54.106 07:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:54.365 [2024-07-12 07:35:28.103808] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:54.365 [2024-07-12 07:35:28.106430] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:54.365 [2024-07-12 07:35:28.106657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:54.365 [2024-07-12 07:35:28.106751] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:54.365 [2024-07-12 07:35:28.106811] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:54.365 [2024-07-12 07:35:28.106884] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:54.365 [2024-07-12 07:35:28.106935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:54.365 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:24:54.365 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:54.365 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:54.365 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:54.365 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:54.365 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:54.365 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:54.365 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:54.365 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:54.365 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:54.365 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:54.365 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:54.365 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.365 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:54.625 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:54.625 "name": "Existed_Raid", 00:24:54.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.625 "strip_size_kb": 0, 00:24:54.625 "state": "configuring", 00:24:54.625 "raid_level": "raid1", 00:24:54.625 "superblock": false, 00:24:54.625 "num_base_bdevs": 4, 00:24:54.625 "num_base_bdevs_discovered": 1, 00:24:54.625 "num_base_bdevs_operational": 4, 00:24:54.625 "base_bdevs_list": [ 00:24:54.625 { 00:24:54.625 "name": "BaseBdev1", 00:24:54.625 "uuid": "c035c11d-210a-4b1d-b23b-f0d8f117d385", 00:24:54.625 "is_configured": true, 00:24:54.625 "data_offset": 0, 00:24:54.625 "data_size": 65536 00:24:54.625 }, 00:24:54.625 { 00:24:54.625 "name": "BaseBdev2", 00:24:54.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.625 "is_configured": false, 00:24:54.625 "data_offset": 0, 00:24:54.625 "data_size": 0 00:24:54.625 }, 00:24:54.625 { 00:24:54.625 "name": "BaseBdev3", 00:24:54.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.625 "is_configured": false, 00:24:54.625 "data_offset": 0, 00:24:54.625 "data_size": 0 00:24:54.625 }, 00:24:54.625 { 00:24:54.625 "name": "BaseBdev4", 00:24:54.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.625 "is_configured": false, 00:24:54.625 "data_offset": 0, 00:24:54.625 "data_size": 0 00:24:54.625 } 00:24:54.625 ] 00:24:54.625 }' 00:24:54.625 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:24:54.625 07:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.194 07:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:55.453 [2024-07-12 07:35:29.235042] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:55.453 BaseBdev2 00:24:55.453 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:24:55.453 07:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:24:55.453 07:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:55.453 07:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:55.453 07:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:55.453 07:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:55.453 07:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:55.712 07:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:55.981 [ 00:24:55.981 { 00:24:55.981 "name": "BaseBdev2", 00:24:55.981 "aliases": [ 00:24:55.981 "cbc57722-02c9-47b5-bee5-ee518f1d521c" 00:24:55.981 ], 00:24:55.981 "product_name": "Malloc disk", 00:24:55.981 "block_size": 512, 00:24:55.981 "num_blocks": 65536, 00:24:55.981 "uuid": "cbc57722-02c9-47b5-bee5-ee518f1d521c", 00:24:55.981 "assigned_rate_limits": { 00:24:55.981 "rw_ios_per_sec": 0, 00:24:55.981 "rw_mbytes_per_sec": 0, 00:24:55.981 "r_mbytes_per_sec": 0, 00:24:55.981 "w_mbytes_per_sec": 0 00:24:55.981 }, 00:24:55.981 "claimed": true, 00:24:55.981 "claim_type": "exclusive_write", 00:24:55.981 "zoned": false, 00:24:55.981 "supported_io_types": { 00:24:55.981 "read": true, 00:24:55.981 "write": true, 00:24:55.981 "unmap": true, 00:24:55.981 "write_zeroes": true, 00:24:55.981 "flush": true, 00:24:55.981 "reset": true, 00:24:55.981 "compare": false, 00:24:55.981 "compare_and_write": false, 00:24:55.981 "abort": true, 00:24:55.981 "nvme_admin": false, 00:24:55.981 "nvme_io": false 00:24:55.981 }, 00:24:55.981 "memory_domains": [ 00:24:55.981 { 00:24:55.981 "dma_device_id": "system", 00:24:55.981 "dma_device_type": 1 00:24:55.981 }, 00:24:55.981 { 00:24:55.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:55.981 "dma_device_type": 2 00:24:55.981 } 00:24:55.981 ], 00:24:55.981 "driver_specific": {} 00:24:55.981 } 00:24:55.981 ] 00:24:55.981 07:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:55.981 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:55.981 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:55.981 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:55.981 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:55.981 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:24:55.981 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:55.981 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:55.981 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:55.981 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:55.981 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:55.981 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:55.981 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:55.981 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:55.981 07:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.239 07:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:56.239 "name": "Existed_Raid", 00:24:56.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.239 "strip_size_kb": 0, 00:24:56.239 "state": "configuring", 00:24:56.239 "raid_level": "raid1", 00:24:56.239 "superblock": false, 00:24:56.239 "num_base_bdevs": 4, 00:24:56.239 "num_base_bdevs_discovered": 2, 00:24:56.239 "num_base_bdevs_operational": 4, 00:24:56.239 "base_bdevs_list": [ 00:24:56.239 { 00:24:56.239 "name": "BaseBdev1", 00:24:56.239 "uuid": "c035c11d-210a-4b1d-b23b-f0d8f117d385", 00:24:56.239 "is_configured": true, 00:24:56.239 "data_offset": 0, 00:24:56.239 "data_size": 65536 00:24:56.239 }, 00:24:56.239 { 00:24:56.239 "name": "BaseBdev2", 00:24:56.239 "uuid": "cbc57722-02c9-47b5-bee5-ee518f1d521c", 00:24:56.239 "is_configured": true, 00:24:56.239 "data_offset": 0, 00:24:56.239 "data_size": 65536 00:24:56.239 }, 00:24:56.239 { 00:24:56.239 "name": "BaseBdev3", 00:24:56.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.239 "is_configured": false, 00:24:56.239 "data_offset": 0, 00:24:56.239 "data_size": 0 00:24:56.239 }, 00:24:56.239 { 00:24:56.239 "name": "BaseBdev4", 00:24:56.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.239 "is_configured": false, 00:24:56.239 "data_offset": 0, 00:24:56.239 "data_size": 0 00:24:56.239 } 00:24:56.239 ] 00:24:56.239 }' 00:24:56.239 07:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:56.239 07:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.805 07:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:57.064 [2024-07-12 07:35:30.938425] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:57.064 BaseBdev3 00:24:57.322 07:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:24:57.322 07:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:24:57.322 07:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:57.322 07:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:57.322 
07:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:57.322 07:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:57.322 07:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:57.322 07:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:57.581 [ 00:24:57.581 { 00:24:57.581 "name": "BaseBdev3", 00:24:57.581 "aliases": [ 00:24:57.581 "81f33944-a73d-43e7-a3a5-ab8483e65470" 00:24:57.581 ], 00:24:57.581 "product_name": "Malloc disk", 00:24:57.581 "block_size": 512, 00:24:57.581 "num_blocks": 65536, 00:24:57.581 "uuid": "81f33944-a73d-43e7-a3a5-ab8483e65470", 00:24:57.581 "assigned_rate_limits": { 00:24:57.581 "rw_ios_per_sec": 0, 00:24:57.581 "rw_mbytes_per_sec": 0, 00:24:57.581 "r_mbytes_per_sec": 0, 00:24:57.581 "w_mbytes_per_sec": 0 00:24:57.581 }, 00:24:57.581 "claimed": true, 00:24:57.581 "claim_type": "exclusive_write", 00:24:57.581 "zoned": false, 00:24:57.581 "supported_io_types": { 00:24:57.581 "read": true, 00:24:57.581 "write": true, 00:24:57.581 "unmap": true, 00:24:57.581 "write_zeroes": true, 00:24:57.581 "flush": true, 00:24:57.581 "reset": true, 00:24:57.581 "compare": false, 00:24:57.581 "compare_and_write": false, 00:24:57.581 "abort": true, 00:24:57.581 "nvme_admin": false, 00:24:57.581 "nvme_io": false 00:24:57.581 }, 00:24:57.581 "memory_domains": [ 00:24:57.581 { 00:24:57.581 "dma_device_id": "system", 00:24:57.581 "dma_device_type": 1 00:24:57.581 }, 00:24:57.581 { 00:24:57.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.581 "dma_device_type": 2 00:24:57.581 } 00:24:57.581 ], 00:24:57.581 "driver_specific": {} 00:24:57.581 } 00:24:57.581 ] 00:24:57.581 07:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:57.581 07:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:57.581 07:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:57.581 07:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:57.581 07:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:57.581 07:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:57.581 07:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:57.581 07:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:57.581 07:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:57.581 07:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:57.581 07:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:57.581 07:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:57.581 07:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:57.581 07:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.581 07:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:57.840 07:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:57.840 "name": "Existed_Raid", 00:24:57.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.840 "strip_size_kb": 0, 00:24:57.840 "state": "configuring", 00:24:57.840 "raid_level": "raid1", 00:24:57.840 "superblock": false, 00:24:57.840 "num_base_bdevs": 4, 00:24:57.840 "num_base_bdevs_discovered": 3, 00:24:57.840 "num_base_bdevs_operational": 4, 00:24:57.840 "base_bdevs_list": [ 00:24:57.840 { 00:24:57.840 "name": "BaseBdev1", 00:24:57.840 "uuid": "c035c11d-210a-4b1d-b23b-f0d8f117d385", 00:24:57.840 "is_configured": true, 00:24:57.840 "data_offset": 0, 00:24:57.840 "data_size": 65536 00:24:57.840 }, 00:24:57.840 { 00:24:57.840 "name": "BaseBdev2", 00:24:57.840 "uuid": "cbc57722-02c9-47b5-bee5-ee518f1d521c", 00:24:57.840 "is_configured": true, 00:24:57.840 "data_offset": 0, 00:24:57.840 "data_size": 65536 00:24:57.840 }, 00:24:57.840 { 00:24:57.840 "name": "BaseBdev3", 00:24:57.840 "uuid": "81f33944-a73d-43e7-a3a5-ab8483e65470", 00:24:57.840 "is_configured": true, 00:24:57.840 "data_offset": 0, 00:24:57.840 "data_size": 65536 00:24:57.840 }, 00:24:57.840 { 00:24:57.840 "name": "BaseBdev4", 00:24:57.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.840 "is_configured": false, 00:24:57.840 "data_offset": 0, 00:24:57.840 "data_size": 0 00:24:57.840 } 00:24:57.840 ] 00:24:57.840 }' 00:24:57.840 07:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:57.840 07:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.409 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:58.668 [2024-07-12 07:35:32.409859] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:58.668 [2024-07-12 07:35:32.410163] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:24:58.668 [2024-07-12 07:35:32.410206] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:58.668 [2024-07-12 07:35:32.410447] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:24:58.668 [2024-07-12 07:35:32.410982] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:24:58.668 [2024-07-12 07:35:32.411097] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:24:58.668 [2024-07-12 07:35:32.411440] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:58.668 BaseBdev4 00:24:58.668 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:24:58.668 07:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:24:58.668 07:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:58.668 07:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:58.668 07:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:58.668 
07:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:58.668 07:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:58.927 07:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:59.187 [ 00:24:59.187 { 00:24:59.187 "name": "BaseBdev4", 00:24:59.187 "aliases": [ 00:24:59.187 "fc6633b0-e6c1-4365-9d79-bf860a2e4453" 00:24:59.187 ], 00:24:59.187 "product_name": "Malloc disk", 00:24:59.187 "block_size": 512, 00:24:59.187 "num_blocks": 65536, 00:24:59.187 "uuid": "fc6633b0-e6c1-4365-9d79-bf860a2e4453", 00:24:59.187 "assigned_rate_limits": { 00:24:59.187 "rw_ios_per_sec": 0, 00:24:59.187 "rw_mbytes_per_sec": 0, 00:24:59.187 "r_mbytes_per_sec": 0, 00:24:59.187 "w_mbytes_per_sec": 0 00:24:59.187 }, 00:24:59.187 "claimed": true, 00:24:59.187 "claim_type": "exclusive_write", 00:24:59.187 "zoned": false, 00:24:59.187 "supported_io_types": { 00:24:59.187 "read": true, 00:24:59.187 "write": true, 00:24:59.187 "unmap": true, 00:24:59.187 "write_zeroes": true, 00:24:59.187 "flush": true, 00:24:59.187 "reset": true, 00:24:59.187 "compare": false, 00:24:59.187 "compare_and_write": false, 00:24:59.187 "abort": true, 00:24:59.187 "nvme_admin": false, 00:24:59.187 "nvme_io": false 00:24:59.187 }, 00:24:59.187 "memory_domains": [ 00:24:59.187 { 00:24:59.187 "dma_device_id": "system", 00:24:59.187 "dma_device_type": 1 00:24:59.187 }, 00:24:59.187 { 00:24:59.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:59.187 "dma_device_type": 2 00:24:59.187 } 00:24:59.187 ], 00:24:59.187 "driver_specific": {} 00:24:59.187 } 00:24:59.187 ] 00:24:59.187 07:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:59.187 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:59.187 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:59.187 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:24:59.187 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:59.187 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:59.187 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:59.187 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:59.187 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:59.187 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:59.187 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:59.187 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:59.187 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:59.187 07:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.187 07:35:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:59.446 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:59.446 "name": "Existed_Raid", 00:24:59.446 "uuid": "bfd79a34-bf62-4745-a736-664a77b2b1e7", 00:24:59.446 "strip_size_kb": 0, 00:24:59.446 "state": "online", 00:24:59.446 "raid_level": "raid1", 00:24:59.446 "superblock": false, 00:24:59.446 "num_base_bdevs": 4, 00:24:59.446 "num_base_bdevs_discovered": 4, 00:24:59.446 "num_base_bdevs_operational": 4, 00:24:59.446 "base_bdevs_list": [ 00:24:59.446 { 00:24:59.446 "name": "BaseBdev1", 00:24:59.446 "uuid": "c035c11d-210a-4b1d-b23b-f0d8f117d385", 00:24:59.446 "is_configured": true, 00:24:59.446 "data_offset": 0, 00:24:59.446 "data_size": 65536 00:24:59.446 }, 00:24:59.446 { 00:24:59.446 "name": "BaseBdev2", 00:24:59.446 "uuid": "cbc57722-02c9-47b5-bee5-ee518f1d521c", 00:24:59.446 "is_configured": true, 00:24:59.446 "data_offset": 0, 00:24:59.446 "data_size": 65536 00:24:59.446 }, 00:24:59.446 { 00:24:59.446 "name": "BaseBdev3", 00:24:59.446 "uuid": "81f33944-a73d-43e7-a3a5-ab8483e65470", 00:24:59.446 "is_configured": true, 00:24:59.446 "data_offset": 0, 00:24:59.446 "data_size": 65536 00:24:59.446 }, 00:24:59.446 { 00:24:59.446 "name": "BaseBdev4", 00:24:59.446 "uuid": "fc6633b0-e6c1-4365-9d79-bf860a2e4453", 00:24:59.446 "is_configured": true, 00:24:59.446 "data_offset": 0, 00:24:59.446 "data_size": 65536 00:24:59.446 } 00:24:59.446 ] 00:24:59.446 }' 00:24:59.446 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:59.446 07:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.014 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:00.014 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:00.014 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:00.014 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:00.014 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:00.014 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:00.014 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:00.014 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:00.014 [2024-07-12 07:35:33.866602] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:00.014 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:00.014 "name": "Existed_Raid", 00:25:00.014 "aliases": [ 00:25:00.014 "bfd79a34-bf62-4745-a736-664a77b2b1e7" 00:25:00.014 ], 00:25:00.014 "product_name": "Raid Volume", 00:25:00.014 "block_size": 512, 00:25:00.014 "num_blocks": 65536, 00:25:00.014 "uuid": "bfd79a34-bf62-4745-a736-664a77b2b1e7", 00:25:00.014 "assigned_rate_limits": { 00:25:00.014 "rw_ios_per_sec": 0, 00:25:00.014 "rw_mbytes_per_sec": 0, 00:25:00.014 "r_mbytes_per_sec": 0, 00:25:00.014 "w_mbytes_per_sec": 0 00:25:00.014 }, 00:25:00.014 "claimed": false, 00:25:00.014 "zoned": false, 00:25:00.014 "supported_io_types": { 00:25:00.014 "read": true, 00:25:00.014 "write": true, 00:25:00.014 
"unmap": false, 00:25:00.014 "write_zeroes": true, 00:25:00.014 "flush": false, 00:25:00.014 "reset": true, 00:25:00.014 "compare": false, 00:25:00.014 "compare_and_write": false, 00:25:00.014 "abort": false, 00:25:00.014 "nvme_admin": false, 00:25:00.014 "nvme_io": false 00:25:00.014 }, 00:25:00.014 "memory_domains": [ 00:25:00.014 { 00:25:00.014 "dma_device_id": "system", 00:25:00.014 "dma_device_type": 1 00:25:00.014 }, 00:25:00.014 { 00:25:00.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:00.014 "dma_device_type": 2 00:25:00.014 }, 00:25:00.014 { 00:25:00.014 "dma_device_id": "system", 00:25:00.014 "dma_device_type": 1 00:25:00.014 }, 00:25:00.014 { 00:25:00.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:00.014 "dma_device_type": 2 00:25:00.014 }, 00:25:00.014 { 00:25:00.014 "dma_device_id": "system", 00:25:00.014 "dma_device_type": 1 00:25:00.014 }, 00:25:00.014 { 00:25:00.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:00.014 "dma_device_type": 2 00:25:00.014 }, 00:25:00.014 { 00:25:00.014 "dma_device_id": "system", 00:25:00.014 "dma_device_type": 1 00:25:00.014 }, 00:25:00.014 { 00:25:00.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:00.014 "dma_device_type": 2 00:25:00.014 } 00:25:00.014 ], 00:25:00.014 "driver_specific": { 00:25:00.014 "raid": { 00:25:00.014 "uuid": "bfd79a34-bf62-4745-a736-664a77b2b1e7", 00:25:00.014 "strip_size_kb": 0, 00:25:00.014 "state": "online", 00:25:00.014 "raid_level": "raid1", 00:25:00.014 "superblock": false, 00:25:00.014 "num_base_bdevs": 4, 00:25:00.014 "num_base_bdevs_discovered": 4, 00:25:00.014 "num_base_bdevs_operational": 4, 00:25:00.014 "base_bdevs_list": [ 00:25:00.014 { 00:25:00.014 "name": "BaseBdev1", 00:25:00.014 "uuid": "c035c11d-210a-4b1d-b23b-f0d8f117d385", 00:25:00.014 "is_configured": true, 00:25:00.014 "data_offset": 0, 00:25:00.014 "data_size": 65536 00:25:00.014 }, 00:25:00.014 { 00:25:00.014 "name": "BaseBdev2", 00:25:00.014 "uuid": "cbc57722-02c9-47b5-bee5-ee518f1d521c", 00:25:00.014 "is_configured": true, 00:25:00.014 "data_offset": 0, 00:25:00.014 "data_size": 65536 00:25:00.014 }, 00:25:00.014 { 00:25:00.014 "name": "BaseBdev3", 00:25:00.014 "uuid": "81f33944-a73d-43e7-a3a5-ab8483e65470", 00:25:00.014 "is_configured": true, 00:25:00.014 "data_offset": 0, 00:25:00.014 "data_size": 65536 00:25:00.014 }, 00:25:00.014 { 00:25:00.014 "name": "BaseBdev4", 00:25:00.014 "uuid": "fc6633b0-e6c1-4365-9d79-bf860a2e4453", 00:25:00.014 "is_configured": true, 00:25:00.014 "data_offset": 0, 00:25:00.014 "data_size": 65536 00:25:00.014 } 00:25:00.014 ] 00:25:00.014 } 00:25:00.014 } 00:25:00.014 }' 00:25:00.274 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:00.274 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:00.274 BaseBdev2 00:25:00.274 BaseBdev3 00:25:00.274 BaseBdev4' 00:25:00.274 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:00.274 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:00.274 07:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:00.535 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:00.535 "name": "BaseBdev1", 00:25:00.535 "aliases": [ 00:25:00.535 
"c035c11d-210a-4b1d-b23b-f0d8f117d385" 00:25:00.535 ], 00:25:00.535 "product_name": "Malloc disk", 00:25:00.535 "block_size": 512, 00:25:00.535 "num_blocks": 65536, 00:25:00.535 "uuid": "c035c11d-210a-4b1d-b23b-f0d8f117d385", 00:25:00.535 "assigned_rate_limits": { 00:25:00.535 "rw_ios_per_sec": 0, 00:25:00.535 "rw_mbytes_per_sec": 0, 00:25:00.535 "r_mbytes_per_sec": 0, 00:25:00.535 "w_mbytes_per_sec": 0 00:25:00.535 }, 00:25:00.535 "claimed": true, 00:25:00.535 "claim_type": "exclusive_write", 00:25:00.535 "zoned": false, 00:25:00.535 "supported_io_types": { 00:25:00.535 "read": true, 00:25:00.535 "write": true, 00:25:00.535 "unmap": true, 00:25:00.535 "write_zeroes": true, 00:25:00.535 "flush": true, 00:25:00.535 "reset": true, 00:25:00.535 "compare": false, 00:25:00.535 "compare_and_write": false, 00:25:00.535 "abort": true, 00:25:00.535 "nvme_admin": false, 00:25:00.535 "nvme_io": false 00:25:00.535 }, 00:25:00.535 "memory_domains": [ 00:25:00.535 { 00:25:00.535 "dma_device_id": "system", 00:25:00.535 "dma_device_type": 1 00:25:00.535 }, 00:25:00.535 { 00:25:00.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:00.535 "dma_device_type": 2 00:25:00.535 } 00:25:00.535 ], 00:25:00.535 "driver_specific": {} 00:25:00.535 }' 00:25:00.535 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:00.535 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:00.535 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:00.535 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:00.535 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:00.794 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:00.794 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:00.794 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:00.794 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:00.794 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:00.794 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:00.794 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:00.794 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:00.794 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:00.794 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:01.362 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:01.362 "name": "BaseBdev2", 00:25:01.362 "aliases": [ 00:25:01.362 "cbc57722-02c9-47b5-bee5-ee518f1d521c" 00:25:01.362 ], 00:25:01.362 "product_name": "Malloc disk", 00:25:01.362 "block_size": 512, 00:25:01.362 "num_blocks": 65536, 00:25:01.362 "uuid": "cbc57722-02c9-47b5-bee5-ee518f1d521c", 00:25:01.362 "assigned_rate_limits": { 00:25:01.362 "rw_ios_per_sec": 0, 00:25:01.362 "rw_mbytes_per_sec": 0, 00:25:01.362 "r_mbytes_per_sec": 0, 00:25:01.362 "w_mbytes_per_sec": 0 00:25:01.362 }, 00:25:01.362 "claimed": true, 00:25:01.362 "claim_type": "exclusive_write", 
00:25:01.362 "zoned": false, 00:25:01.362 "supported_io_types": { 00:25:01.362 "read": true, 00:25:01.362 "write": true, 00:25:01.362 "unmap": true, 00:25:01.362 "write_zeroes": true, 00:25:01.362 "flush": true, 00:25:01.362 "reset": true, 00:25:01.362 "compare": false, 00:25:01.362 "compare_and_write": false, 00:25:01.362 "abort": true, 00:25:01.362 "nvme_admin": false, 00:25:01.362 "nvme_io": false 00:25:01.362 }, 00:25:01.362 "memory_domains": [ 00:25:01.362 { 00:25:01.362 "dma_device_id": "system", 00:25:01.362 "dma_device_type": 1 00:25:01.362 }, 00:25:01.362 { 00:25:01.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:01.362 "dma_device_type": 2 00:25:01.362 } 00:25:01.362 ], 00:25:01.362 "driver_specific": {} 00:25:01.362 }' 00:25:01.362 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:01.362 07:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:01.362 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:01.362 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:01.362 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:01.362 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:01.362 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:01.362 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:01.362 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:01.362 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:01.621 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:01.621 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:01.621 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:01.621 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:01.621 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:01.880 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:01.880 "name": "BaseBdev3", 00:25:01.880 "aliases": [ 00:25:01.880 "81f33944-a73d-43e7-a3a5-ab8483e65470" 00:25:01.880 ], 00:25:01.880 "product_name": "Malloc disk", 00:25:01.880 "block_size": 512, 00:25:01.880 "num_blocks": 65536, 00:25:01.880 "uuid": "81f33944-a73d-43e7-a3a5-ab8483e65470", 00:25:01.880 "assigned_rate_limits": { 00:25:01.880 "rw_ios_per_sec": 0, 00:25:01.880 "rw_mbytes_per_sec": 0, 00:25:01.880 "r_mbytes_per_sec": 0, 00:25:01.880 "w_mbytes_per_sec": 0 00:25:01.880 }, 00:25:01.880 "claimed": true, 00:25:01.880 "claim_type": "exclusive_write", 00:25:01.880 "zoned": false, 00:25:01.880 "supported_io_types": { 00:25:01.880 "read": true, 00:25:01.880 "write": true, 00:25:01.880 "unmap": true, 00:25:01.880 "write_zeroes": true, 00:25:01.880 "flush": true, 00:25:01.880 "reset": true, 00:25:01.880 "compare": false, 00:25:01.880 "compare_and_write": false, 00:25:01.880 "abort": true, 00:25:01.880 "nvme_admin": false, 00:25:01.880 "nvme_io": false 00:25:01.880 }, 00:25:01.880 "memory_domains": [ 00:25:01.880 { 00:25:01.880 "dma_device_id": 
"system", 00:25:01.880 "dma_device_type": 1 00:25:01.880 }, 00:25:01.880 { 00:25:01.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:01.880 "dma_device_type": 2 00:25:01.880 } 00:25:01.880 ], 00:25:01.880 "driver_specific": {} 00:25:01.880 }' 00:25:01.880 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:01.880 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:01.880 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:01.880 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:01.880 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:01.880 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:01.880 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:02.152 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:02.152 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:02.152 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:02.152 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:02.152 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:02.152 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:02.152 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:02.152 07:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:02.410 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:02.410 "name": "BaseBdev4", 00:25:02.410 "aliases": [ 00:25:02.410 "fc6633b0-e6c1-4365-9d79-bf860a2e4453" 00:25:02.410 ], 00:25:02.410 "product_name": "Malloc disk", 00:25:02.410 "block_size": 512, 00:25:02.410 "num_blocks": 65536, 00:25:02.410 "uuid": "fc6633b0-e6c1-4365-9d79-bf860a2e4453", 00:25:02.410 "assigned_rate_limits": { 00:25:02.410 "rw_ios_per_sec": 0, 00:25:02.410 "rw_mbytes_per_sec": 0, 00:25:02.410 "r_mbytes_per_sec": 0, 00:25:02.410 "w_mbytes_per_sec": 0 00:25:02.410 }, 00:25:02.410 "claimed": true, 00:25:02.410 "claim_type": "exclusive_write", 00:25:02.410 "zoned": false, 00:25:02.410 "supported_io_types": { 00:25:02.410 "read": true, 00:25:02.410 "write": true, 00:25:02.410 "unmap": true, 00:25:02.410 "write_zeroes": true, 00:25:02.410 "flush": true, 00:25:02.410 "reset": true, 00:25:02.410 "compare": false, 00:25:02.410 "compare_and_write": false, 00:25:02.410 "abort": true, 00:25:02.410 "nvme_admin": false, 00:25:02.410 "nvme_io": false 00:25:02.410 }, 00:25:02.410 "memory_domains": [ 00:25:02.410 { 00:25:02.410 "dma_device_id": "system", 00:25:02.410 "dma_device_type": 1 00:25:02.410 }, 00:25:02.410 { 00:25:02.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:02.410 "dma_device_type": 2 00:25:02.410 } 00:25:02.410 ], 00:25:02.410 "driver_specific": {} 00:25:02.410 }' 00:25:02.410 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:02.410 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:02.410 07:35:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:02.410 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:02.669 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:02.669 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:02.669 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:02.669 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:02.669 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:02.669 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:02.669 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:02.669 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:02.669 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:02.929 [2024-07-12 07:35:36.767175] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:02.929 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:02.929 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:25:02.929 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:02.929 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:25:02.929 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:25:02.929 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:25:02.929 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:02.929 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:02.929 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:02.929 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:02.929 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:02.929 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:02.929 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:02.929 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:02.929 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:02.929 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.929 07:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:03.188 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:03.188 "name": "Existed_Raid", 00:25:03.188 "uuid": "bfd79a34-bf62-4745-a736-664a77b2b1e7", 00:25:03.188 "strip_size_kb": 0, 00:25:03.188 "state": "online", 
00:25:03.188 "raid_level": "raid1", 00:25:03.188 "superblock": false, 00:25:03.188 "num_base_bdevs": 4, 00:25:03.188 "num_base_bdevs_discovered": 3, 00:25:03.188 "num_base_bdevs_operational": 3, 00:25:03.188 "base_bdevs_list": [ 00:25:03.188 { 00:25:03.188 "name": null, 00:25:03.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.188 "is_configured": false, 00:25:03.188 "data_offset": 0, 00:25:03.188 "data_size": 65536 00:25:03.188 }, 00:25:03.188 { 00:25:03.188 "name": "BaseBdev2", 00:25:03.188 "uuid": "cbc57722-02c9-47b5-bee5-ee518f1d521c", 00:25:03.188 "is_configured": true, 00:25:03.188 "data_offset": 0, 00:25:03.188 "data_size": 65536 00:25:03.188 }, 00:25:03.188 { 00:25:03.188 "name": "BaseBdev3", 00:25:03.188 "uuid": "81f33944-a73d-43e7-a3a5-ab8483e65470", 00:25:03.188 "is_configured": true, 00:25:03.188 "data_offset": 0, 00:25:03.188 "data_size": 65536 00:25:03.188 }, 00:25:03.188 { 00:25:03.188 "name": "BaseBdev4", 00:25:03.188 "uuid": "fc6633b0-e6c1-4365-9d79-bf860a2e4453", 00:25:03.188 "is_configured": true, 00:25:03.188 "data_offset": 0, 00:25:03.188 "data_size": 65536 00:25:03.188 } 00:25:03.188 ] 00:25:03.188 }' 00:25:03.188 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:03.188 07:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.755 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:03.755 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:03.755 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:03.755 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.322 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:04.322 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:04.322 07:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:04.322 [2024-07-12 07:35:38.159830] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:04.323 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:04.323 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:04.323 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.323 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:04.581 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:04.581 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:04.581 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:04.840 [2024-07-12 07:35:38.624153] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:04.840 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:04.840 07:35:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:04.840 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.840 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:05.099 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:05.099 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:05.099 07:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:05.358 [2024-07-12 07:35:39.004466] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:05.358 [2024-07-12 07:35:39.004779] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:05.358 [2024-07-12 07:35:39.017153] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:05.358 [2024-07-12 07:35:39.017447] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:05.358 [2024-07-12 07:35:39.017541] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:25:05.358 07:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:05.358 07:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:05.358 07:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.358 07:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:05.617 07:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:05.617 07:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:05.617 07:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:25:05.617 07:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:25:05.617 07:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:05.617 07:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:05.875 BaseBdev2 00:25:05.875 07:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:25:05.875 07:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:25:05.876 07:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:05.876 07:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:25:05.876 07:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:05.876 07:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:05.876 07:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
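The rebuild phase traced around this point recreates the base bdevs and then declares the array over all four names; per the NOTICE further below, BaseBdev1 is deliberately still absent at create time, which leaves the raid in the configuring state. A hedged sketch of that sequence, reusing only commands that appear verbatim in this log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
for name in BaseBdev2 BaseBdev3 BaseBdev4; do
    # 32 MiB malloc disk with 512-byte blocks, matching the traced sizes.
    $rpc -s $sock bdev_malloc_create 32 512 -b "$name"
    $rpc -s $sock bdev_wait_for_examine
done
# BaseBdev1 does not exist yet, so the raid comes up as "configuring".
$rpc -s $sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid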
bdev_wait_for_examine 00:25:06.134 07:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:06.391 [ 00:25:06.391 { 00:25:06.391 "name": "BaseBdev2", 00:25:06.391 "aliases": [ 00:25:06.391 "576abac9-41fe-4f77-9fbc-72fe04de23b0" 00:25:06.391 ], 00:25:06.391 "product_name": "Malloc disk", 00:25:06.391 "block_size": 512, 00:25:06.391 "num_blocks": 65536, 00:25:06.391 "uuid": "576abac9-41fe-4f77-9fbc-72fe04de23b0", 00:25:06.391 "assigned_rate_limits": { 00:25:06.391 "rw_ios_per_sec": 0, 00:25:06.391 "rw_mbytes_per_sec": 0, 00:25:06.391 "r_mbytes_per_sec": 0, 00:25:06.391 "w_mbytes_per_sec": 0 00:25:06.391 }, 00:25:06.391 "claimed": false, 00:25:06.391 "zoned": false, 00:25:06.391 "supported_io_types": { 00:25:06.391 "read": true, 00:25:06.391 "write": true, 00:25:06.391 "unmap": true, 00:25:06.391 "write_zeroes": true, 00:25:06.391 "flush": true, 00:25:06.391 "reset": true, 00:25:06.391 "compare": false, 00:25:06.391 "compare_and_write": false, 00:25:06.391 "abort": true, 00:25:06.391 "nvme_admin": false, 00:25:06.391 "nvme_io": false 00:25:06.391 }, 00:25:06.391 "memory_domains": [ 00:25:06.391 { 00:25:06.391 "dma_device_id": "system", 00:25:06.391 "dma_device_type": 1 00:25:06.391 }, 00:25:06.391 { 00:25:06.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:06.391 "dma_device_type": 2 00:25:06.391 } 00:25:06.391 ], 00:25:06.391 "driver_specific": {} 00:25:06.391 } 00:25:06.391 ] 00:25:06.391 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:25:06.392 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:06.392 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:06.392 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:06.392 BaseBdev3 00:25:06.392 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:25:06.392 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:25:06.392 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:06.392 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:25:06.392 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:06.392 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:06.392 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:06.650 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:06.908 [ 00:25:06.908 { 00:25:06.908 "name": "BaseBdev3", 00:25:06.908 "aliases": [ 00:25:06.908 "f0243b33-3c47-4d12-8ba6-cfed9925a014" 00:25:06.908 ], 00:25:06.908 "product_name": "Malloc disk", 00:25:06.908 "block_size": 512, 00:25:06.908 "num_blocks": 65536, 00:25:06.908 "uuid": "f0243b33-3c47-4d12-8ba6-cfed9925a014", 00:25:06.908 "assigned_rate_limits": { 00:25:06.908 "rw_ios_per_sec": 0, 00:25:06.908 
"rw_mbytes_per_sec": 0, 00:25:06.908 "r_mbytes_per_sec": 0, 00:25:06.908 "w_mbytes_per_sec": 0 00:25:06.908 }, 00:25:06.908 "claimed": false, 00:25:06.908 "zoned": false, 00:25:06.908 "supported_io_types": { 00:25:06.908 "read": true, 00:25:06.908 "write": true, 00:25:06.908 "unmap": true, 00:25:06.908 "write_zeroes": true, 00:25:06.908 "flush": true, 00:25:06.908 "reset": true, 00:25:06.908 "compare": false, 00:25:06.908 "compare_and_write": false, 00:25:06.908 "abort": true, 00:25:06.908 "nvme_admin": false, 00:25:06.908 "nvme_io": false 00:25:06.908 }, 00:25:06.908 "memory_domains": [ 00:25:06.908 { 00:25:06.908 "dma_device_id": "system", 00:25:06.908 "dma_device_type": 1 00:25:06.908 }, 00:25:06.908 { 00:25:06.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:06.908 "dma_device_type": 2 00:25:06.908 } 00:25:06.908 ], 00:25:06.908 "driver_specific": {} 00:25:06.908 } 00:25:06.908 ] 00:25:06.908 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:25:06.908 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:06.908 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:06.908 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:07.165 BaseBdev4 00:25:07.165 07:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:25:07.165 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:25:07.165 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:07.165 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:25:07.165 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:07.165 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:07.165 07:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:07.424 07:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:07.424 [ 00:25:07.424 { 00:25:07.424 "name": "BaseBdev4", 00:25:07.424 "aliases": [ 00:25:07.424 "b77ee178-8522-448f-868f-14450eb9062b" 00:25:07.424 ], 00:25:07.424 "product_name": "Malloc disk", 00:25:07.424 "block_size": 512, 00:25:07.424 "num_blocks": 65536, 00:25:07.424 "uuid": "b77ee178-8522-448f-868f-14450eb9062b", 00:25:07.424 "assigned_rate_limits": { 00:25:07.424 "rw_ios_per_sec": 0, 00:25:07.424 "rw_mbytes_per_sec": 0, 00:25:07.424 "r_mbytes_per_sec": 0, 00:25:07.424 "w_mbytes_per_sec": 0 00:25:07.424 }, 00:25:07.424 "claimed": false, 00:25:07.424 "zoned": false, 00:25:07.424 "supported_io_types": { 00:25:07.424 "read": true, 00:25:07.424 "write": true, 00:25:07.424 "unmap": true, 00:25:07.424 "write_zeroes": true, 00:25:07.424 "flush": true, 00:25:07.424 "reset": true, 00:25:07.424 "compare": false, 00:25:07.424 "compare_and_write": false, 00:25:07.424 "abort": true, 00:25:07.424 "nvme_admin": false, 00:25:07.424 "nvme_io": false 00:25:07.424 }, 00:25:07.424 "memory_domains": [ 00:25:07.424 { 00:25:07.424 "dma_device_id": 
"system", 00:25:07.424 "dma_device_type": 1 00:25:07.424 }, 00:25:07.424 { 00:25:07.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.424 "dma_device_type": 2 00:25:07.424 } 00:25:07.424 ], 00:25:07.424 "driver_specific": {} 00:25:07.424 } 00:25:07.424 ] 00:25:07.424 07:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:25:07.424 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:07.424 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:07.424 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:07.684 [2024-07-12 07:35:41.434341] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:07.684 [2024-07-12 07:35:41.434679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:07.684 [2024-07-12 07:35:41.434862] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:07.684 [2024-07-12 07:35:41.437568] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:07.684 [2024-07-12 07:35:41.437762] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:07.684 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:07.684 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:07.684 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:07.684 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:07.684 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:07.684 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:07.684 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:07.684 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:07.684 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:07.684 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:07.684 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.684 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:07.944 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:07.944 "name": "Existed_Raid", 00:25:07.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.944 "strip_size_kb": 0, 00:25:07.944 "state": "configuring", 00:25:07.944 "raid_level": "raid1", 00:25:07.944 "superblock": false, 00:25:07.944 "num_base_bdevs": 4, 00:25:07.944 "num_base_bdevs_discovered": 3, 00:25:07.944 "num_base_bdevs_operational": 4, 00:25:07.944 "base_bdevs_list": [ 00:25:07.944 { 00:25:07.944 "name": "BaseBdev1", 00:25:07.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.944 
"is_configured": false, 00:25:07.944 "data_offset": 0, 00:25:07.944 "data_size": 0 00:25:07.944 }, 00:25:07.944 { 00:25:07.944 "name": "BaseBdev2", 00:25:07.944 "uuid": "576abac9-41fe-4f77-9fbc-72fe04de23b0", 00:25:07.944 "is_configured": true, 00:25:07.944 "data_offset": 0, 00:25:07.944 "data_size": 65536 00:25:07.944 }, 00:25:07.944 { 00:25:07.944 "name": "BaseBdev3", 00:25:07.944 "uuid": "f0243b33-3c47-4d12-8ba6-cfed9925a014", 00:25:07.944 "is_configured": true, 00:25:07.944 "data_offset": 0, 00:25:07.944 "data_size": 65536 00:25:07.944 }, 00:25:07.944 { 00:25:07.944 "name": "BaseBdev4", 00:25:07.944 "uuid": "b77ee178-8522-448f-868f-14450eb9062b", 00:25:07.944 "is_configured": true, 00:25:07.944 "data_offset": 0, 00:25:07.944 "data_size": 65536 00:25:07.944 } 00:25:07.944 ] 00:25:07.944 }' 00:25:07.944 07:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:07.944 07:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.511 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:08.770 [2024-07-12 07:35:42.529808] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:08.770 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:08.770 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:08.770 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:08.770 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:08.770 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:08.770 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:08.770 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:08.770 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:08.770 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:08.770 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:08.770 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:08.770 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.027 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:09.027 "name": "Existed_Raid", 00:25:09.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.027 "strip_size_kb": 0, 00:25:09.027 "state": "configuring", 00:25:09.027 "raid_level": "raid1", 00:25:09.027 "superblock": false, 00:25:09.027 "num_base_bdevs": 4, 00:25:09.027 "num_base_bdevs_discovered": 2, 00:25:09.027 "num_base_bdevs_operational": 4, 00:25:09.027 "base_bdevs_list": [ 00:25:09.027 { 00:25:09.028 "name": "BaseBdev1", 00:25:09.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.028 "is_configured": false, 00:25:09.028 "data_offset": 0, 00:25:09.028 "data_size": 0 00:25:09.028 }, 00:25:09.028 { 00:25:09.028 "name": null, 
00:25:09.028 "uuid": "576abac9-41fe-4f77-9fbc-72fe04de23b0", 00:25:09.028 "is_configured": false, 00:25:09.028 "data_offset": 0, 00:25:09.028 "data_size": 65536 00:25:09.028 }, 00:25:09.028 { 00:25:09.028 "name": "BaseBdev3", 00:25:09.028 "uuid": "f0243b33-3c47-4d12-8ba6-cfed9925a014", 00:25:09.028 "is_configured": true, 00:25:09.028 "data_offset": 0, 00:25:09.028 "data_size": 65536 00:25:09.028 }, 00:25:09.028 { 00:25:09.028 "name": "BaseBdev4", 00:25:09.028 "uuid": "b77ee178-8522-448f-868f-14450eb9062b", 00:25:09.028 "is_configured": true, 00:25:09.028 "data_offset": 0, 00:25:09.028 "data_size": 65536 00:25:09.028 } 00:25:09.028 ] 00:25:09.028 }' 00:25:09.028 07:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:09.028 07:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.605 07:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:09.605 07:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.864 07:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:25:09.864 07:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:10.122 [2024-07-12 07:35:43.799282] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:10.122 BaseBdev1 00:25:10.122 07:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:25:10.122 07:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:25:10.122 07:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:10.122 07:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:25:10.122 07:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:10.122 07:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:10.122 07:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:10.381 07:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:10.381 [ 00:25:10.381 { 00:25:10.381 "name": "BaseBdev1", 00:25:10.381 "aliases": [ 00:25:10.381 "a138556d-a64c-461a-8ae9-5c2f77330ed9" 00:25:10.381 ], 00:25:10.381 "product_name": "Malloc disk", 00:25:10.381 "block_size": 512, 00:25:10.381 "num_blocks": 65536, 00:25:10.381 "uuid": "a138556d-a64c-461a-8ae9-5c2f77330ed9", 00:25:10.381 "assigned_rate_limits": { 00:25:10.381 "rw_ios_per_sec": 0, 00:25:10.381 "rw_mbytes_per_sec": 0, 00:25:10.381 "r_mbytes_per_sec": 0, 00:25:10.381 "w_mbytes_per_sec": 0 00:25:10.381 }, 00:25:10.381 "claimed": true, 00:25:10.381 "claim_type": "exclusive_write", 00:25:10.381 "zoned": false, 00:25:10.381 "supported_io_types": { 00:25:10.381 "read": true, 00:25:10.381 "write": true, 00:25:10.381 "unmap": true, 00:25:10.381 "write_zeroes": true, 00:25:10.381 "flush": true, 00:25:10.381 "reset": true, 00:25:10.381 "compare": 
false, 00:25:10.381 "compare_and_write": false, 00:25:10.381 "abort": true, 00:25:10.381 "nvme_admin": false, 00:25:10.381 "nvme_io": false 00:25:10.381 }, 00:25:10.381 "memory_domains": [ 00:25:10.381 { 00:25:10.381 "dma_device_id": "system", 00:25:10.381 "dma_device_type": 1 00:25:10.381 }, 00:25:10.381 { 00:25:10.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.381 "dma_device_type": 2 00:25:10.381 } 00:25:10.381 ], 00:25:10.381 "driver_specific": {} 00:25:10.381 } 00:25:10.381 ] 00:25:10.381 07:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:25:10.381 07:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:10.381 07:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:10.381 07:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:10.381 07:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:10.381 07:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:10.381 07:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:10.381 07:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:10.381 07:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:10.381 07:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:10.381 07:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:10.381 07:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.382 07:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:10.664 07:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:10.664 "name": "Existed_Raid", 00:25:10.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.664 "strip_size_kb": 0, 00:25:10.664 "state": "configuring", 00:25:10.664 "raid_level": "raid1", 00:25:10.664 "superblock": false, 00:25:10.664 "num_base_bdevs": 4, 00:25:10.664 "num_base_bdevs_discovered": 3, 00:25:10.664 "num_base_bdevs_operational": 4, 00:25:10.664 "base_bdevs_list": [ 00:25:10.664 { 00:25:10.664 "name": "BaseBdev1", 00:25:10.664 "uuid": "a138556d-a64c-461a-8ae9-5c2f77330ed9", 00:25:10.664 "is_configured": true, 00:25:10.664 "data_offset": 0, 00:25:10.664 "data_size": 65536 00:25:10.664 }, 00:25:10.664 { 00:25:10.664 "name": null, 00:25:10.664 "uuid": "576abac9-41fe-4f77-9fbc-72fe04de23b0", 00:25:10.664 "is_configured": false, 00:25:10.664 "data_offset": 0, 00:25:10.664 "data_size": 65536 00:25:10.664 }, 00:25:10.664 { 00:25:10.664 "name": "BaseBdev3", 00:25:10.664 "uuid": "f0243b33-3c47-4d12-8ba6-cfed9925a014", 00:25:10.664 "is_configured": true, 00:25:10.664 "data_offset": 0, 00:25:10.664 "data_size": 65536 00:25:10.664 }, 00:25:10.664 { 00:25:10.664 "name": "BaseBdev4", 00:25:10.664 "uuid": "b77ee178-8522-448f-868f-14450eb9062b", 00:25:10.664 "is_configured": true, 00:25:10.664 "data_offset": 0, 00:25:10.664 "data_size": 65536 00:25:10.664 } 00:25:10.664 ] 00:25:10.664 }' 00:25:10.664 07:35:44 bdev_raid.raid_state_function_test -- 
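Below, the test exercises explicit detach and re-attach of a configured base bdev. A condensed sketch of that cycle, reusing the bdev_raid_remove_base_bdev/bdev_raid_add_base_bdev RPCs and the is_configured filter that appear in the following trace lines:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Detach BaseBdev3 and confirm its slot is reported as unconfigured.
$rpc -s $sock bdev_raid_remove_base_bdev BaseBdev3
state=$($rpc -s $sock bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured')
# Re-attach only if the detach was actually observed.
[[ $state == false ]] && $rpc -s $sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3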
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:10.664 07:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.233 07:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.233 07:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:11.492 07:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:25:11.492 07:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:25:11.492 [2024-07-12 07:35:45.355676] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:11.492 07:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:11.492 07:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:11.492 07:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:11.750 07:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:11.750 07:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:11.750 07:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:11.750 07:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:11.750 07:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:11.750 07:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:11.750 07:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:11.750 07:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.750 07:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:11.750 07:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:11.751 "name": "Existed_Raid", 00:25:11.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.751 "strip_size_kb": 0, 00:25:11.751 "state": "configuring", 00:25:11.751 "raid_level": "raid1", 00:25:11.751 "superblock": false, 00:25:11.751 "num_base_bdevs": 4, 00:25:11.751 "num_base_bdevs_discovered": 2, 00:25:11.751 "num_base_bdevs_operational": 4, 00:25:11.751 "base_bdevs_list": [ 00:25:11.751 { 00:25:11.751 "name": "BaseBdev1", 00:25:11.751 "uuid": "a138556d-a64c-461a-8ae9-5c2f77330ed9", 00:25:11.751 "is_configured": true, 00:25:11.751 "data_offset": 0, 00:25:11.751 "data_size": 65536 00:25:11.751 }, 00:25:11.751 { 00:25:11.751 "name": null, 00:25:11.751 "uuid": "576abac9-41fe-4f77-9fbc-72fe04de23b0", 00:25:11.751 "is_configured": false, 00:25:11.751 "data_offset": 0, 00:25:11.751 "data_size": 65536 00:25:11.751 }, 00:25:11.751 { 00:25:11.751 "name": null, 00:25:11.751 "uuid": "f0243b33-3c47-4d12-8ba6-cfed9925a014", 00:25:11.751 "is_configured": false, 00:25:11.751 "data_offset": 0, 00:25:11.751 "data_size": 65536 00:25:11.751 }, 00:25:11.751 { 00:25:11.751 "name": "BaseBdev4", 
00:25:11.751 "uuid": "b77ee178-8522-448f-868f-14450eb9062b", 00:25:11.751 "is_configured": true, 00:25:11.751 "data_offset": 0, 00:25:11.751 "data_size": 65536 00:25:11.751 } 00:25:11.751 ] 00:25:11.751 }' 00:25:11.751 07:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:11.751 07:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:12.317 07:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:12.317 07:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.885 07:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:25:12.885 07:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:12.885 [2024-07-12 07:35:46.747986] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:13.144 07:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:13.144 07:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:13.144 07:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:13.144 07:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:13.144 07:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:13.144 07:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:13.144 07:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:13.144 07:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:13.144 07:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:13.144 07:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:13.144 07:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.144 07:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:13.144 07:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:13.144 "name": "Existed_Raid", 00:25:13.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.144 "strip_size_kb": 0, 00:25:13.144 "state": "configuring", 00:25:13.144 "raid_level": "raid1", 00:25:13.144 "superblock": false, 00:25:13.144 "num_base_bdevs": 4, 00:25:13.144 "num_base_bdevs_discovered": 3, 00:25:13.144 "num_base_bdevs_operational": 4, 00:25:13.144 "base_bdevs_list": [ 00:25:13.144 { 00:25:13.144 "name": "BaseBdev1", 00:25:13.144 "uuid": "a138556d-a64c-461a-8ae9-5c2f77330ed9", 00:25:13.144 "is_configured": true, 00:25:13.144 "data_offset": 0, 00:25:13.144 "data_size": 65536 00:25:13.144 }, 00:25:13.144 { 00:25:13.144 "name": null, 00:25:13.144 "uuid": "576abac9-41fe-4f77-9fbc-72fe04de23b0", 00:25:13.144 "is_configured": false, 00:25:13.144 "data_offset": 0, 00:25:13.144 
"data_size": 65536 00:25:13.144 }, 00:25:13.144 { 00:25:13.144 "name": "BaseBdev3", 00:25:13.144 "uuid": "f0243b33-3c47-4d12-8ba6-cfed9925a014", 00:25:13.144 "is_configured": true, 00:25:13.144 "data_offset": 0, 00:25:13.144 "data_size": 65536 00:25:13.144 }, 00:25:13.144 { 00:25:13.144 "name": "BaseBdev4", 00:25:13.144 "uuid": "b77ee178-8522-448f-868f-14450eb9062b", 00:25:13.144 "is_configured": true, 00:25:13.144 "data_offset": 0, 00:25:13.144 "data_size": 65536 00:25:13.144 } 00:25:13.144 ] 00:25:13.144 }' 00:25:13.144 07:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:13.144 07:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.712 07:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:13.712 07:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.281 07:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:25:14.281 07:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:14.281 [2024-07-12 07:35:48.124290] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:14.281 07:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:14.281 07:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:14.281 07:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:14.281 07:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:14.281 07:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:14.281 07:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:14.281 07:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:14.281 07:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:14.281 07:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:14.281 07:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:14.539 07:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:14.539 07:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.797 07:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:14.797 "name": "Existed_Raid", 00:25:14.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.797 "strip_size_kb": 0, 00:25:14.797 "state": "configuring", 00:25:14.797 "raid_level": "raid1", 00:25:14.797 "superblock": false, 00:25:14.797 "num_base_bdevs": 4, 00:25:14.797 "num_base_bdevs_discovered": 2, 00:25:14.797 "num_base_bdevs_operational": 4, 00:25:14.797 "base_bdevs_list": [ 00:25:14.797 { 00:25:14.797 "name": null, 00:25:14.797 "uuid": "a138556d-a64c-461a-8ae9-5c2f77330ed9", 00:25:14.797 "is_configured": false, 
00:25:14.797 "data_offset": 0, 00:25:14.797 "data_size": 65536 00:25:14.797 }, 00:25:14.797 { 00:25:14.797 "name": null, 00:25:14.797 "uuid": "576abac9-41fe-4f77-9fbc-72fe04de23b0", 00:25:14.797 "is_configured": false, 00:25:14.797 "data_offset": 0, 00:25:14.797 "data_size": 65536 00:25:14.797 }, 00:25:14.797 { 00:25:14.797 "name": "BaseBdev3", 00:25:14.797 "uuid": "f0243b33-3c47-4d12-8ba6-cfed9925a014", 00:25:14.797 "is_configured": true, 00:25:14.797 "data_offset": 0, 00:25:14.797 "data_size": 65536 00:25:14.797 }, 00:25:14.797 { 00:25:14.797 "name": "BaseBdev4", 00:25:14.797 "uuid": "b77ee178-8522-448f-868f-14450eb9062b", 00:25:14.797 "is_configured": true, 00:25:14.797 "data_offset": 0, 00:25:14.797 "data_size": 65536 00:25:14.797 } 00:25:14.797 ] 00:25:14.797 }' 00:25:14.797 07:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:14.797 07:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.363 07:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.363 07:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:15.363 07:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:25:15.363 07:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:15.622 [2024-07-12 07:35:49.480300] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:15.622 07:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:15.622 07:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:15.622 07:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:15.622 07:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:15.622 07:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:15.622 07:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:15.622 07:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:15.622 07:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:15.622 07:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:15.622 07:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:15.622 07:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:15.622 07:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.881 07:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:15.881 "name": "Existed_Raid", 00:25:15.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.881 "strip_size_kb": 0, 00:25:15.881 "state": "configuring", 00:25:15.881 "raid_level": "raid1", 00:25:15.881 "superblock": false, 
00:25:15.881 "num_base_bdevs": 4, 00:25:15.881 "num_base_bdevs_discovered": 3, 00:25:15.881 "num_base_bdevs_operational": 4, 00:25:15.881 "base_bdevs_list": [ 00:25:15.881 { 00:25:15.881 "name": null, 00:25:15.881 "uuid": "a138556d-a64c-461a-8ae9-5c2f77330ed9", 00:25:15.881 "is_configured": false, 00:25:15.881 "data_offset": 0, 00:25:15.881 "data_size": 65536 00:25:15.881 }, 00:25:15.881 { 00:25:15.881 "name": "BaseBdev2", 00:25:15.881 "uuid": "576abac9-41fe-4f77-9fbc-72fe04de23b0", 00:25:15.881 "is_configured": true, 00:25:15.881 "data_offset": 0, 00:25:15.881 "data_size": 65536 00:25:15.881 }, 00:25:15.881 { 00:25:15.881 "name": "BaseBdev3", 00:25:15.881 "uuid": "f0243b33-3c47-4d12-8ba6-cfed9925a014", 00:25:15.881 "is_configured": true, 00:25:15.881 "data_offset": 0, 00:25:15.881 "data_size": 65536 00:25:15.881 }, 00:25:15.881 { 00:25:15.881 "name": "BaseBdev4", 00:25:15.881 "uuid": "b77ee178-8522-448f-868f-14450eb9062b", 00:25:15.881 "is_configured": true, 00:25:15.881 "data_offset": 0, 00:25:15.881 "data_size": 65536 00:25:15.881 } 00:25:15.881 ] 00:25:15.881 }' 00:25:15.881 07:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:15.881 07:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.821 07:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.821 07:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:16.821 07:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:25:16.821 07:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.821 07:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:17.079 07:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u a138556d-a64c-461a-8ae9-5c2f77330ed9 00:25:17.338 [2024-07-12 07:35:51.084299] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:17.338 [2024-07-12 07:35:51.084609] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:25:17.338 [2024-07-12 07:35:51.084673] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:17.338 [2024-07-12 07:35:51.084854] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:25:17.338 [2024-07-12 07:35:51.085397] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:25:17.338 [2024-07-12 07:35:51.085538] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008180 00:25:17.338 [2024-07-12 07:35:51.085842] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:17.338 NewBaseBdev 00:25:17.338 07:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:25:17.338 07:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:25:17.338 07:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:17.338 07:35:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:25:17.338 07:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:17.338 07:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:17.338 07:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:17.596 07:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:17.855 [ 00:25:17.855 { 00:25:17.855 "name": "NewBaseBdev", 00:25:17.855 "aliases": [ 00:25:17.855 "a138556d-a64c-461a-8ae9-5c2f77330ed9" 00:25:17.855 ], 00:25:17.855 "product_name": "Malloc disk", 00:25:17.855 "block_size": 512, 00:25:17.855 "num_blocks": 65536, 00:25:17.855 "uuid": "a138556d-a64c-461a-8ae9-5c2f77330ed9", 00:25:17.855 "assigned_rate_limits": { 00:25:17.855 "rw_ios_per_sec": 0, 00:25:17.855 "rw_mbytes_per_sec": 0, 00:25:17.855 "r_mbytes_per_sec": 0, 00:25:17.855 "w_mbytes_per_sec": 0 00:25:17.855 }, 00:25:17.855 "claimed": true, 00:25:17.855 "claim_type": "exclusive_write", 00:25:17.855 "zoned": false, 00:25:17.855 "supported_io_types": { 00:25:17.855 "read": true, 00:25:17.855 "write": true, 00:25:17.855 "unmap": true, 00:25:17.855 "write_zeroes": true, 00:25:17.855 "flush": true, 00:25:17.855 "reset": true, 00:25:17.855 "compare": false, 00:25:17.855 "compare_and_write": false, 00:25:17.855 "abort": true, 00:25:17.855 "nvme_admin": false, 00:25:17.855 "nvme_io": false 00:25:17.855 }, 00:25:17.855 "memory_domains": [ 00:25:17.855 { 00:25:17.855 "dma_device_id": "system", 00:25:17.855 "dma_device_type": 1 00:25:17.855 }, 00:25:17.855 { 00:25:17.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:17.855 "dma_device_type": 2 00:25:17.855 } 00:25:17.855 ], 00:25:17.855 "driver_specific": {} 00:25:17.855 } 00:25:17.855 ] 00:25:17.855 07:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:25:17.855 07:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:17.855 07:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:17.855 07:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:17.855 07:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:17.855 07:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:17.855 07:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:17.855 07:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:17.855 07:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:17.855 07:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:17.855 07:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:17.855 07:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:17.855 07:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.114 07:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:18.114 "name": "Existed_Raid", 00:25:18.114 "uuid": "47435ff4-adf9-4e15-9c5f-4ea11e7f9f85", 00:25:18.114 "strip_size_kb": 0, 00:25:18.114 "state": "online", 00:25:18.114 "raid_level": "raid1", 00:25:18.114 "superblock": false, 00:25:18.114 "num_base_bdevs": 4, 00:25:18.114 "num_base_bdevs_discovered": 4, 00:25:18.114 "num_base_bdevs_operational": 4, 00:25:18.114 "base_bdevs_list": [ 00:25:18.114 { 00:25:18.114 "name": "NewBaseBdev", 00:25:18.114 "uuid": "a138556d-a64c-461a-8ae9-5c2f77330ed9", 00:25:18.114 "is_configured": true, 00:25:18.114 "data_offset": 0, 00:25:18.114 "data_size": 65536 00:25:18.114 }, 00:25:18.114 { 00:25:18.114 "name": "BaseBdev2", 00:25:18.114 "uuid": "576abac9-41fe-4f77-9fbc-72fe04de23b0", 00:25:18.114 "is_configured": true, 00:25:18.114 "data_offset": 0, 00:25:18.114 "data_size": 65536 00:25:18.114 }, 00:25:18.114 { 00:25:18.114 "name": "BaseBdev3", 00:25:18.114 "uuid": "f0243b33-3c47-4d12-8ba6-cfed9925a014", 00:25:18.114 "is_configured": true, 00:25:18.114 "data_offset": 0, 00:25:18.114 "data_size": 65536 00:25:18.114 }, 00:25:18.114 { 00:25:18.114 "name": "BaseBdev4", 00:25:18.114 "uuid": "b77ee178-8522-448f-868f-14450eb9062b", 00:25:18.114 "is_configured": true, 00:25:18.114 "data_offset": 0, 00:25:18.114 "data_size": 65536 00:25:18.114 } 00:25:18.114 ] 00:25:18.114 }' 00:25:18.114 07:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:18.114 07:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.683 07:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:25:18.683 07:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:18.683 07:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:18.683 07:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:18.683 07:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:18.683 07:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:18.683 07:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:18.683 07:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:18.942 [2024-07-12 07:35:52.661034] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:18.942 07:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:18.942 "name": "Existed_Raid", 00:25:18.942 "aliases": [ 00:25:18.942 "47435ff4-adf9-4e15-9c5f-4ea11e7f9f85" 00:25:18.942 ], 00:25:18.942 "product_name": "Raid Volume", 00:25:18.942 "block_size": 512, 00:25:18.942 "num_blocks": 65536, 00:25:18.942 "uuid": "47435ff4-adf9-4e15-9c5f-4ea11e7f9f85", 00:25:18.942 "assigned_rate_limits": { 00:25:18.942 "rw_ios_per_sec": 0, 00:25:18.942 "rw_mbytes_per_sec": 0, 00:25:18.942 "r_mbytes_per_sec": 0, 00:25:18.942 "w_mbytes_per_sec": 0 00:25:18.942 }, 00:25:18.942 "claimed": false, 00:25:18.942 "zoned": false, 00:25:18.942 "supported_io_types": { 00:25:18.942 "read": true, 00:25:18.942 "write": 
true, 00:25:18.942 "unmap": false, 00:25:18.942 "write_zeroes": true, 00:25:18.942 "flush": false, 00:25:18.942 "reset": true, 00:25:18.942 "compare": false, 00:25:18.942 "compare_and_write": false, 00:25:18.942 "abort": false, 00:25:18.942 "nvme_admin": false, 00:25:18.942 "nvme_io": false 00:25:18.942 }, 00:25:18.942 "memory_domains": [ 00:25:18.942 { 00:25:18.942 "dma_device_id": "system", 00:25:18.942 "dma_device_type": 1 00:25:18.942 }, 00:25:18.942 { 00:25:18.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.942 "dma_device_type": 2 00:25:18.942 }, 00:25:18.942 { 00:25:18.942 "dma_device_id": "system", 00:25:18.942 "dma_device_type": 1 00:25:18.942 }, 00:25:18.942 { 00:25:18.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.942 "dma_device_type": 2 00:25:18.942 }, 00:25:18.942 { 00:25:18.942 "dma_device_id": "system", 00:25:18.942 "dma_device_type": 1 00:25:18.942 }, 00:25:18.942 { 00:25:18.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.942 "dma_device_type": 2 00:25:18.942 }, 00:25:18.942 { 00:25:18.942 "dma_device_id": "system", 00:25:18.942 "dma_device_type": 1 00:25:18.942 }, 00:25:18.942 { 00:25:18.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.942 "dma_device_type": 2 00:25:18.942 } 00:25:18.942 ], 00:25:18.942 "driver_specific": { 00:25:18.942 "raid": { 00:25:18.942 "uuid": "47435ff4-adf9-4e15-9c5f-4ea11e7f9f85", 00:25:18.942 "strip_size_kb": 0, 00:25:18.942 "state": "online", 00:25:18.942 "raid_level": "raid1", 00:25:18.942 "superblock": false, 00:25:18.942 "num_base_bdevs": 4, 00:25:18.942 "num_base_bdevs_discovered": 4, 00:25:18.942 "num_base_bdevs_operational": 4, 00:25:18.942 "base_bdevs_list": [ 00:25:18.942 { 00:25:18.942 "name": "NewBaseBdev", 00:25:18.942 "uuid": "a138556d-a64c-461a-8ae9-5c2f77330ed9", 00:25:18.942 "is_configured": true, 00:25:18.942 "data_offset": 0, 00:25:18.942 "data_size": 65536 00:25:18.942 }, 00:25:18.942 { 00:25:18.942 "name": "BaseBdev2", 00:25:18.942 "uuid": "576abac9-41fe-4f77-9fbc-72fe04de23b0", 00:25:18.942 "is_configured": true, 00:25:18.942 "data_offset": 0, 00:25:18.942 "data_size": 65536 00:25:18.942 }, 00:25:18.942 { 00:25:18.942 "name": "BaseBdev3", 00:25:18.942 "uuid": "f0243b33-3c47-4d12-8ba6-cfed9925a014", 00:25:18.942 "is_configured": true, 00:25:18.942 "data_offset": 0, 00:25:18.942 "data_size": 65536 00:25:18.942 }, 00:25:18.942 { 00:25:18.942 "name": "BaseBdev4", 00:25:18.942 "uuid": "b77ee178-8522-448f-868f-14450eb9062b", 00:25:18.942 "is_configured": true, 00:25:18.942 "data_offset": 0, 00:25:18.942 "data_size": 65536 00:25:18.942 } 00:25:18.942 ] 00:25:18.942 } 00:25:18.942 } 00:25:18.942 }' 00:25:18.942 07:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:18.942 07:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:25:18.942 BaseBdev2 00:25:18.942 BaseBdev3 00:25:18.942 BaseBdev4' 00:25:18.942 07:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:18.942 07:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:25:18.942 07:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:19.201 07:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:19.201 "name": "NewBaseBdev", 00:25:19.201 "aliases": 
[ 00:25:19.201 "a138556d-a64c-461a-8ae9-5c2f77330ed9" 00:25:19.201 ], 00:25:19.201 "product_name": "Malloc disk", 00:25:19.201 "block_size": 512, 00:25:19.201 "num_blocks": 65536, 00:25:19.201 "uuid": "a138556d-a64c-461a-8ae9-5c2f77330ed9", 00:25:19.201 "assigned_rate_limits": { 00:25:19.201 "rw_ios_per_sec": 0, 00:25:19.201 "rw_mbytes_per_sec": 0, 00:25:19.201 "r_mbytes_per_sec": 0, 00:25:19.201 "w_mbytes_per_sec": 0 00:25:19.201 }, 00:25:19.201 "claimed": true, 00:25:19.201 "claim_type": "exclusive_write", 00:25:19.201 "zoned": false, 00:25:19.201 "supported_io_types": { 00:25:19.201 "read": true, 00:25:19.201 "write": true, 00:25:19.201 "unmap": true, 00:25:19.201 "write_zeroes": true, 00:25:19.201 "flush": true, 00:25:19.201 "reset": true, 00:25:19.201 "compare": false, 00:25:19.201 "compare_and_write": false, 00:25:19.201 "abort": true, 00:25:19.201 "nvme_admin": false, 00:25:19.201 "nvme_io": false 00:25:19.201 }, 00:25:19.201 "memory_domains": [ 00:25:19.201 { 00:25:19.201 "dma_device_id": "system", 00:25:19.201 "dma_device_type": 1 00:25:19.201 }, 00:25:19.201 { 00:25:19.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:19.201 "dma_device_type": 2 00:25:19.201 } 00:25:19.201 ], 00:25:19.201 "driver_specific": {} 00:25:19.201 }' 00:25:19.201 07:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:19.201 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:19.460 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:19.460 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:19.460 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:19.460 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:19.460 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:19.460 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:19.460 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:19.460 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:19.460 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:19.719 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:19.719 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:19.719 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:19.719 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:19.978 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:19.978 "name": "BaseBdev2", 00:25:19.978 "aliases": [ 00:25:19.978 "576abac9-41fe-4f77-9fbc-72fe04de23b0" 00:25:19.978 ], 00:25:19.978 "product_name": "Malloc disk", 00:25:19.978 "block_size": 512, 00:25:19.978 "num_blocks": 65536, 00:25:19.978 "uuid": "576abac9-41fe-4f77-9fbc-72fe04de23b0", 00:25:19.978 "assigned_rate_limits": { 00:25:19.978 "rw_ios_per_sec": 0, 00:25:19.978 "rw_mbytes_per_sec": 0, 00:25:19.978 "r_mbytes_per_sec": 0, 00:25:19.978 "w_mbytes_per_sec": 0 00:25:19.978 }, 00:25:19.978 "claimed": true, 00:25:19.978 "claim_type": 
"exclusive_write", 00:25:19.978 "zoned": false, 00:25:19.978 "supported_io_types": { 00:25:19.978 "read": true, 00:25:19.978 "write": true, 00:25:19.978 "unmap": true, 00:25:19.978 "write_zeroes": true, 00:25:19.978 "flush": true, 00:25:19.978 "reset": true, 00:25:19.978 "compare": false, 00:25:19.978 "compare_and_write": false, 00:25:19.978 "abort": true, 00:25:19.978 "nvme_admin": false, 00:25:19.978 "nvme_io": false 00:25:19.978 }, 00:25:19.978 "memory_domains": [ 00:25:19.978 { 00:25:19.978 "dma_device_id": "system", 00:25:19.978 "dma_device_type": 1 00:25:19.978 }, 00:25:19.978 { 00:25:19.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:19.978 "dma_device_type": 2 00:25:19.978 } 00:25:19.978 ], 00:25:19.978 "driver_specific": {} 00:25:19.978 }' 00:25:19.978 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:19.978 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:19.978 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:19.978 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:19.978 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:19.978 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:19.978 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:20.237 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:20.237 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:20.237 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:20.237 07:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:20.237 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:20.237 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:20.237 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:20.237 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:20.496 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:20.496 "name": "BaseBdev3", 00:25:20.496 "aliases": [ 00:25:20.496 "f0243b33-3c47-4d12-8ba6-cfed9925a014" 00:25:20.496 ], 00:25:20.496 "product_name": "Malloc disk", 00:25:20.496 "block_size": 512, 00:25:20.496 "num_blocks": 65536, 00:25:20.496 "uuid": "f0243b33-3c47-4d12-8ba6-cfed9925a014", 00:25:20.496 "assigned_rate_limits": { 00:25:20.496 "rw_ios_per_sec": 0, 00:25:20.496 "rw_mbytes_per_sec": 0, 00:25:20.496 "r_mbytes_per_sec": 0, 00:25:20.496 "w_mbytes_per_sec": 0 00:25:20.496 }, 00:25:20.496 "claimed": true, 00:25:20.496 "claim_type": "exclusive_write", 00:25:20.496 "zoned": false, 00:25:20.496 "supported_io_types": { 00:25:20.496 "read": true, 00:25:20.496 "write": true, 00:25:20.496 "unmap": true, 00:25:20.496 "write_zeroes": true, 00:25:20.496 "flush": true, 00:25:20.496 "reset": true, 00:25:20.496 "compare": false, 00:25:20.496 "compare_and_write": false, 00:25:20.496 "abort": true, 00:25:20.496 "nvme_admin": false, 00:25:20.496 "nvme_io": false 00:25:20.496 }, 00:25:20.496 "memory_domains": [ 00:25:20.496 { 00:25:20.496 
"dma_device_id": "system", 00:25:20.496 "dma_device_type": 1 00:25:20.496 }, 00:25:20.496 { 00:25:20.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.496 "dma_device_type": 2 00:25:20.496 } 00:25:20.496 ], 00:25:20.496 "driver_specific": {} 00:25:20.496 }' 00:25:20.496 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:20.496 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:20.496 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:20.496 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:20.755 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:20.755 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:20.755 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:20.755 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:20.755 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:20.755 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:20.755 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:20.755 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:20.755 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:20.755 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:20.755 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:21.320 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:21.320 "name": "BaseBdev4", 00:25:21.320 "aliases": [ 00:25:21.320 "b77ee178-8522-448f-868f-14450eb9062b" 00:25:21.320 ], 00:25:21.320 "product_name": "Malloc disk", 00:25:21.320 "block_size": 512, 00:25:21.320 "num_blocks": 65536, 00:25:21.320 "uuid": "b77ee178-8522-448f-868f-14450eb9062b", 00:25:21.320 "assigned_rate_limits": { 00:25:21.320 "rw_ios_per_sec": 0, 00:25:21.320 "rw_mbytes_per_sec": 0, 00:25:21.320 "r_mbytes_per_sec": 0, 00:25:21.320 "w_mbytes_per_sec": 0 00:25:21.320 }, 00:25:21.320 "claimed": true, 00:25:21.320 "claim_type": "exclusive_write", 00:25:21.320 "zoned": false, 00:25:21.320 "supported_io_types": { 00:25:21.320 "read": true, 00:25:21.320 "write": true, 00:25:21.320 "unmap": true, 00:25:21.320 "write_zeroes": true, 00:25:21.320 "flush": true, 00:25:21.320 "reset": true, 00:25:21.320 "compare": false, 00:25:21.320 "compare_and_write": false, 00:25:21.320 "abort": true, 00:25:21.320 "nvme_admin": false, 00:25:21.320 "nvme_io": false 00:25:21.320 }, 00:25:21.320 "memory_domains": [ 00:25:21.320 { 00:25:21.320 "dma_device_id": "system", 00:25:21.320 "dma_device_type": 1 00:25:21.320 }, 00:25:21.320 { 00:25:21.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.320 "dma_device_type": 2 00:25:21.320 } 00:25:21.320 ], 00:25:21.320 "driver_specific": {} 00:25:21.320 }' 00:25:21.320 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:21.320 07:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:21.320 07:35:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:21.320 07:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:21.320 07:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:21.320 07:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:21.320 07:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:21.320 07:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:21.578 07:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:21.578 07:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:21.578 07:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:21.578 07:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:21.578 07:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:21.837 [2024-07-12 07:35:55.585764] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:21.837 [2024-07-12 07:35:55.586071] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:21.837 [2024-07-12 07:35:55.586333] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:21.837 [2024-07-12 07:35:55.586757] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:21.837 [2024-07-12 07:35:55.586944] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name Existed_Raid, state offline 00:25:21.837 07:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 150382 00:25:21.837 07:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 150382 ']' 00:25:21.837 07:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 150382 00:25:21.837 07:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:25:21.837 07:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:21.837 07:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 150382 00:25:21.837 07:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:21.837 07:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:21.837 07:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 150382' 00:25:21.837 killing process with pid 150382 00:25:21.837 07:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 150382 00:25:21.837 07:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 150382 00:25:21.837 [2024-07-12 07:35:55.634878] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:21.837 [2024-07-12 07:35:55.676742] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:22.095 ************************************ 00:25:22.096 END TEST raid_state_function_test 00:25:22.096 ************************************ 00:25:22.096 
07:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:25:22.096 00:25:22.096 real 0m32.329s 00:25:22.096 user 0m59.577s 00:25:22.096 sys 0m5.578s 00:25:22.096 07:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:22.096 07:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.354 07:35:55 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:25:22.354 07:35:55 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:25:22.354 07:35:55 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:22.354 07:35:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:22.354 ************************************ 00:25:22.354 START TEST raid_state_function_test_sb 00:25:22.354 ************************************ 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 4 true 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:22.354 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:22.355 07:35:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:22.355 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:22.355 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:25:22.355 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:25:22.355 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:25:22.355 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:25:22.355 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=151468 00:25:22.355 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 151468' 00:25:22.355 Process raid pid: 151468 00:25:22.355 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:22.355 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 151468 /var/tmp/spdk-raid.sock 00:25:22.355 07:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 151468 ']' 00:25:22.355 07:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:22.355 07:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:22.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:22.355 07:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:22.355 07:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:22.355 07:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.355 [2024-07-12 07:35:56.076593] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:25:22.355 [2024-07-12 07:35:56.077054] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.355 [2024-07-12 07:35:56.222048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.613 [2024-07-12 07:35:56.267966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.613 [2024-07-12 07:35:56.310205] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:22.613 07:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:22.613 07:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:25:22.613 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:22.872 [2024-07-12 07:35:56.619771] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:22.872 [2024-07-12 07:35:56.620023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:22.872 [2024-07-12 07:35:56.620108] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:22.872 [2024-07-12 07:35:56.620158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:22.872 [2024-07-12 07:35:56.620230] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:22.872 [2024-07-12 07:35:56.620300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:22.872 [2024-07-12 07:35:56.620374] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:22.872 [2024-07-12 07:35:56.620423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:22.872 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:22.872 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:22.872 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:22.872 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:22.872 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:22.872 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:22.872 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:22.872 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:22.872 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:22.872 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:22.872 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.872 07:35:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:23.131 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:23.131 "name": "Existed_Raid", 00:25:23.131 "uuid": "5a20a91a-79ec-4a7a-b901-57caf7c382db", 00:25:23.131 "strip_size_kb": 0, 00:25:23.131 "state": "configuring", 00:25:23.131 "raid_level": "raid1", 00:25:23.131 "superblock": true, 00:25:23.131 "num_base_bdevs": 4, 00:25:23.131 "num_base_bdevs_discovered": 0, 00:25:23.131 "num_base_bdevs_operational": 4, 00:25:23.131 "base_bdevs_list": [ 00:25:23.131 { 00:25:23.131 "name": "BaseBdev1", 00:25:23.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.132 "is_configured": false, 00:25:23.132 "data_offset": 0, 00:25:23.132 "data_size": 0 00:25:23.132 }, 00:25:23.132 { 00:25:23.132 "name": "BaseBdev2", 00:25:23.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.132 "is_configured": false, 00:25:23.132 "data_offset": 0, 00:25:23.132 "data_size": 0 00:25:23.132 }, 00:25:23.132 { 00:25:23.132 "name": "BaseBdev3", 00:25:23.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.132 "is_configured": false, 00:25:23.132 "data_offset": 0, 00:25:23.132 "data_size": 0 00:25:23.132 }, 00:25:23.132 { 00:25:23.132 "name": "BaseBdev4", 00:25:23.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.132 "is_configured": false, 00:25:23.132 "data_offset": 0, 00:25:23.132 "data_size": 0 00:25:23.132 } 00:25:23.132 ] 00:25:23.132 }' 00:25:23.132 07:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:23.132 07:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.698 07:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:23.957 [2024-07-12 07:35:57.695847] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:23.957 [2024-07-12 07:35:57.696056] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:25:23.957 07:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:24.216 [2024-07-12 07:35:57.975898] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:24.216 [2024-07-12 07:35:57.976174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:24.216 [2024-07-12 07:35:57.976347] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:24.216 [2024-07-12 07:35:57.976406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:24.216 [2024-07-12 07:35:57.976434] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:24.216 [2024-07-12 07:35:57.976512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:24.216 [2024-07-12 07:35:57.976543] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:24.216 [2024-07-12 07:35:57.976592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:24.216 07:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:24.474 [2024-07-12 07:35:58.189383] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:24.474 BaseBdev1 00:25:24.474 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:24.474 07:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:25:24.474 07:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:24.474 07:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:24.474 07:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:24.474 07:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:24.474 07:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:24.733 07:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:24.733 [ 00:25:24.733 { 00:25:24.733 "name": "BaseBdev1", 00:25:24.733 "aliases": [ 00:25:24.733 "76fd6e0e-7bea-4c27-895b-3a5d68721031" 00:25:24.733 ], 00:25:24.733 "product_name": "Malloc disk", 00:25:24.733 "block_size": 512, 00:25:24.733 "num_blocks": 65536, 00:25:24.733 "uuid": "76fd6e0e-7bea-4c27-895b-3a5d68721031", 00:25:24.733 "assigned_rate_limits": { 00:25:24.733 "rw_ios_per_sec": 0, 00:25:24.733 "rw_mbytes_per_sec": 0, 00:25:24.733 "r_mbytes_per_sec": 0, 00:25:24.733 "w_mbytes_per_sec": 0 00:25:24.733 }, 00:25:24.733 "claimed": true, 00:25:24.733 "claim_type": "exclusive_write", 00:25:24.733 "zoned": false, 00:25:24.733 "supported_io_types": { 00:25:24.733 "read": true, 00:25:24.733 "write": true, 00:25:24.733 "unmap": true, 00:25:24.733 "write_zeroes": true, 00:25:24.733 "flush": true, 00:25:24.733 "reset": true, 00:25:24.733 "compare": false, 00:25:24.733 "compare_and_write": false, 00:25:24.733 "abort": true, 00:25:24.733 "nvme_admin": false, 00:25:24.733 "nvme_io": false 00:25:24.733 }, 00:25:24.733 "memory_domains": [ 00:25:24.733 { 00:25:24.733 "dma_device_id": "system", 00:25:24.733 "dma_device_type": 1 00:25:24.733 }, 00:25:24.733 { 00:25:24.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:24.733 "dma_device_type": 2 00:25:24.733 } 00:25:24.733 ], 00:25:24.733 "driver_specific": {} 00:25:24.733 } 00:25:24.733 ] 00:25:24.733 07:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:24.733 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:24.733 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:24.733 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:24.733 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:24.733 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:24.733 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:25:24.733 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:24.733 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:24.733 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:24.733 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:24.733 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:24.733 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.020 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:25.020 "name": "Existed_Raid", 00:25:25.020 "uuid": "0bde887e-38cd-4a1f-b457-cbd2910bdfcd", 00:25:25.020 "strip_size_kb": 0, 00:25:25.020 "state": "configuring", 00:25:25.020 "raid_level": "raid1", 00:25:25.020 "superblock": true, 00:25:25.020 "num_base_bdevs": 4, 00:25:25.020 "num_base_bdevs_discovered": 1, 00:25:25.020 "num_base_bdevs_operational": 4, 00:25:25.020 "base_bdevs_list": [ 00:25:25.020 { 00:25:25.020 "name": "BaseBdev1", 00:25:25.020 "uuid": "76fd6e0e-7bea-4c27-895b-3a5d68721031", 00:25:25.020 "is_configured": true, 00:25:25.020 "data_offset": 2048, 00:25:25.020 "data_size": 63488 00:25:25.020 }, 00:25:25.020 { 00:25:25.020 "name": "BaseBdev2", 00:25:25.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.020 "is_configured": false, 00:25:25.020 "data_offset": 0, 00:25:25.020 "data_size": 0 00:25:25.020 }, 00:25:25.020 { 00:25:25.020 "name": "BaseBdev3", 00:25:25.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.020 "is_configured": false, 00:25:25.020 "data_offset": 0, 00:25:25.020 "data_size": 0 00:25:25.020 }, 00:25:25.020 { 00:25:25.020 "name": "BaseBdev4", 00:25:25.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.020 "is_configured": false, 00:25:25.020 "data_offset": 0, 00:25:25.020 "data_size": 0 00:25:25.020 } 00:25:25.020 ] 00:25:25.020 }' 00:25:25.020 07:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:25.020 07:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.605 07:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:25.864 [2024-07-12 07:35:59.669788] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:25.864 [2024-07-12 07:35:59.670025] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:25:25.864 07:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:26.122 [2024-07-12 07:35:59.977983] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:26.122 [2024-07-12 07:35:59.980643] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:26.122 [2024-07-12 07:35:59.980859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:26.122 [2024-07-12 07:35:59.980955] 
bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:26.122 [2024-07-12 07:35:59.981019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:26.122 [2024-07-12 07:35:59.981106] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:26.122 [2024-07-12 07:35:59.981167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:26.380 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:26.380 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:26.380 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:26.380 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:26.380 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:26.380 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:26.380 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:26.380 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:26.380 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:26.380 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:26.380 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:26.380 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:26.380 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.380 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:26.639 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:26.639 "name": "Existed_Raid", 00:25:26.639 "uuid": "6f1d7360-ce14-4258-87d3-263734ec2428", 00:25:26.639 "strip_size_kb": 0, 00:25:26.639 "state": "configuring", 00:25:26.639 "raid_level": "raid1", 00:25:26.639 "superblock": true, 00:25:26.639 "num_base_bdevs": 4, 00:25:26.639 "num_base_bdevs_discovered": 1, 00:25:26.639 "num_base_bdevs_operational": 4, 00:25:26.639 "base_bdevs_list": [ 00:25:26.639 { 00:25:26.639 "name": "BaseBdev1", 00:25:26.639 "uuid": "76fd6e0e-7bea-4c27-895b-3a5d68721031", 00:25:26.639 "is_configured": true, 00:25:26.639 "data_offset": 2048, 00:25:26.639 "data_size": 63488 00:25:26.639 }, 00:25:26.639 { 00:25:26.639 "name": "BaseBdev2", 00:25:26.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.639 "is_configured": false, 00:25:26.639 "data_offset": 0, 00:25:26.639 "data_size": 0 00:25:26.639 }, 00:25:26.639 { 00:25:26.639 "name": "BaseBdev3", 00:25:26.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.639 "is_configured": false, 00:25:26.639 "data_offset": 0, 00:25:26.639 "data_size": 0 00:25:26.639 }, 00:25:26.639 { 00:25:26.639 "name": "BaseBdev4", 00:25:26.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.639 "is_configured": false, 00:25:26.639 
"data_offset": 0, 00:25:26.639 "data_size": 0 00:25:26.639 } 00:25:26.639 ] 00:25:26.639 }' 00:25:26.639 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:26.639 07:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:27.207 07:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:27.465 [2024-07-12 07:36:01.333056] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:27.465 BaseBdev2 00:25:27.723 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:27.723 07:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:25:27.723 07:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:27.723 07:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:27.723 07:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:27.723 07:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:27.723 07:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:27.723 07:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:27.982 [ 00:25:27.982 { 00:25:27.982 "name": "BaseBdev2", 00:25:27.982 "aliases": [ 00:25:27.982 "c5ca4dcf-288d-47e4-a1ec-44a042d3fd37" 00:25:27.982 ], 00:25:27.982 "product_name": "Malloc disk", 00:25:27.982 "block_size": 512, 00:25:27.982 "num_blocks": 65536, 00:25:27.982 "uuid": "c5ca4dcf-288d-47e4-a1ec-44a042d3fd37", 00:25:27.982 "assigned_rate_limits": { 00:25:27.982 "rw_ios_per_sec": 0, 00:25:27.982 "rw_mbytes_per_sec": 0, 00:25:27.982 "r_mbytes_per_sec": 0, 00:25:27.982 "w_mbytes_per_sec": 0 00:25:27.982 }, 00:25:27.982 "claimed": true, 00:25:27.982 "claim_type": "exclusive_write", 00:25:27.982 "zoned": false, 00:25:27.982 "supported_io_types": { 00:25:27.982 "read": true, 00:25:27.982 "write": true, 00:25:27.982 "unmap": true, 00:25:27.982 "write_zeroes": true, 00:25:27.982 "flush": true, 00:25:27.982 "reset": true, 00:25:27.982 "compare": false, 00:25:27.982 "compare_and_write": false, 00:25:27.982 "abort": true, 00:25:27.982 "nvme_admin": false, 00:25:27.982 "nvme_io": false 00:25:27.982 }, 00:25:27.982 "memory_domains": [ 00:25:27.982 { 00:25:27.982 "dma_device_id": "system", 00:25:27.982 "dma_device_type": 1 00:25:27.982 }, 00:25:27.982 { 00:25:27.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.982 "dma_device_type": 2 00:25:27.982 } 00:25:27.982 ], 00:25:27.982 "driver_specific": {} 00:25:27.982 } 00:25:27.982 ] 00:25:27.982 07:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:27.982 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:27.982 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:27.982 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 4 00:25:27.982 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:27.982 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:27.982 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:27.982 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:27.982 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:27.982 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:27.982 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:27.982 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:27.982 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:27.982 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.982 07:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:28.241 07:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:28.241 "name": "Existed_Raid", 00:25:28.241 "uuid": "6f1d7360-ce14-4258-87d3-263734ec2428", 00:25:28.241 "strip_size_kb": 0, 00:25:28.241 "state": "configuring", 00:25:28.241 "raid_level": "raid1", 00:25:28.241 "superblock": true, 00:25:28.241 "num_base_bdevs": 4, 00:25:28.241 "num_base_bdevs_discovered": 2, 00:25:28.241 "num_base_bdevs_operational": 4, 00:25:28.241 "base_bdevs_list": [ 00:25:28.241 { 00:25:28.241 "name": "BaseBdev1", 00:25:28.241 "uuid": "76fd6e0e-7bea-4c27-895b-3a5d68721031", 00:25:28.241 "is_configured": true, 00:25:28.241 "data_offset": 2048, 00:25:28.241 "data_size": 63488 00:25:28.241 }, 00:25:28.241 { 00:25:28.241 "name": "BaseBdev2", 00:25:28.241 "uuid": "c5ca4dcf-288d-47e4-a1ec-44a042d3fd37", 00:25:28.241 "is_configured": true, 00:25:28.241 "data_offset": 2048, 00:25:28.241 "data_size": 63488 00:25:28.241 }, 00:25:28.241 { 00:25:28.241 "name": "BaseBdev3", 00:25:28.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.241 "is_configured": false, 00:25:28.241 "data_offset": 0, 00:25:28.241 "data_size": 0 00:25:28.241 }, 00:25:28.241 { 00:25:28.241 "name": "BaseBdev4", 00:25:28.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.241 "is_configured": false, 00:25:28.241 "data_offset": 0, 00:25:28.241 "data_size": 0 00:25:28.241 } 00:25:28.241 ] 00:25:28.241 }' 00:25:28.241 07:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:28.241 07:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:29.176 07:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:29.176 [2024-07-12 07:36:02.964894] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:29.176 BaseBdev3 00:25:29.176 07:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:29.176 07:36:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:25:29.176 07:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:29.176 07:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:29.176 07:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:29.176 07:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:29.177 07:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:29.436 07:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:29.696 [ 00:25:29.696 { 00:25:29.696 "name": "BaseBdev3", 00:25:29.696 "aliases": [ 00:25:29.696 "d8b2ac23-d61c-4109-936c-8774798d8a2d" 00:25:29.696 ], 00:25:29.696 "product_name": "Malloc disk", 00:25:29.696 "block_size": 512, 00:25:29.696 "num_blocks": 65536, 00:25:29.696 "uuid": "d8b2ac23-d61c-4109-936c-8774798d8a2d", 00:25:29.696 "assigned_rate_limits": { 00:25:29.696 "rw_ios_per_sec": 0, 00:25:29.696 "rw_mbytes_per_sec": 0, 00:25:29.696 "r_mbytes_per_sec": 0, 00:25:29.696 "w_mbytes_per_sec": 0 00:25:29.696 }, 00:25:29.696 "claimed": true, 00:25:29.696 "claim_type": "exclusive_write", 00:25:29.696 "zoned": false, 00:25:29.696 "supported_io_types": { 00:25:29.696 "read": true, 00:25:29.696 "write": true, 00:25:29.696 "unmap": true, 00:25:29.696 "write_zeroes": true, 00:25:29.696 "flush": true, 00:25:29.696 "reset": true, 00:25:29.696 "compare": false, 00:25:29.696 "compare_and_write": false, 00:25:29.696 "abort": true, 00:25:29.696 "nvme_admin": false, 00:25:29.696 "nvme_io": false 00:25:29.696 }, 00:25:29.696 "memory_domains": [ 00:25:29.696 { 00:25:29.696 "dma_device_id": "system", 00:25:29.696 "dma_device_type": 1 00:25:29.696 }, 00:25:29.696 { 00:25:29.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:29.696 "dma_device_type": 2 00:25:29.696 } 00:25:29.696 ], 00:25:29.696 "driver_specific": {} 00:25:29.696 } 00:25:29.696 ] 00:25:29.696 07:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:29.696 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:29.696 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:29.696 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:29.696 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:29.696 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:29.696 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:29.696 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:29.696 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:29.696 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:29.696 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:25:29.696 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:29.696 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:29.696 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.696 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:29.964 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:29.964 "name": "Existed_Raid", 00:25:29.964 "uuid": "6f1d7360-ce14-4258-87d3-263734ec2428", 00:25:29.964 "strip_size_kb": 0, 00:25:29.964 "state": "configuring", 00:25:29.964 "raid_level": "raid1", 00:25:29.964 "superblock": true, 00:25:29.964 "num_base_bdevs": 4, 00:25:29.964 "num_base_bdevs_discovered": 3, 00:25:29.964 "num_base_bdevs_operational": 4, 00:25:29.964 "base_bdevs_list": [ 00:25:29.964 { 00:25:29.964 "name": "BaseBdev1", 00:25:29.964 "uuid": "76fd6e0e-7bea-4c27-895b-3a5d68721031", 00:25:29.964 "is_configured": true, 00:25:29.964 "data_offset": 2048, 00:25:29.964 "data_size": 63488 00:25:29.964 }, 00:25:29.964 { 00:25:29.964 "name": "BaseBdev2", 00:25:29.964 "uuid": "c5ca4dcf-288d-47e4-a1ec-44a042d3fd37", 00:25:29.964 "is_configured": true, 00:25:29.964 "data_offset": 2048, 00:25:29.964 "data_size": 63488 00:25:29.964 }, 00:25:29.964 { 00:25:29.964 "name": "BaseBdev3", 00:25:29.964 "uuid": "d8b2ac23-d61c-4109-936c-8774798d8a2d", 00:25:29.964 "is_configured": true, 00:25:29.964 "data_offset": 2048, 00:25:29.964 "data_size": 63488 00:25:29.964 }, 00:25:29.964 { 00:25:29.964 "name": "BaseBdev4", 00:25:29.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.964 "is_configured": false, 00:25:29.964 "data_offset": 0, 00:25:29.964 "data_size": 0 00:25:29.964 } 00:25:29.964 ] 00:25:29.964 }' 00:25:29.964 07:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:29.964 07:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.531 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:30.791 [2024-07-12 07:36:04.568868] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:30.791 [2024-07-12 07:36:04.569373] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:25:30.791 [2024-07-12 07:36:04.569484] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:30.791 [2024-07-12 07:36:04.569653] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:25:30.791 [2024-07-12 07:36:04.570173] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:25:30.791 [2024-07-12 07:36:04.570298] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:25:30.791 [2024-07-12 07:36:04.570507] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:30.791 BaseBdev4 00:25:30.791 07:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:25:30.791 07:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 
00:25:30.791 07:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:30.791 07:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:30.791 07:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:30.791 07:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:30.791 07:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:31.050 07:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:31.309 [ 00:25:31.309 { 00:25:31.309 "name": "BaseBdev4", 00:25:31.309 "aliases": [ 00:25:31.309 "48ab3761-2205-4265-bfed-403c895d84df" 00:25:31.309 ], 00:25:31.309 "product_name": "Malloc disk", 00:25:31.309 "block_size": 512, 00:25:31.309 "num_blocks": 65536, 00:25:31.309 "uuid": "48ab3761-2205-4265-bfed-403c895d84df", 00:25:31.309 "assigned_rate_limits": { 00:25:31.309 "rw_ios_per_sec": 0, 00:25:31.309 "rw_mbytes_per_sec": 0, 00:25:31.309 "r_mbytes_per_sec": 0, 00:25:31.309 "w_mbytes_per_sec": 0 00:25:31.309 }, 00:25:31.309 "claimed": true, 00:25:31.309 "claim_type": "exclusive_write", 00:25:31.309 "zoned": false, 00:25:31.309 "supported_io_types": { 00:25:31.309 "read": true, 00:25:31.309 "write": true, 00:25:31.309 "unmap": true, 00:25:31.309 "write_zeroes": true, 00:25:31.309 "flush": true, 00:25:31.309 "reset": true, 00:25:31.309 "compare": false, 00:25:31.309 "compare_and_write": false, 00:25:31.309 "abort": true, 00:25:31.309 "nvme_admin": false, 00:25:31.309 "nvme_io": false 00:25:31.309 }, 00:25:31.309 "memory_domains": [ 00:25:31.309 { 00:25:31.309 "dma_device_id": "system", 00:25:31.309 "dma_device_type": 1 00:25:31.309 }, 00:25:31.309 { 00:25:31.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.309 "dma_device_type": 2 00:25:31.309 } 00:25:31.309 ], 00:25:31.309 "driver_specific": {} 00:25:31.309 } 00:25:31.309 ] 00:25:31.309 07:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:31.309 07:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:31.309 07:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:31.309 07:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:31.309 07:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:31.309 07:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:31.309 07:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:31.309 07:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:31.309 07:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:31.309 07:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:31.309 07:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:31.309 07:36:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:31.309 07:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:31.309 07:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.309 07:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:31.568 07:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:31.568 "name": "Existed_Raid", 00:25:31.568 "uuid": "6f1d7360-ce14-4258-87d3-263734ec2428", 00:25:31.568 "strip_size_kb": 0, 00:25:31.568 "state": "online", 00:25:31.568 "raid_level": "raid1", 00:25:31.568 "superblock": true, 00:25:31.568 "num_base_bdevs": 4, 00:25:31.568 "num_base_bdevs_discovered": 4, 00:25:31.568 "num_base_bdevs_operational": 4, 00:25:31.568 "base_bdevs_list": [ 00:25:31.568 { 00:25:31.568 "name": "BaseBdev1", 00:25:31.568 "uuid": "76fd6e0e-7bea-4c27-895b-3a5d68721031", 00:25:31.568 "is_configured": true, 00:25:31.568 "data_offset": 2048, 00:25:31.568 "data_size": 63488 00:25:31.568 }, 00:25:31.568 { 00:25:31.568 "name": "BaseBdev2", 00:25:31.568 "uuid": "c5ca4dcf-288d-47e4-a1ec-44a042d3fd37", 00:25:31.568 "is_configured": true, 00:25:31.568 "data_offset": 2048, 00:25:31.568 "data_size": 63488 00:25:31.568 }, 00:25:31.568 { 00:25:31.568 "name": "BaseBdev3", 00:25:31.568 "uuid": "d8b2ac23-d61c-4109-936c-8774798d8a2d", 00:25:31.568 "is_configured": true, 00:25:31.568 "data_offset": 2048, 00:25:31.568 "data_size": 63488 00:25:31.568 }, 00:25:31.568 { 00:25:31.568 "name": "BaseBdev4", 00:25:31.568 "uuid": "48ab3761-2205-4265-bfed-403c895d84df", 00:25:31.568 "is_configured": true, 00:25:31.568 "data_offset": 2048, 00:25:31.568 "data_size": 63488 00:25:31.568 } 00:25:31.568 ] 00:25:31.568 }' 00:25:31.568 07:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:31.568 07:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:32.504 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:32.504 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:32.504 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:32.504 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:32.504 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:32.504 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:25:32.504 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:32.504 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:32.504 [2024-07-12 07:36:06.281639] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:32.504 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:32.504 "name": "Existed_Raid", 00:25:32.504 "aliases": [ 00:25:32.504 "6f1d7360-ce14-4258-87d3-263734ec2428" 00:25:32.504 ], 00:25:32.504 "product_name": "Raid Volume", 00:25:32.504 "block_size": 512, 
00:25:32.504 "num_blocks": 63488, 00:25:32.504 "uuid": "6f1d7360-ce14-4258-87d3-263734ec2428", 00:25:32.504 "assigned_rate_limits": { 00:25:32.504 "rw_ios_per_sec": 0, 00:25:32.504 "rw_mbytes_per_sec": 0, 00:25:32.504 "r_mbytes_per_sec": 0, 00:25:32.504 "w_mbytes_per_sec": 0 00:25:32.504 }, 00:25:32.504 "claimed": false, 00:25:32.504 "zoned": false, 00:25:32.504 "supported_io_types": { 00:25:32.504 "read": true, 00:25:32.504 "write": true, 00:25:32.504 "unmap": false, 00:25:32.504 "write_zeroes": true, 00:25:32.504 "flush": false, 00:25:32.504 "reset": true, 00:25:32.504 "compare": false, 00:25:32.504 "compare_and_write": false, 00:25:32.504 "abort": false, 00:25:32.504 "nvme_admin": false, 00:25:32.504 "nvme_io": false 00:25:32.504 }, 00:25:32.504 "memory_domains": [ 00:25:32.504 { 00:25:32.504 "dma_device_id": "system", 00:25:32.504 "dma_device_type": 1 00:25:32.504 }, 00:25:32.504 { 00:25:32.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.504 "dma_device_type": 2 00:25:32.504 }, 00:25:32.504 { 00:25:32.504 "dma_device_id": "system", 00:25:32.504 "dma_device_type": 1 00:25:32.504 }, 00:25:32.504 { 00:25:32.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.504 "dma_device_type": 2 00:25:32.504 }, 00:25:32.504 { 00:25:32.504 "dma_device_id": "system", 00:25:32.504 "dma_device_type": 1 00:25:32.504 }, 00:25:32.504 { 00:25:32.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.504 "dma_device_type": 2 00:25:32.504 }, 00:25:32.504 { 00:25:32.504 "dma_device_id": "system", 00:25:32.504 "dma_device_type": 1 00:25:32.504 }, 00:25:32.504 { 00:25:32.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.504 "dma_device_type": 2 00:25:32.504 } 00:25:32.504 ], 00:25:32.504 "driver_specific": { 00:25:32.504 "raid": { 00:25:32.504 "uuid": "6f1d7360-ce14-4258-87d3-263734ec2428", 00:25:32.504 "strip_size_kb": 0, 00:25:32.504 "state": "online", 00:25:32.504 "raid_level": "raid1", 00:25:32.504 "superblock": true, 00:25:32.504 "num_base_bdevs": 4, 00:25:32.504 "num_base_bdevs_discovered": 4, 00:25:32.504 "num_base_bdevs_operational": 4, 00:25:32.504 "base_bdevs_list": [ 00:25:32.504 { 00:25:32.504 "name": "BaseBdev1", 00:25:32.504 "uuid": "76fd6e0e-7bea-4c27-895b-3a5d68721031", 00:25:32.504 "is_configured": true, 00:25:32.504 "data_offset": 2048, 00:25:32.504 "data_size": 63488 00:25:32.504 }, 00:25:32.504 { 00:25:32.504 "name": "BaseBdev2", 00:25:32.504 "uuid": "c5ca4dcf-288d-47e4-a1ec-44a042d3fd37", 00:25:32.504 "is_configured": true, 00:25:32.504 "data_offset": 2048, 00:25:32.504 "data_size": 63488 00:25:32.504 }, 00:25:32.504 { 00:25:32.505 "name": "BaseBdev3", 00:25:32.505 "uuid": "d8b2ac23-d61c-4109-936c-8774798d8a2d", 00:25:32.505 "is_configured": true, 00:25:32.505 "data_offset": 2048, 00:25:32.505 "data_size": 63488 00:25:32.505 }, 00:25:32.505 { 00:25:32.505 "name": "BaseBdev4", 00:25:32.505 "uuid": "48ab3761-2205-4265-bfed-403c895d84df", 00:25:32.505 "is_configured": true, 00:25:32.505 "data_offset": 2048, 00:25:32.505 "data_size": 63488 00:25:32.505 } 00:25:32.505 ] 00:25:32.505 } 00:25:32.505 } 00:25:32.505 }' 00:25:32.505 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:32.505 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:32.505 BaseBdev2 00:25:32.505 BaseBdev3 00:25:32.505 BaseBdev4' 00:25:32.505 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:25:32.505 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:32.505 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:33.074 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:33.074 "name": "BaseBdev1", 00:25:33.074 "aliases": [ 00:25:33.074 "76fd6e0e-7bea-4c27-895b-3a5d68721031" 00:25:33.074 ], 00:25:33.074 "product_name": "Malloc disk", 00:25:33.074 "block_size": 512, 00:25:33.074 "num_blocks": 65536, 00:25:33.074 "uuid": "76fd6e0e-7bea-4c27-895b-3a5d68721031", 00:25:33.074 "assigned_rate_limits": { 00:25:33.074 "rw_ios_per_sec": 0, 00:25:33.074 "rw_mbytes_per_sec": 0, 00:25:33.074 "r_mbytes_per_sec": 0, 00:25:33.074 "w_mbytes_per_sec": 0 00:25:33.074 }, 00:25:33.074 "claimed": true, 00:25:33.074 "claim_type": "exclusive_write", 00:25:33.074 "zoned": false, 00:25:33.074 "supported_io_types": { 00:25:33.074 "read": true, 00:25:33.074 "write": true, 00:25:33.074 "unmap": true, 00:25:33.074 "write_zeroes": true, 00:25:33.074 "flush": true, 00:25:33.074 "reset": true, 00:25:33.074 "compare": false, 00:25:33.074 "compare_and_write": false, 00:25:33.074 "abort": true, 00:25:33.074 "nvme_admin": false, 00:25:33.074 "nvme_io": false 00:25:33.074 }, 00:25:33.074 "memory_domains": [ 00:25:33.074 { 00:25:33.074 "dma_device_id": "system", 00:25:33.074 "dma_device_type": 1 00:25:33.074 }, 00:25:33.074 { 00:25:33.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.074 "dma_device_type": 2 00:25:33.074 } 00:25:33.074 ], 00:25:33.074 "driver_specific": {} 00:25:33.074 }' 00:25:33.074 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:33.074 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:33.074 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:33.074 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:33.074 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:33.074 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:33.074 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:33.074 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:33.074 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:33.074 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:33.333 07:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:33.333 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:33.333 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:33.333 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:33.333 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:33.592 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:33.592 "name": "BaseBdev2", 
00:25:33.592 "aliases": [ 00:25:33.592 "c5ca4dcf-288d-47e4-a1ec-44a042d3fd37" 00:25:33.592 ], 00:25:33.592 "product_name": "Malloc disk", 00:25:33.592 "block_size": 512, 00:25:33.592 "num_blocks": 65536, 00:25:33.592 "uuid": "c5ca4dcf-288d-47e4-a1ec-44a042d3fd37", 00:25:33.592 "assigned_rate_limits": { 00:25:33.592 "rw_ios_per_sec": 0, 00:25:33.592 "rw_mbytes_per_sec": 0, 00:25:33.592 "r_mbytes_per_sec": 0, 00:25:33.592 "w_mbytes_per_sec": 0 00:25:33.592 }, 00:25:33.592 "claimed": true, 00:25:33.592 "claim_type": "exclusive_write", 00:25:33.592 "zoned": false, 00:25:33.592 "supported_io_types": { 00:25:33.592 "read": true, 00:25:33.592 "write": true, 00:25:33.592 "unmap": true, 00:25:33.592 "write_zeroes": true, 00:25:33.592 "flush": true, 00:25:33.592 "reset": true, 00:25:33.592 "compare": false, 00:25:33.592 "compare_and_write": false, 00:25:33.592 "abort": true, 00:25:33.592 "nvme_admin": false, 00:25:33.592 "nvme_io": false 00:25:33.592 }, 00:25:33.592 "memory_domains": [ 00:25:33.592 { 00:25:33.592 "dma_device_id": "system", 00:25:33.592 "dma_device_type": 1 00:25:33.592 }, 00:25:33.592 { 00:25:33.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.592 "dma_device_type": 2 00:25:33.592 } 00:25:33.592 ], 00:25:33.592 "driver_specific": {} 00:25:33.592 }' 00:25:33.592 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:33.592 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:33.592 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:33.592 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:33.592 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:33.850 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:33.850 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:33.850 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:33.850 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:33.850 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:33.850 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:33.850 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:33.850 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:33.850 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:33.850 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:34.109 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:34.109 "name": "BaseBdev3", 00:25:34.109 "aliases": [ 00:25:34.109 "d8b2ac23-d61c-4109-936c-8774798d8a2d" 00:25:34.109 ], 00:25:34.109 "product_name": "Malloc disk", 00:25:34.109 "block_size": 512, 00:25:34.109 "num_blocks": 65536, 00:25:34.109 "uuid": "d8b2ac23-d61c-4109-936c-8774798d8a2d", 00:25:34.109 "assigned_rate_limits": { 00:25:34.109 "rw_ios_per_sec": 0, 00:25:34.109 "rw_mbytes_per_sec": 0, 00:25:34.109 "r_mbytes_per_sec": 0, 00:25:34.109 "w_mbytes_per_sec": 0 
00:25:34.109 }, 00:25:34.109 "claimed": true, 00:25:34.109 "claim_type": "exclusive_write", 00:25:34.109 "zoned": false, 00:25:34.109 "supported_io_types": { 00:25:34.109 "read": true, 00:25:34.109 "write": true, 00:25:34.109 "unmap": true, 00:25:34.109 "write_zeroes": true, 00:25:34.109 "flush": true, 00:25:34.109 "reset": true, 00:25:34.109 "compare": false, 00:25:34.109 "compare_and_write": false, 00:25:34.109 "abort": true, 00:25:34.109 "nvme_admin": false, 00:25:34.109 "nvme_io": false 00:25:34.109 }, 00:25:34.109 "memory_domains": [ 00:25:34.109 { 00:25:34.109 "dma_device_id": "system", 00:25:34.109 "dma_device_type": 1 00:25:34.109 }, 00:25:34.109 { 00:25:34.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.109 "dma_device_type": 2 00:25:34.109 } 00:25:34.109 ], 00:25:34.109 "driver_specific": {} 00:25:34.109 }' 00:25:34.109 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:34.369 07:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:34.369 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:34.369 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:34.369 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:34.369 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:34.369 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:34.369 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:34.369 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:34.369 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:34.627 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:34.627 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:34.627 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:34.627 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:34.627 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:34.885 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:34.885 "name": "BaseBdev4", 00:25:34.885 "aliases": [ 00:25:34.885 "48ab3761-2205-4265-bfed-403c895d84df" 00:25:34.885 ], 00:25:34.885 "product_name": "Malloc disk", 00:25:34.885 "block_size": 512, 00:25:34.885 "num_blocks": 65536, 00:25:34.885 "uuid": "48ab3761-2205-4265-bfed-403c895d84df", 00:25:34.885 "assigned_rate_limits": { 00:25:34.885 "rw_ios_per_sec": 0, 00:25:34.885 "rw_mbytes_per_sec": 0, 00:25:34.885 "r_mbytes_per_sec": 0, 00:25:34.885 "w_mbytes_per_sec": 0 00:25:34.885 }, 00:25:34.885 "claimed": true, 00:25:34.885 "claim_type": "exclusive_write", 00:25:34.885 "zoned": false, 00:25:34.885 "supported_io_types": { 00:25:34.885 "read": true, 00:25:34.885 "write": true, 00:25:34.885 "unmap": true, 00:25:34.885 "write_zeroes": true, 00:25:34.885 "flush": true, 00:25:34.885 "reset": true, 00:25:34.885 "compare": false, 00:25:34.885 "compare_and_write": false, 00:25:34.885 "abort": true, 00:25:34.885 
"nvme_admin": false, 00:25:34.885 "nvme_io": false 00:25:34.885 }, 00:25:34.885 "memory_domains": [ 00:25:34.885 { 00:25:34.885 "dma_device_id": "system", 00:25:34.885 "dma_device_type": 1 00:25:34.885 }, 00:25:34.885 { 00:25:34.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.885 "dma_device_type": 2 00:25:34.885 } 00:25:34.885 ], 00:25:34.885 "driver_specific": {} 00:25:34.885 }' 00:25:34.885 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:34.885 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:34.885 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:34.885 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:34.885 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:34.885 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:34.885 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:34.885 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:35.144 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:35.144 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:35.144 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:35.144 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:35.144 07:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:35.404 [2024-07-12 07:36:09.086177] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:35.404 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:35.404 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:25:35.404 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:35.404 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:25:35.404 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:25:35.404 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:25:35.404 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:35.404 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:35.404 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:35.404 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:35.404 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:35.404 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:35.404 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:35.404 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:25:35.404 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:35.404 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.404 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:35.663 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:35.663 "name": "Existed_Raid", 00:25:35.663 "uuid": "6f1d7360-ce14-4258-87d3-263734ec2428", 00:25:35.663 "strip_size_kb": 0, 00:25:35.663 "state": "online", 00:25:35.663 "raid_level": "raid1", 00:25:35.663 "superblock": true, 00:25:35.663 "num_base_bdevs": 4, 00:25:35.663 "num_base_bdevs_discovered": 3, 00:25:35.663 "num_base_bdevs_operational": 3, 00:25:35.663 "base_bdevs_list": [ 00:25:35.663 { 00:25:35.663 "name": null, 00:25:35.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.663 "is_configured": false, 00:25:35.663 "data_offset": 2048, 00:25:35.663 "data_size": 63488 00:25:35.663 }, 00:25:35.663 { 00:25:35.663 "name": "BaseBdev2", 00:25:35.663 "uuid": "c5ca4dcf-288d-47e4-a1ec-44a042d3fd37", 00:25:35.663 "is_configured": true, 00:25:35.663 "data_offset": 2048, 00:25:35.663 "data_size": 63488 00:25:35.663 }, 00:25:35.663 { 00:25:35.663 "name": "BaseBdev3", 00:25:35.663 "uuid": "d8b2ac23-d61c-4109-936c-8774798d8a2d", 00:25:35.663 "is_configured": true, 00:25:35.663 "data_offset": 2048, 00:25:35.663 "data_size": 63488 00:25:35.663 }, 00:25:35.663 { 00:25:35.663 "name": "BaseBdev4", 00:25:35.663 "uuid": "48ab3761-2205-4265-bfed-403c895d84df", 00:25:35.663 "is_configured": true, 00:25:35.663 "data_offset": 2048, 00:25:35.663 "data_size": 63488 00:25:35.663 } 00:25:35.663 ] 00:25:35.663 }' 00:25:35.663 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:35.663 07:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:36.230 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:36.230 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:36.230 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.230 07:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:36.488 07:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:36.488 07:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:36.488 07:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:36.746 [2024-07-12 07:36:10.403486] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:36.746 07:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:36.746 07:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:36.746 07:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:25:36.746 07:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:37.005 07:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:37.005 07:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:37.005 07:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:37.005 [2024-07-12 07:36:10.840298] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:37.005 07:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:37.005 07:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:37.005 07:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.005 07:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:37.262 07:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:37.263 07:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:37.263 07:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:37.520 [2024-07-12 07:36:11.272781] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:37.520 [2024-07-12 07:36:11.272908] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:37.520 [2024-07-12 07:36:11.285877] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:37.520 [2024-07-12 07:36:11.285935] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:37.520 [2024-07-12 07:36:11.285947] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:25:37.520 07:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:37.520 07:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:37.520 07:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.520 07:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:37.777 07:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:37.777 07:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:37.777 07:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:25:37.777 07:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:25:37.777 07:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:37.777 07:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:38.036 BaseBdev2 00:25:38.036 
07:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:25:38.036 07:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:25:38.036 07:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:38.036 07:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:38.036 07:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:38.036 07:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:38.036 07:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:38.294 07:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:38.553 [ 00:25:38.553 { 00:25:38.553 "name": "BaseBdev2", 00:25:38.553 "aliases": [ 00:25:38.553 "5bbc6430-0235-48dd-b6a1-179550c6cbc2" 00:25:38.553 ], 00:25:38.553 "product_name": "Malloc disk", 00:25:38.553 "block_size": 512, 00:25:38.553 "num_blocks": 65536, 00:25:38.553 "uuid": "5bbc6430-0235-48dd-b6a1-179550c6cbc2", 00:25:38.553 "assigned_rate_limits": { 00:25:38.553 "rw_ios_per_sec": 0, 00:25:38.553 "rw_mbytes_per_sec": 0, 00:25:38.553 "r_mbytes_per_sec": 0, 00:25:38.553 "w_mbytes_per_sec": 0 00:25:38.553 }, 00:25:38.553 "claimed": false, 00:25:38.553 "zoned": false, 00:25:38.553 "supported_io_types": { 00:25:38.553 "read": true, 00:25:38.553 "write": true, 00:25:38.553 "unmap": true, 00:25:38.553 "write_zeroes": true, 00:25:38.553 "flush": true, 00:25:38.553 "reset": true, 00:25:38.553 "compare": false, 00:25:38.553 "compare_and_write": false, 00:25:38.553 "abort": true, 00:25:38.553 "nvme_admin": false, 00:25:38.553 "nvme_io": false 00:25:38.553 }, 00:25:38.553 "memory_domains": [ 00:25:38.553 { 00:25:38.553 "dma_device_id": "system", 00:25:38.553 "dma_device_type": 1 00:25:38.553 }, 00:25:38.553 { 00:25:38.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.553 "dma_device_type": 2 00:25:38.553 } 00:25:38.553 ], 00:25:38.553 "driver_specific": {} 00:25:38.553 } 00:25:38.553 ] 00:25:38.553 07:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:38.553 07:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:38.553 07:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:38.553 07:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:38.812 BaseBdev3 00:25:38.812 07:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:25:38.812 07:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:25:38.812 07:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:38.812 07:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:38.812 07:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:38.812 07:36:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:38.812 07:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:39.071 07:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:39.071 [ 00:25:39.071 { 00:25:39.071 "name": "BaseBdev3", 00:25:39.071 "aliases": [ 00:25:39.071 "00e0ef0d-e931-42bd-9d8a-978e1e9704ec" 00:25:39.071 ], 00:25:39.071 "product_name": "Malloc disk", 00:25:39.071 "block_size": 512, 00:25:39.071 "num_blocks": 65536, 00:25:39.071 "uuid": "00e0ef0d-e931-42bd-9d8a-978e1e9704ec", 00:25:39.071 "assigned_rate_limits": { 00:25:39.071 "rw_ios_per_sec": 0, 00:25:39.071 "rw_mbytes_per_sec": 0, 00:25:39.071 "r_mbytes_per_sec": 0, 00:25:39.071 "w_mbytes_per_sec": 0 00:25:39.071 }, 00:25:39.071 "claimed": false, 00:25:39.071 "zoned": false, 00:25:39.071 "supported_io_types": { 00:25:39.071 "read": true, 00:25:39.071 "write": true, 00:25:39.071 "unmap": true, 00:25:39.071 "write_zeroes": true, 00:25:39.071 "flush": true, 00:25:39.071 "reset": true, 00:25:39.071 "compare": false, 00:25:39.071 "compare_and_write": false, 00:25:39.071 "abort": true, 00:25:39.071 "nvme_admin": false, 00:25:39.071 "nvme_io": false 00:25:39.071 }, 00:25:39.071 "memory_domains": [ 00:25:39.071 { 00:25:39.071 "dma_device_id": "system", 00:25:39.071 "dma_device_type": 1 00:25:39.071 }, 00:25:39.071 { 00:25:39.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:39.071 "dma_device_type": 2 00:25:39.071 } 00:25:39.071 ], 00:25:39.071 "driver_specific": {} 00:25:39.071 } 00:25:39.071 ] 00:25:39.071 07:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:39.071 07:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:39.071 07:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:39.071 07:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:39.330 BaseBdev4 00:25:39.330 07:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:25:39.330 07:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:25:39.330 07:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:39.587 07:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:39.587 07:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:39.587 07:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:39.587 07:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:39.587 07:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:39.869 [ 00:25:39.869 { 00:25:39.869 "name": "BaseBdev4", 00:25:39.869 "aliases": [ 00:25:39.869 
"cfddd573-c107-45df-b81d-9b7ff31156c4" 00:25:39.869 ], 00:25:39.869 "product_name": "Malloc disk", 00:25:39.869 "block_size": 512, 00:25:39.869 "num_blocks": 65536, 00:25:39.869 "uuid": "cfddd573-c107-45df-b81d-9b7ff31156c4", 00:25:39.869 "assigned_rate_limits": { 00:25:39.869 "rw_ios_per_sec": 0, 00:25:39.869 "rw_mbytes_per_sec": 0, 00:25:39.869 "r_mbytes_per_sec": 0, 00:25:39.869 "w_mbytes_per_sec": 0 00:25:39.869 }, 00:25:39.869 "claimed": false, 00:25:39.869 "zoned": false, 00:25:39.869 "supported_io_types": { 00:25:39.869 "read": true, 00:25:39.869 "write": true, 00:25:39.869 "unmap": true, 00:25:39.869 "write_zeroes": true, 00:25:39.869 "flush": true, 00:25:39.869 "reset": true, 00:25:39.869 "compare": false, 00:25:39.869 "compare_and_write": false, 00:25:39.869 "abort": true, 00:25:39.869 "nvme_admin": false, 00:25:39.869 "nvme_io": false 00:25:39.869 }, 00:25:39.869 "memory_domains": [ 00:25:39.869 { 00:25:39.869 "dma_device_id": "system", 00:25:39.869 "dma_device_type": 1 00:25:39.869 }, 00:25:39.869 { 00:25:39.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:39.869 "dma_device_type": 2 00:25:39.869 } 00:25:39.869 ], 00:25:39.869 "driver_specific": {} 00:25:39.869 } 00:25:39.869 ] 00:25:39.869 07:36:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:39.869 07:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:39.869 07:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:39.869 07:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:40.145 [2024-07-12 07:36:13.827142] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:40.145 [2024-07-12 07:36:13.827254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:40.145 [2024-07-12 07:36:13.827281] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:40.145 [2024-07-12 07:36:13.829451] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:40.145 [2024-07-12 07:36:13.829506] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:40.145 07:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:40.145 07:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:40.145 07:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:40.145 07:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:40.145 07:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:40.145 07:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:40.145 07:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:40.145 07:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:40.145 07:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:40.145 07:36:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:40.145 07:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.145 07:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:40.403 07:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:40.403 "name": "Existed_Raid", 00:25:40.403 "uuid": "5981b0b6-ca09-415c-9a14-1abfc08d3203", 00:25:40.403 "strip_size_kb": 0, 00:25:40.403 "state": "configuring", 00:25:40.403 "raid_level": "raid1", 00:25:40.403 "superblock": true, 00:25:40.403 "num_base_bdevs": 4, 00:25:40.403 "num_base_bdevs_discovered": 3, 00:25:40.403 "num_base_bdevs_operational": 4, 00:25:40.403 "base_bdevs_list": [ 00:25:40.403 { 00:25:40.403 "name": "BaseBdev1", 00:25:40.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.403 "is_configured": false, 00:25:40.403 "data_offset": 0, 00:25:40.403 "data_size": 0 00:25:40.403 }, 00:25:40.403 { 00:25:40.403 "name": "BaseBdev2", 00:25:40.403 "uuid": "5bbc6430-0235-48dd-b6a1-179550c6cbc2", 00:25:40.403 "is_configured": true, 00:25:40.403 "data_offset": 2048, 00:25:40.403 "data_size": 63488 00:25:40.403 }, 00:25:40.403 { 00:25:40.403 "name": "BaseBdev3", 00:25:40.403 "uuid": "00e0ef0d-e931-42bd-9d8a-978e1e9704ec", 00:25:40.403 "is_configured": true, 00:25:40.403 "data_offset": 2048, 00:25:40.403 "data_size": 63488 00:25:40.403 }, 00:25:40.403 { 00:25:40.403 "name": "BaseBdev4", 00:25:40.403 "uuid": "cfddd573-c107-45df-b81d-9b7ff31156c4", 00:25:40.403 "is_configured": true, 00:25:40.403 "data_offset": 2048, 00:25:40.403 "data_size": 63488 00:25:40.403 } 00:25:40.403 ] 00:25:40.403 }' 00:25:40.403 07:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:40.403 07:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:40.971 07:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:41.230 [2024-07-12 07:36:15.023399] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:41.230 07:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:41.230 07:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:41.230 07:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:41.230 07:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:41.230 07:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:41.230 07:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:41.230 07:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:41.230 07:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:41.230 07:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:41.230 07:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:41.230 
07:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.230 07:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:41.489 07:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:41.489 "name": "Existed_Raid", 00:25:41.489 "uuid": "5981b0b6-ca09-415c-9a14-1abfc08d3203", 00:25:41.489 "strip_size_kb": 0, 00:25:41.489 "state": "configuring", 00:25:41.489 "raid_level": "raid1", 00:25:41.489 "superblock": true, 00:25:41.489 "num_base_bdevs": 4, 00:25:41.489 "num_base_bdevs_discovered": 2, 00:25:41.489 "num_base_bdevs_operational": 4, 00:25:41.489 "base_bdevs_list": [ 00:25:41.489 { 00:25:41.489 "name": "BaseBdev1", 00:25:41.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:41.489 "is_configured": false, 00:25:41.489 "data_offset": 0, 00:25:41.489 "data_size": 0 00:25:41.489 }, 00:25:41.489 { 00:25:41.489 "name": null, 00:25:41.489 "uuid": "5bbc6430-0235-48dd-b6a1-179550c6cbc2", 00:25:41.489 "is_configured": false, 00:25:41.489 "data_offset": 2048, 00:25:41.489 "data_size": 63488 00:25:41.489 }, 00:25:41.489 { 00:25:41.489 "name": "BaseBdev3", 00:25:41.489 "uuid": "00e0ef0d-e931-42bd-9d8a-978e1e9704ec", 00:25:41.489 "is_configured": true, 00:25:41.489 "data_offset": 2048, 00:25:41.489 "data_size": 63488 00:25:41.489 }, 00:25:41.489 { 00:25:41.489 "name": "BaseBdev4", 00:25:41.489 "uuid": "cfddd573-c107-45df-b81d-9b7ff31156c4", 00:25:41.489 "is_configured": true, 00:25:41.489 "data_offset": 2048, 00:25:41.489 "data_size": 63488 00:25:41.489 } 00:25:41.489 ] 00:25:41.489 }' 00:25:41.489 07:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:41.489 07:36:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:42.426 07:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.426 07:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:42.426 07:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:25:42.426 07:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:42.684 [2024-07-12 07:36:16.506955] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:42.684 BaseBdev1 00:25:42.684 07:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:25:42.684 07:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:25:42.684 07:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:42.684 07:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:42.684 07:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:42.684 07:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:42.684 07:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:42.944 07:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:43.203 [ 00:25:43.203 { 00:25:43.203 "name": "BaseBdev1", 00:25:43.203 "aliases": [ 00:25:43.203 "9203da36-f666-494b-9adc-f24a75a50402" 00:25:43.203 ], 00:25:43.203 "product_name": "Malloc disk", 00:25:43.203 "block_size": 512, 00:25:43.203 "num_blocks": 65536, 00:25:43.203 "uuid": "9203da36-f666-494b-9adc-f24a75a50402", 00:25:43.203 "assigned_rate_limits": { 00:25:43.203 "rw_ios_per_sec": 0, 00:25:43.203 "rw_mbytes_per_sec": 0, 00:25:43.203 "r_mbytes_per_sec": 0, 00:25:43.203 "w_mbytes_per_sec": 0 00:25:43.203 }, 00:25:43.203 "claimed": true, 00:25:43.203 "claim_type": "exclusive_write", 00:25:43.203 "zoned": false, 00:25:43.203 "supported_io_types": { 00:25:43.203 "read": true, 00:25:43.203 "write": true, 00:25:43.203 "unmap": true, 00:25:43.203 "write_zeroes": true, 00:25:43.203 "flush": true, 00:25:43.203 "reset": true, 00:25:43.203 "compare": false, 00:25:43.203 "compare_and_write": false, 00:25:43.203 "abort": true, 00:25:43.203 "nvme_admin": false, 00:25:43.203 "nvme_io": false 00:25:43.203 }, 00:25:43.203 "memory_domains": [ 00:25:43.203 { 00:25:43.203 "dma_device_id": "system", 00:25:43.203 "dma_device_type": 1 00:25:43.203 }, 00:25:43.203 { 00:25:43.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.203 "dma_device_type": 2 00:25:43.203 } 00:25:43.203 ], 00:25:43.203 "driver_specific": {} 00:25:43.203 } 00:25:43.203 ] 00:25:43.203 07:36:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:43.203 07:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:43.203 07:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:43.203 07:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:43.203 07:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:43.203 07:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:43.203 07:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:43.203 07:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:43.203 07:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:43.203 07:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:43.203 07:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:43.203 07:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.203 07:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:43.462 07:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:43.462 "name": "Existed_Raid", 00:25:43.462 "uuid": "5981b0b6-ca09-415c-9a14-1abfc08d3203", 00:25:43.462 "strip_size_kb": 0, 00:25:43.462 "state": "configuring", 
00:25:43.462 "raid_level": "raid1", 00:25:43.462 "superblock": true, 00:25:43.462 "num_base_bdevs": 4, 00:25:43.462 "num_base_bdevs_discovered": 3, 00:25:43.462 "num_base_bdevs_operational": 4, 00:25:43.462 "base_bdevs_list": [ 00:25:43.462 { 00:25:43.462 "name": "BaseBdev1", 00:25:43.462 "uuid": "9203da36-f666-494b-9adc-f24a75a50402", 00:25:43.462 "is_configured": true, 00:25:43.462 "data_offset": 2048, 00:25:43.462 "data_size": 63488 00:25:43.463 }, 00:25:43.463 { 00:25:43.463 "name": null, 00:25:43.463 "uuid": "5bbc6430-0235-48dd-b6a1-179550c6cbc2", 00:25:43.463 "is_configured": false, 00:25:43.463 "data_offset": 2048, 00:25:43.463 "data_size": 63488 00:25:43.463 }, 00:25:43.463 { 00:25:43.463 "name": "BaseBdev3", 00:25:43.463 "uuid": "00e0ef0d-e931-42bd-9d8a-978e1e9704ec", 00:25:43.463 "is_configured": true, 00:25:43.463 "data_offset": 2048, 00:25:43.463 "data_size": 63488 00:25:43.463 }, 00:25:43.463 { 00:25:43.463 "name": "BaseBdev4", 00:25:43.463 "uuid": "cfddd573-c107-45df-b81d-9b7ff31156c4", 00:25:43.463 "is_configured": true, 00:25:43.463 "data_offset": 2048, 00:25:43.463 "data_size": 63488 00:25:43.463 } 00:25:43.463 ] 00:25:43.463 }' 00:25:43.463 07:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:43.463 07:36:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:44.426 07:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.426 07:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:44.426 07:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:25:44.426 07:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:25:44.684 [2024-07-12 07:36:18.514056] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:44.684 07:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:44.684 07:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:44.684 07:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:44.684 07:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:44.684 07:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:44.684 07:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:44.684 07:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:44.684 07:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:44.684 07:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:44.684 07:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:44.684 07:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.684 07:36:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:44.943 07:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:44.943 "name": "Existed_Raid", 00:25:44.943 "uuid": "5981b0b6-ca09-415c-9a14-1abfc08d3203", 00:25:44.943 "strip_size_kb": 0, 00:25:44.943 "state": "configuring", 00:25:44.943 "raid_level": "raid1", 00:25:44.943 "superblock": true, 00:25:44.943 "num_base_bdevs": 4, 00:25:44.943 "num_base_bdevs_discovered": 2, 00:25:44.943 "num_base_bdevs_operational": 4, 00:25:44.943 "base_bdevs_list": [ 00:25:44.943 { 00:25:44.943 "name": "BaseBdev1", 00:25:44.943 "uuid": "9203da36-f666-494b-9adc-f24a75a50402", 00:25:44.943 "is_configured": true, 00:25:44.943 "data_offset": 2048, 00:25:44.943 "data_size": 63488 00:25:44.943 }, 00:25:44.943 { 00:25:44.943 "name": null, 00:25:44.943 "uuid": "5bbc6430-0235-48dd-b6a1-179550c6cbc2", 00:25:44.943 "is_configured": false, 00:25:44.943 "data_offset": 2048, 00:25:44.943 "data_size": 63488 00:25:44.943 }, 00:25:44.943 { 00:25:44.943 "name": null, 00:25:44.943 "uuid": "00e0ef0d-e931-42bd-9d8a-978e1e9704ec", 00:25:44.943 "is_configured": false, 00:25:44.943 "data_offset": 2048, 00:25:44.943 "data_size": 63488 00:25:44.943 }, 00:25:44.943 { 00:25:44.943 "name": "BaseBdev4", 00:25:44.943 "uuid": "cfddd573-c107-45df-b81d-9b7ff31156c4", 00:25:44.943 "is_configured": true, 00:25:44.943 "data_offset": 2048, 00:25:44.943 "data_size": 63488 00:25:44.943 } 00:25:44.943 ] 00:25:44.943 }' 00:25:44.943 07:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:44.943 07:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.879 07:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.879 07:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:46.137 07:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:25:46.137 07:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:46.396 [2024-07-12 07:36:20.074796] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:46.396 07:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:46.396 07:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:46.396 07:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:46.396 07:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:46.396 07:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:46.396 07:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:46.396 07:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:46.397 07:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:46.397 07:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:25:46.397 07:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:46.397 07:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.397 07:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:46.656 07:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:46.656 "name": "Existed_Raid", 00:25:46.656 "uuid": "5981b0b6-ca09-415c-9a14-1abfc08d3203", 00:25:46.656 "strip_size_kb": 0, 00:25:46.656 "state": "configuring", 00:25:46.656 "raid_level": "raid1", 00:25:46.656 "superblock": true, 00:25:46.656 "num_base_bdevs": 4, 00:25:46.656 "num_base_bdevs_discovered": 3, 00:25:46.656 "num_base_bdevs_operational": 4, 00:25:46.656 "base_bdevs_list": [ 00:25:46.656 { 00:25:46.656 "name": "BaseBdev1", 00:25:46.656 "uuid": "9203da36-f666-494b-9adc-f24a75a50402", 00:25:46.656 "is_configured": true, 00:25:46.656 "data_offset": 2048, 00:25:46.656 "data_size": 63488 00:25:46.656 }, 00:25:46.656 { 00:25:46.656 "name": null, 00:25:46.656 "uuid": "5bbc6430-0235-48dd-b6a1-179550c6cbc2", 00:25:46.656 "is_configured": false, 00:25:46.656 "data_offset": 2048, 00:25:46.656 "data_size": 63488 00:25:46.656 }, 00:25:46.656 { 00:25:46.656 "name": "BaseBdev3", 00:25:46.656 "uuid": "00e0ef0d-e931-42bd-9d8a-978e1e9704ec", 00:25:46.656 "is_configured": true, 00:25:46.656 "data_offset": 2048, 00:25:46.656 "data_size": 63488 00:25:46.656 }, 00:25:46.656 { 00:25:46.656 "name": "BaseBdev4", 00:25:46.656 "uuid": "cfddd573-c107-45df-b81d-9b7ff31156c4", 00:25:46.656 "is_configured": true, 00:25:46.656 "data_offset": 2048, 00:25:46.656 "data_size": 63488 00:25:46.656 } 00:25:46.656 ] 00:25:46.656 }' 00:25:46.656 07:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:46.656 07:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:47.224 07:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.224 07:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:47.484 07:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:25:47.484 07:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:47.744 [2024-07-12 07:36:21.535081] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:47.744 07:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:47.744 07:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:47.744 07:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:47.744 07:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:47.744 07:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:47.744 07:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:25:47.744 07:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:47.744 07:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:47.744 07:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:47.744 07:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:47.744 07:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:47.744 07:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.004 07:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:48.004 "name": "Existed_Raid", 00:25:48.004 "uuid": "5981b0b6-ca09-415c-9a14-1abfc08d3203", 00:25:48.004 "strip_size_kb": 0, 00:25:48.004 "state": "configuring", 00:25:48.004 "raid_level": "raid1", 00:25:48.004 "superblock": true, 00:25:48.004 "num_base_bdevs": 4, 00:25:48.004 "num_base_bdevs_discovered": 2, 00:25:48.004 "num_base_bdevs_operational": 4, 00:25:48.004 "base_bdevs_list": [ 00:25:48.004 { 00:25:48.004 "name": null, 00:25:48.004 "uuid": "9203da36-f666-494b-9adc-f24a75a50402", 00:25:48.004 "is_configured": false, 00:25:48.004 "data_offset": 2048, 00:25:48.004 "data_size": 63488 00:25:48.004 }, 00:25:48.004 { 00:25:48.004 "name": null, 00:25:48.004 "uuid": "5bbc6430-0235-48dd-b6a1-179550c6cbc2", 00:25:48.004 "is_configured": false, 00:25:48.004 "data_offset": 2048, 00:25:48.004 "data_size": 63488 00:25:48.004 }, 00:25:48.004 { 00:25:48.004 "name": "BaseBdev3", 00:25:48.004 "uuid": "00e0ef0d-e931-42bd-9d8a-978e1e9704ec", 00:25:48.004 "is_configured": true, 00:25:48.004 "data_offset": 2048, 00:25:48.004 "data_size": 63488 00:25:48.004 }, 00:25:48.004 { 00:25:48.004 "name": "BaseBdev4", 00:25:48.004 "uuid": "cfddd573-c107-45df-b81d-9b7ff31156c4", 00:25:48.004 "is_configured": true, 00:25:48.004 "data_offset": 2048, 00:25:48.004 "data_size": 63488 00:25:48.004 } 00:25:48.004 ] 00:25:48.004 }' 00:25:48.004 07:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:48.004 07:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.572 07:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.572 07:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:48.831 07:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:25:48.831 07:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:49.090 [2024-07-12 07:36:22.886306] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:49.090 07:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:49.090 07:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:49.090 07:36:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:49.090 07:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:49.090 07:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:49.090 07:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:49.090 07:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:49.090 07:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:49.090 07:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:49.090 07:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:49.090 07:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.090 07:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:49.348 07:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:49.349 "name": "Existed_Raid", 00:25:49.349 "uuid": "5981b0b6-ca09-415c-9a14-1abfc08d3203", 00:25:49.349 "strip_size_kb": 0, 00:25:49.349 "state": "configuring", 00:25:49.349 "raid_level": "raid1", 00:25:49.349 "superblock": true, 00:25:49.349 "num_base_bdevs": 4, 00:25:49.349 "num_base_bdevs_discovered": 3, 00:25:49.349 "num_base_bdevs_operational": 4, 00:25:49.349 "base_bdevs_list": [ 00:25:49.349 { 00:25:49.349 "name": null, 00:25:49.349 "uuid": "9203da36-f666-494b-9adc-f24a75a50402", 00:25:49.349 "is_configured": false, 00:25:49.349 "data_offset": 2048, 00:25:49.349 "data_size": 63488 00:25:49.349 }, 00:25:49.349 { 00:25:49.349 "name": "BaseBdev2", 00:25:49.349 "uuid": "5bbc6430-0235-48dd-b6a1-179550c6cbc2", 00:25:49.349 "is_configured": true, 00:25:49.349 "data_offset": 2048, 00:25:49.349 "data_size": 63488 00:25:49.349 }, 00:25:49.349 { 00:25:49.349 "name": "BaseBdev3", 00:25:49.349 "uuid": "00e0ef0d-e931-42bd-9d8a-978e1e9704ec", 00:25:49.349 "is_configured": true, 00:25:49.349 "data_offset": 2048, 00:25:49.349 "data_size": 63488 00:25:49.349 }, 00:25:49.349 { 00:25:49.349 "name": "BaseBdev4", 00:25:49.349 "uuid": "cfddd573-c107-45df-b81d-9b7ff31156c4", 00:25:49.349 "is_configured": true, 00:25:49.349 "data_offset": 2048, 00:25:49.349 "data_size": 63488 00:25:49.349 } 00:25:49.349 ] 00:25:49.349 }' 00:25:49.349 07:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:49.349 07:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.914 07:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.914 07:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:50.172 07:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:25:50.172 07:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.172 07:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r 
'.[0].base_bdevs_list[0].uuid' 00:25:50.431 07:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 9203da36-f666-494b-9adc-f24a75a50402 00:25:50.690 [2024-07-12 07:36:24.494001] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:50.690 [2024-07-12 07:36:24.494190] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:25:50.690 [2024-07-12 07:36:24.494203] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:50.690 [2024-07-12 07:36:24.494271] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:25:50.690 [2024-07-12 07:36:24.494597] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:25:50.690 [2024-07-12 07:36:24.494609] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008180 00:25:50.690 [2024-07-12 07:36:24.494701] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:50.690 NewBaseBdev 00:25:50.690 07:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:25:50.690 07:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:25:50.690 07:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:50.690 07:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:50.690 07:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:50.690 07:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:50.690 07:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:50.948 07:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:51.208 [ 00:25:51.208 { 00:25:51.208 "name": "NewBaseBdev", 00:25:51.208 "aliases": [ 00:25:51.208 "9203da36-f666-494b-9adc-f24a75a50402" 00:25:51.208 ], 00:25:51.208 "product_name": "Malloc disk", 00:25:51.208 "block_size": 512, 00:25:51.208 "num_blocks": 65536, 00:25:51.208 "uuid": "9203da36-f666-494b-9adc-f24a75a50402", 00:25:51.208 "assigned_rate_limits": { 00:25:51.208 "rw_ios_per_sec": 0, 00:25:51.208 "rw_mbytes_per_sec": 0, 00:25:51.208 "r_mbytes_per_sec": 0, 00:25:51.208 "w_mbytes_per_sec": 0 00:25:51.208 }, 00:25:51.208 "claimed": true, 00:25:51.208 "claim_type": "exclusive_write", 00:25:51.208 "zoned": false, 00:25:51.208 "supported_io_types": { 00:25:51.208 "read": true, 00:25:51.208 "write": true, 00:25:51.208 "unmap": true, 00:25:51.208 "write_zeroes": true, 00:25:51.208 "flush": true, 00:25:51.208 "reset": true, 00:25:51.208 "compare": false, 00:25:51.208 "compare_and_write": false, 00:25:51.208 "abort": true, 00:25:51.208 "nvme_admin": false, 00:25:51.208 "nvme_io": false 00:25:51.208 }, 00:25:51.208 "memory_domains": [ 00:25:51.208 { 00:25:51.208 "dma_device_id": "system", 00:25:51.208 "dma_device_type": 1 00:25:51.208 }, 00:25:51.208 { 00:25:51.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:51.208 "dma_device_type": 
2 00:25:51.208 } 00:25:51.208 ], 00:25:51.208 "driver_specific": {} 00:25:51.208 } 00:25:51.208 ] 00:25:51.208 07:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:51.208 07:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:51.208 07:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:51.208 07:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:51.208 07:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:51.208 07:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:51.208 07:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:51.208 07:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:51.208 07:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:51.208 07:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:51.208 07:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:51.208 07:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.208 07:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:51.468 07:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:51.468 "name": "Existed_Raid", 00:25:51.468 "uuid": "5981b0b6-ca09-415c-9a14-1abfc08d3203", 00:25:51.468 "strip_size_kb": 0, 00:25:51.468 "state": "online", 00:25:51.468 "raid_level": "raid1", 00:25:51.468 "superblock": true, 00:25:51.468 "num_base_bdevs": 4, 00:25:51.468 "num_base_bdevs_discovered": 4, 00:25:51.468 "num_base_bdevs_operational": 4, 00:25:51.468 "base_bdevs_list": [ 00:25:51.468 { 00:25:51.468 "name": "NewBaseBdev", 00:25:51.468 "uuid": "9203da36-f666-494b-9adc-f24a75a50402", 00:25:51.468 "is_configured": true, 00:25:51.468 "data_offset": 2048, 00:25:51.468 "data_size": 63488 00:25:51.468 }, 00:25:51.468 { 00:25:51.468 "name": "BaseBdev2", 00:25:51.468 "uuid": "5bbc6430-0235-48dd-b6a1-179550c6cbc2", 00:25:51.468 "is_configured": true, 00:25:51.468 "data_offset": 2048, 00:25:51.468 "data_size": 63488 00:25:51.468 }, 00:25:51.468 { 00:25:51.468 "name": "BaseBdev3", 00:25:51.468 "uuid": "00e0ef0d-e931-42bd-9d8a-978e1e9704ec", 00:25:51.468 "is_configured": true, 00:25:51.468 "data_offset": 2048, 00:25:51.468 "data_size": 63488 00:25:51.468 }, 00:25:51.468 { 00:25:51.468 "name": "BaseBdev4", 00:25:51.468 "uuid": "cfddd573-c107-45df-b81d-9b7ff31156c4", 00:25:51.468 "is_configured": true, 00:25:51.468 "data_offset": 2048, 00:25:51.468 "data_size": 63488 00:25:51.468 } 00:25:51.468 ] 00:25:51.468 }' 00:25:51.468 07:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:51.468 07:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.037 07:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:25:52.037 07:36:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:52.037 07:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:52.037 07:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:52.037 07:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:52.037 07:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:25:52.037 07:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:52.037 07:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:52.296 [2024-07-12 07:36:26.062814] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:52.296 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:52.296 "name": "Existed_Raid", 00:25:52.296 "aliases": [ 00:25:52.296 "5981b0b6-ca09-415c-9a14-1abfc08d3203" 00:25:52.296 ], 00:25:52.296 "product_name": "Raid Volume", 00:25:52.296 "block_size": 512, 00:25:52.296 "num_blocks": 63488, 00:25:52.296 "uuid": "5981b0b6-ca09-415c-9a14-1abfc08d3203", 00:25:52.296 "assigned_rate_limits": { 00:25:52.296 "rw_ios_per_sec": 0, 00:25:52.296 "rw_mbytes_per_sec": 0, 00:25:52.296 "r_mbytes_per_sec": 0, 00:25:52.296 "w_mbytes_per_sec": 0 00:25:52.296 }, 00:25:52.296 "claimed": false, 00:25:52.296 "zoned": false, 00:25:52.296 "supported_io_types": { 00:25:52.296 "read": true, 00:25:52.296 "write": true, 00:25:52.296 "unmap": false, 00:25:52.296 "write_zeroes": true, 00:25:52.296 "flush": false, 00:25:52.296 "reset": true, 00:25:52.296 "compare": false, 00:25:52.296 "compare_and_write": false, 00:25:52.296 "abort": false, 00:25:52.296 "nvme_admin": false, 00:25:52.296 "nvme_io": false 00:25:52.296 }, 00:25:52.296 "memory_domains": [ 00:25:52.296 { 00:25:52.296 "dma_device_id": "system", 00:25:52.296 "dma_device_type": 1 00:25:52.296 }, 00:25:52.296 { 00:25:52.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.296 "dma_device_type": 2 00:25:52.296 }, 00:25:52.296 { 00:25:52.296 "dma_device_id": "system", 00:25:52.296 "dma_device_type": 1 00:25:52.296 }, 00:25:52.296 { 00:25:52.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.296 "dma_device_type": 2 00:25:52.296 }, 00:25:52.296 { 00:25:52.296 "dma_device_id": "system", 00:25:52.296 "dma_device_type": 1 00:25:52.296 }, 00:25:52.296 { 00:25:52.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.296 "dma_device_type": 2 00:25:52.296 }, 00:25:52.296 { 00:25:52.296 "dma_device_id": "system", 00:25:52.296 "dma_device_type": 1 00:25:52.296 }, 00:25:52.296 { 00:25:52.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.296 "dma_device_type": 2 00:25:52.296 } 00:25:52.296 ], 00:25:52.296 "driver_specific": { 00:25:52.296 "raid": { 00:25:52.296 "uuid": "5981b0b6-ca09-415c-9a14-1abfc08d3203", 00:25:52.296 "strip_size_kb": 0, 00:25:52.296 "state": "online", 00:25:52.296 "raid_level": "raid1", 00:25:52.296 "superblock": true, 00:25:52.296 "num_base_bdevs": 4, 00:25:52.296 "num_base_bdevs_discovered": 4, 00:25:52.296 "num_base_bdevs_operational": 4, 00:25:52.296 "base_bdevs_list": [ 00:25:52.296 { 00:25:52.296 "name": "NewBaseBdev", 00:25:52.296 "uuid": "9203da36-f666-494b-9adc-f24a75a50402", 00:25:52.296 "is_configured": true, 00:25:52.296 "data_offset": 2048, 00:25:52.296 "data_size": 63488 00:25:52.296 
}, 00:25:52.296 { 00:25:52.296 "name": "BaseBdev2", 00:25:52.296 "uuid": "5bbc6430-0235-48dd-b6a1-179550c6cbc2", 00:25:52.296 "is_configured": true, 00:25:52.296 "data_offset": 2048, 00:25:52.296 "data_size": 63488 00:25:52.296 }, 00:25:52.296 { 00:25:52.296 "name": "BaseBdev3", 00:25:52.296 "uuid": "00e0ef0d-e931-42bd-9d8a-978e1e9704ec", 00:25:52.296 "is_configured": true, 00:25:52.296 "data_offset": 2048, 00:25:52.296 "data_size": 63488 00:25:52.296 }, 00:25:52.296 { 00:25:52.296 "name": "BaseBdev4", 00:25:52.296 "uuid": "cfddd573-c107-45df-b81d-9b7ff31156c4", 00:25:52.296 "is_configured": true, 00:25:52.296 "data_offset": 2048, 00:25:52.296 "data_size": 63488 00:25:52.296 } 00:25:52.296 ] 00:25:52.296 } 00:25:52.296 } 00:25:52.296 }' 00:25:52.296 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:52.296 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:25:52.296 BaseBdev2 00:25:52.296 BaseBdev3 00:25:52.296 BaseBdev4' 00:25:52.296 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:52.296 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:25:52.296 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:52.555 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:52.555 "name": "NewBaseBdev", 00:25:52.555 "aliases": [ 00:25:52.555 "9203da36-f666-494b-9adc-f24a75a50402" 00:25:52.555 ], 00:25:52.555 "product_name": "Malloc disk", 00:25:52.555 "block_size": 512, 00:25:52.555 "num_blocks": 65536, 00:25:52.555 "uuid": "9203da36-f666-494b-9adc-f24a75a50402", 00:25:52.555 "assigned_rate_limits": { 00:25:52.555 "rw_ios_per_sec": 0, 00:25:52.555 "rw_mbytes_per_sec": 0, 00:25:52.555 "r_mbytes_per_sec": 0, 00:25:52.555 "w_mbytes_per_sec": 0 00:25:52.555 }, 00:25:52.555 "claimed": true, 00:25:52.555 "claim_type": "exclusive_write", 00:25:52.555 "zoned": false, 00:25:52.555 "supported_io_types": { 00:25:52.555 "read": true, 00:25:52.555 "write": true, 00:25:52.555 "unmap": true, 00:25:52.555 "write_zeroes": true, 00:25:52.555 "flush": true, 00:25:52.555 "reset": true, 00:25:52.555 "compare": false, 00:25:52.555 "compare_and_write": false, 00:25:52.555 "abort": true, 00:25:52.555 "nvme_admin": false, 00:25:52.555 "nvme_io": false 00:25:52.555 }, 00:25:52.555 "memory_domains": [ 00:25:52.555 { 00:25:52.555 "dma_device_id": "system", 00:25:52.555 "dma_device_type": 1 00:25:52.555 }, 00:25:52.555 { 00:25:52.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.555 "dma_device_type": 2 00:25:52.555 } 00:25:52.555 ], 00:25:52.555 "driver_specific": {} 00:25:52.555 }' 00:25:52.555 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:52.555 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:52.555 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:52.555 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:52.815 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:52.815 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 
-- # [[ null == null ]] 00:25:52.815 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:52.815 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:52.815 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:52.815 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:52.815 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:53.074 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:53.074 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:53.074 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:53.074 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:53.074 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:53.074 "name": "BaseBdev2", 00:25:53.074 "aliases": [ 00:25:53.074 "5bbc6430-0235-48dd-b6a1-179550c6cbc2" 00:25:53.074 ], 00:25:53.074 "product_name": "Malloc disk", 00:25:53.074 "block_size": 512, 00:25:53.074 "num_blocks": 65536, 00:25:53.074 "uuid": "5bbc6430-0235-48dd-b6a1-179550c6cbc2", 00:25:53.074 "assigned_rate_limits": { 00:25:53.074 "rw_ios_per_sec": 0, 00:25:53.074 "rw_mbytes_per_sec": 0, 00:25:53.074 "r_mbytes_per_sec": 0, 00:25:53.074 "w_mbytes_per_sec": 0 00:25:53.074 }, 00:25:53.074 "claimed": true, 00:25:53.074 "claim_type": "exclusive_write", 00:25:53.074 "zoned": false, 00:25:53.074 "supported_io_types": { 00:25:53.074 "read": true, 00:25:53.074 "write": true, 00:25:53.074 "unmap": true, 00:25:53.074 "write_zeroes": true, 00:25:53.074 "flush": true, 00:25:53.074 "reset": true, 00:25:53.074 "compare": false, 00:25:53.074 "compare_and_write": false, 00:25:53.074 "abort": true, 00:25:53.074 "nvme_admin": false, 00:25:53.074 "nvme_io": false 00:25:53.074 }, 00:25:53.074 "memory_domains": [ 00:25:53.074 { 00:25:53.074 "dma_device_id": "system", 00:25:53.074 "dma_device_type": 1 00:25:53.074 }, 00:25:53.074 { 00:25:53.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:53.074 "dma_device_type": 2 00:25:53.074 } 00:25:53.074 ], 00:25:53.074 "driver_specific": {} 00:25:53.074 }' 00:25:53.074 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:53.334 07:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:53.334 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:53.334 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:53.334 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:53.334 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:53.334 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:53.334 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:53.334 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:53.593 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
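
The trace above is the per-bdev half of verify_raid_bdev_properties: for each name in $base_bdev_names it fetches the bdev descriptor over the RPC socket and asserts that block_size is 512 and that md_size, md_interleave and dif_type are unset (jq prints null for a missing key, hence the [[ null == null ]] checks; each jq line appears twice because xtrace prints the command substitution and its pipeline separately). A minimal sketch of that loop, reconstructed from the xtrace output — the rpc.py path, socket and expected values are taken verbatim from the log, the rest is illustrative, not the verbatim bdev_raid.sh@203-208 body:

# sketch of the per-base-bdev property checks traced above
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
for name in $base_bdev_names; do
    base_bdev_info=$(rpc bdev_get_bdevs -b "$name" | jq '.[]')
    [[ $(echo "$base_bdev_info" | jq .block_size) == 512 ]]      # malloc bdevs: 65536 x 512 B blocks
    [[ $(echo "$base_bdev_info" | jq .md_size) == null ]]        # no separate metadata
    [[ $(echo "$base_bdev_info" | jq .md_interleave) == null ]]  # no interleaved metadata
    [[ $(echo "$base_bdev_info" | jq .dif_type) == null ]]       # no DIF protection
done
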
00:25:53.593 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:53.593 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:53.593 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:53.593 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:53.593 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:53.851 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:53.851 "name": "BaseBdev3", 00:25:53.851 "aliases": [ 00:25:53.851 "00e0ef0d-e931-42bd-9d8a-978e1e9704ec" 00:25:53.851 ], 00:25:53.851 "product_name": "Malloc disk", 00:25:53.851 "block_size": 512, 00:25:53.851 "num_blocks": 65536, 00:25:53.851 "uuid": "00e0ef0d-e931-42bd-9d8a-978e1e9704ec", 00:25:53.851 "assigned_rate_limits": { 00:25:53.851 "rw_ios_per_sec": 0, 00:25:53.851 "rw_mbytes_per_sec": 0, 00:25:53.851 "r_mbytes_per_sec": 0, 00:25:53.851 "w_mbytes_per_sec": 0 00:25:53.851 }, 00:25:53.851 "claimed": true, 00:25:53.851 "claim_type": "exclusive_write", 00:25:53.851 "zoned": false, 00:25:53.851 "supported_io_types": { 00:25:53.851 "read": true, 00:25:53.851 "write": true, 00:25:53.851 "unmap": true, 00:25:53.851 "write_zeroes": true, 00:25:53.851 "flush": true, 00:25:53.851 "reset": true, 00:25:53.851 "compare": false, 00:25:53.851 "compare_and_write": false, 00:25:53.851 "abort": true, 00:25:53.851 "nvme_admin": false, 00:25:53.851 "nvme_io": false 00:25:53.851 }, 00:25:53.851 "memory_domains": [ 00:25:53.851 { 00:25:53.851 "dma_device_id": "system", 00:25:53.851 "dma_device_type": 1 00:25:53.851 }, 00:25:53.851 { 00:25:53.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:53.851 "dma_device_type": 2 00:25:53.851 } 00:25:53.851 ], 00:25:53.851 "driver_specific": {} 00:25:53.851 }' 00:25:53.851 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:53.851 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:53.851 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:53.851 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:53.851 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:53.851 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:53.851 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:54.109 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:54.109 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:54.109 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:54.109 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:54.109 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:54.109 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:54.109 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:54.109 07:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:54.367 07:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:54.367 "name": "BaseBdev4", 00:25:54.367 "aliases": [ 00:25:54.367 "cfddd573-c107-45df-b81d-9b7ff31156c4" 00:25:54.367 ], 00:25:54.367 "product_name": "Malloc disk", 00:25:54.368 "block_size": 512, 00:25:54.368 "num_blocks": 65536, 00:25:54.368 "uuid": "cfddd573-c107-45df-b81d-9b7ff31156c4", 00:25:54.368 "assigned_rate_limits": { 00:25:54.368 "rw_ios_per_sec": 0, 00:25:54.368 "rw_mbytes_per_sec": 0, 00:25:54.368 "r_mbytes_per_sec": 0, 00:25:54.368 "w_mbytes_per_sec": 0 00:25:54.368 }, 00:25:54.368 "claimed": true, 00:25:54.368 "claim_type": "exclusive_write", 00:25:54.368 "zoned": false, 00:25:54.368 "supported_io_types": { 00:25:54.368 "read": true, 00:25:54.368 "write": true, 00:25:54.368 "unmap": true, 00:25:54.368 "write_zeroes": true, 00:25:54.368 "flush": true, 00:25:54.368 "reset": true, 00:25:54.368 "compare": false, 00:25:54.368 "compare_and_write": false, 00:25:54.368 "abort": true, 00:25:54.368 "nvme_admin": false, 00:25:54.368 "nvme_io": false 00:25:54.368 }, 00:25:54.368 "memory_domains": [ 00:25:54.368 { 00:25:54.368 "dma_device_id": "system", 00:25:54.368 "dma_device_type": 1 00:25:54.368 }, 00:25:54.368 { 00:25:54.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:54.368 "dma_device_type": 2 00:25:54.368 } 00:25:54.368 ], 00:25:54.368 "driver_specific": {} 00:25:54.368 }' 00:25:54.368 07:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:54.368 07:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:54.626 07:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:54.626 07:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:54.626 07:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:54.626 07:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:54.626 07:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:54.626 07:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:54.626 07:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:54.626 07:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:54.626 07:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:54.885 07:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:54.885 07:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:55.145 [2024-07-12 07:36:28.838336] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:55.145 [2024-07-12 07:36:28.838397] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:55.145 [2024-07-12 07:36:28.838497] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:55.145 [2024-07-12 07:36:28.838786] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid 
bdev base bdevs is 0, going to free all in destruct 00:25:55.145 [2024-07-12 07:36:28.838808] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name Existed_Raid, state offline 00:25:55.145 07:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 151468 00:25:55.145 07:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 151468 ']' 00:25:55.145 07:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 151468 00:25:55.145 07:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:25:55.145 07:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:55.145 07:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 151468 00:25:55.145 07:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:55.145 07:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:55.145 07:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 151468' 00:25:55.145 killing process with pid 151468 00:25:55.145 07:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 151468 00:25:55.145 [2024-07-12 07:36:28.896786] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:55.145 07:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 151468 00:25:55.145 [2024-07-12 07:36:28.973920] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:55.713 07:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:25:55.713 00:25:55.713 real 0m33.362s 00:25:55.713 user 1m2.406s 00:25:55.713 sys 0m5.202s 00:25:55.713 07:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:55.713 07:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.713 ************************************ 00:25:55.713 END TEST raid_state_function_test_sb 00:25:55.713 ************************************ 00:25:55.713 07:36:29 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:25:55.713 07:36:29 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:25:55.713 07:36:29 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:55.713 07:36:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:55.713 ************************************ 00:25:55.713 START TEST raid_superblock_test 00:25:55.713 ************************************ 00:25:55.713 07:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 4 00:25:55.713 07:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:25:55.713 07:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:25:55.713 07:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:25:55.713 07:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:25:55.713 07:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:25:55.713 07:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- 
# local base_bdevs_pt 00:25:55.713 07:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:25:55.713 07:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:25:55.713 07:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:25:55.713 07:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:25:55.713 07:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:25:55.713 07:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:25:55.713 07:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:25:55.713 07:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:25:55.713 07:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:25:55.713 07:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=152550 00:25:55.713 07:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 152550 /var/tmp/spdk-raid.sock 00:25:55.713 07:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 152550 ']' 00:25:55.714 07:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:55.714 07:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:55.714 07:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:55.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:55.714 07:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:55.714 07:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:55.714 07:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.714 [2024-07-12 07:36:29.532495] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
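
At this point raid_superblock_test has launched a dedicated bdev_svc app (PID 152550) on its own RPC socket with the bdev_raid debug log flag, and blocks in waitforlisten until the socket accepts connections; every step that follows is an rpc.py call against that socket, and killprocess tears the app down at the end, exactly as the raid_state_function_test_sb teardown above did for PID 151468. A condensed sketch of that harness, assuming the helper functions from test/common/autotest_common.sh; paths and arguments are the ones visible in the trace, error handling and traps are elided:

# sketch of the per-test bdev_svc harness
rootdir=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk-raid.sock
"$rootdir/test/app/bdev_svc/bdev_svc" -r "$sock" -L bdev_raid &
raid_pid=$!
waitforlisten "$raid_pid" "$sock"   # poll until the UNIX domain socket is listening
"$rootdir/scripts/rpc.py" -s "$sock" bdev_malloc_create 32 512 -b malloc1
# ... build passthru bdevs on top, assemble the raid1 bdev, run the checks ...
killprocess "$raid_pid"             # checks the PID is alive, then kill + wait (see teardown above)
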
00:25:55.714 [2024-07-12 07:36:29.532742] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152550 ] 00:25:55.973 [2024-07-12 07:36:29.691722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.973 [2024-07-12 07:36:29.786052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.231 [2024-07-12 07:36:29.872096] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:56.797 07:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:56.797 07:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:25:56.797 07:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:25:56.797 07:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:56.797 07:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:25:56.797 07:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:25:56.797 07:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:56.797 07:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:56.797 07:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:56.797 07:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:56.797 07:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:25:57.058 malloc1 00:25:57.058 07:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:57.318 [2024-07-12 07:36:30.962622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:57.318 [2024-07-12 07:36:30.962791] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:57.318 [2024-07-12 07:36:30.962846] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:25:57.318 [2024-07-12 07:36:30.962899] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:57.318 [2024-07-12 07:36:30.965904] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:57.318 [2024-07-12 07:36:30.965978] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:57.318 pt1 00:25:57.318 07:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:57.318 07:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:57.318 07:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:25:57.318 07:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:25:57.318 07:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:57.318 07:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:25:57.318 07:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:57.318 07:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:57.318 07:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:25:57.318 malloc2 00:25:57.318 07:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:57.576 [2024-07-12 07:36:31.378323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:57.576 [2024-07-12 07:36:31.378443] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:57.576 [2024-07-12 07:36:31.378489] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:25:57.576 [2024-07-12 07:36:31.378553] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:57.576 [2024-07-12 07:36:31.381468] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:57.576 [2024-07-12 07:36:31.381528] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:57.576 pt2 00:25:57.576 07:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:57.576 07:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:57.576 07:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:25:57.576 07:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:25:57.576 07:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:57.576 07:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:57.576 07:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:57.576 07:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:57.576 07:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:25:57.833 malloc3 00:25:57.833 07:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:58.092 [2024-07-12 07:36:31.809552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:58.092 [2024-07-12 07:36:31.809671] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:58.092 [2024-07-12 07:36:31.809718] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:58.092 [2024-07-12 07:36:31.809767] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:58.092 [2024-07-12 07:36:31.812603] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:58.092 [2024-07-12 07:36:31.812675] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:58.092 pt3 00:25:58.092 07:36:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:58.092 07:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:58.092 07:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:25:58.092 07:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:25:58.092 07:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:25:58.092 07:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:58.092 07:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:58.092 07:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:58.092 07:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:25:58.350 malloc4 00:25:58.350 07:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:58.350 [2024-07-12 07:36:32.193022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:58.350 [2024-07-12 07:36:32.193163] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:58.350 [2024-07-12 07:36:32.193203] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:58.350 [2024-07-12 07:36:32.193248] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:58.350 [2024-07-12 07:36:32.196043] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:58.350 [2024-07-12 07:36:32.196111] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:58.350 pt4 00:25:58.350 07:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:58.350 07:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:58.350 07:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:25:58.607 [2024-07-12 07:36:32.393156] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:58.608 [2024-07-12 07:36:32.395958] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:58.608 [2024-07-12 07:36:32.396028] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:58.608 [2024-07-12 07:36:32.396080] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:58.608 [2024-07-12 07:36:32.396323] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:25:58.608 [2024-07-12 07:36:32.396334] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:58.608 [2024-07-12 07:36:32.396506] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:25:58.608 [2024-07-12 07:36:32.396986] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:25:58.608 [2024-07-12 07:36:32.397003] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:25:58.608 [2024-07-12 07:36:32.397204] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:58.608 07:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:58.608 07:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:58.608 07:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:58.608 07:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:58.608 07:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:58.608 07:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:58.608 07:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:58.608 07:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:58.608 07:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:58.608 07:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:58.608 07:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.608 07:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:58.865 07:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:58.865 "name": "raid_bdev1", 00:25:58.865 "uuid": "3e028ddb-47ce-403a-aba3-b95c567dcd61", 00:25:58.865 "strip_size_kb": 0, 00:25:58.865 "state": "online", 00:25:58.865 "raid_level": "raid1", 00:25:58.865 "superblock": true, 00:25:58.865 "num_base_bdevs": 4, 00:25:58.865 "num_base_bdevs_discovered": 4, 00:25:58.865 "num_base_bdevs_operational": 4, 00:25:58.865 "base_bdevs_list": [ 00:25:58.865 { 00:25:58.865 "name": "pt1", 00:25:58.865 "uuid": "21f403e3-fc83-599a-a9b3-b5267797ab86", 00:25:58.865 "is_configured": true, 00:25:58.865 "data_offset": 2048, 00:25:58.865 "data_size": 63488 00:25:58.865 }, 00:25:58.865 { 00:25:58.865 "name": "pt2", 00:25:58.865 "uuid": "2df2163e-f352-5de8-98af-b848d7bebdef", 00:25:58.865 "is_configured": true, 00:25:58.865 "data_offset": 2048, 00:25:58.865 "data_size": 63488 00:25:58.865 }, 00:25:58.865 { 00:25:58.865 "name": "pt3", 00:25:58.865 "uuid": "cb7a446e-cf77-57f3-83dd-0cecf822e48b", 00:25:58.865 "is_configured": true, 00:25:58.865 "data_offset": 2048, 00:25:58.865 "data_size": 63488 00:25:58.865 }, 00:25:58.865 { 00:25:58.865 "name": "pt4", 00:25:58.865 "uuid": "c6b0938a-19b2-5731-a052-2cfd1cd3aee3", 00:25:58.865 "is_configured": true, 00:25:58.865 "data_offset": 2048, 00:25:58.865 "data_size": 63488 00:25:58.865 } 00:25:58.865 ] 00:25:58.865 }' 00:25:58.865 07:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:58.865 07:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.429 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:25:59.429 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:59.429 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:59.429 07:36:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:59.429 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:59.429 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:59.429 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:59.429 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:59.687 [2024-07-12 07:36:33.425744] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:59.687 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:59.687 "name": "raid_bdev1", 00:25:59.687 "aliases": [ 00:25:59.687 "3e028ddb-47ce-403a-aba3-b95c567dcd61" 00:25:59.687 ], 00:25:59.687 "product_name": "Raid Volume", 00:25:59.687 "block_size": 512, 00:25:59.687 "num_blocks": 63488, 00:25:59.687 "uuid": "3e028ddb-47ce-403a-aba3-b95c567dcd61", 00:25:59.687 "assigned_rate_limits": { 00:25:59.687 "rw_ios_per_sec": 0, 00:25:59.687 "rw_mbytes_per_sec": 0, 00:25:59.687 "r_mbytes_per_sec": 0, 00:25:59.687 "w_mbytes_per_sec": 0 00:25:59.687 }, 00:25:59.687 "claimed": false, 00:25:59.687 "zoned": false, 00:25:59.687 "supported_io_types": { 00:25:59.687 "read": true, 00:25:59.687 "write": true, 00:25:59.687 "unmap": false, 00:25:59.687 "write_zeroes": true, 00:25:59.687 "flush": false, 00:25:59.687 "reset": true, 00:25:59.687 "compare": false, 00:25:59.687 "compare_and_write": false, 00:25:59.687 "abort": false, 00:25:59.687 "nvme_admin": false, 00:25:59.687 "nvme_io": false 00:25:59.687 }, 00:25:59.687 "memory_domains": [ 00:25:59.687 { 00:25:59.687 "dma_device_id": "system", 00:25:59.687 "dma_device_type": 1 00:25:59.687 }, 00:25:59.687 { 00:25:59.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:59.687 "dma_device_type": 2 00:25:59.687 }, 00:25:59.687 { 00:25:59.687 "dma_device_id": "system", 00:25:59.687 "dma_device_type": 1 00:25:59.687 }, 00:25:59.687 { 00:25:59.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:59.687 "dma_device_type": 2 00:25:59.687 }, 00:25:59.687 { 00:25:59.687 "dma_device_id": "system", 00:25:59.687 "dma_device_type": 1 00:25:59.687 }, 00:25:59.687 { 00:25:59.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:59.687 "dma_device_type": 2 00:25:59.687 }, 00:25:59.687 { 00:25:59.687 "dma_device_id": "system", 00:25:59.687 "dma_device_type": 1 00:25:59.687 }, 00:25:59.687 { 00:25:59.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:59.687 "dma_device_type": 2 00:25:59.687 } 00:25:59.687 ], 00:25:59.687 "driver_specific": { 00:25:59.687 "raid": { 00:25:59.687 "uuid": "3e028ddb-47ce-403a-aba3-b95c567dcd61", 00:25:59.687 "strip_size_kb": 0, 00:25:59.687 "state": "online", 00:25:59.687 "raid_level": "raid1", 00:25:59.687 "superblock": true, 00:25:59.687 "num_base_bdevs": 4, 00:25:59.687 "num_base_bdevs_discovered": 4, 00:25:59.687 "num_base_bdevs_operational": 4, 00:25:59.687 "base_bdevs_list": [ 00:25:59.687 { 00:25:59.687 "name": "pt1", 00:25:59.687 "uuid": "21f403e3-fc83-599a-a9b3-b5267797ab86", 00:25:59.687 "is_configured": true, 00:25:59.687 "data_offset": 2048, 00:25:59.687 "data_size": 63488 00:25:59.687 }, 00:25:59.687 { 00:25:59.687 "name": "pt2", 00:25:59.687 "uuid": "2df2163e-f352-5de8-98af-b848d7bebdef", 00:25:59.687 "is_configured": true, 00:25:59.687 "data_offset": 2048, 00:25:59.687 "data_size": 63488 00:25:59.687 }, 00:25:59.687 { 
00:25:59.687 "name": "pt3", 00:25:59.687 "uuid": "cb7a446e-cf77-57f3-83dd-0cecf822e48b", 00:25:59.687 "is_configured": true, 00:25:59.687 "data_offset": 2048, 00:25:59.687 "data_size": 63488 00:25:59.687 }, 00:25:59.687 { 00:25:59.687 "name": "pt4", 00:25:59.687 "uuid": "c6b0938a-19b2-5731-a052-2cfd1cd3aee3", 00:25:59.687 "is_configured": true, 00:25:59.687 "data_offset": 2048, 00:25:59.687 "data_size": 63488 00:25:59.687 } 00:25:59.687 ] 00:25:59.687 } 00:25:59.687 } 00:25:59.687 }' 00:25:59.687 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:59.687 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:59.687 pt2 00:25:59.687 pt3 00:25:59.687 pt4' 00:25:59.687 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:59.687 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:59.687 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:59.945 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:59.945 "name": "pt1", 00:25:59.945 "aliases": [ 00:25:59.945 "21f403e3-fc83-599a-a9b3-b5267797ab86" 00:25:59.945 ], 00:25:59.945 "product_name": "passthru", 00:25:59.945 "block_size": 512, 00:25:59.945 "num_blocks": 65536, 00:25:59.945 "uuid": "21f403e3-fc83-599a-a9b3-b5267797ab86", 00:25:59.945 "assigned_rate_limits": { 00:25:59.945 "rw_ios_per_sec": 0, 00:25:59.945 "rw_mbytes_per_sec": 0, 00:25:59.945 "r_mbytes_per_sec": 0, 00:25:59.945 "w_mbytes_per_sec": 0 00:25:59.945 }, 00:25:59.945 "claimed": true, 00:25:59.945 "claim_type": "exclusive_write", 00:25:59.945 "zoned": false, 00:25:59.945 "supported_io_types": { 00:25:59.945 "read": true, 00:25:59.945 "write": true, 00:25:59.945 "unmap": true, 00:25:59.945 "write_zeroes": true, 00:25:59.945 "flush": true, 00:25:59.945 "reset": true, 00:25:59.945 "compare": false, 00:25:59.945 "compare_and_write": false, 00:25:59.945 "abort": true, 00:25:59.945 "nvme_admin": false, 00:25:59.945 "nvme_io": false 00:25:59.945 }, 00:25:59.945 "memory_domains": [ 00:25:59.945 { 00:25:59.945 "dma_device_id": "system", 00:25:59.945 "dma_device_type": 1 00:25:59.945 }, 00:25:59.945 { 00:25:59.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:59.945 "dma_device_type": 2 00:25:59.945 } 00:25:59.945 ], 00:25:59.945 "driver_specific": { 00:25:59.945 "passthru": { 00:25:59.945 "name": "pt1", 00:25:59.945 "base_bdev_name": "malloc1" 00:25:59.945 } 00:25:59.945 } 00:25:59.945 }' 00:25:59.945 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:59.945 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:00.203 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:00.203 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:00.203 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:00.203 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:00.203 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:00.203 07:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:00.203 07:36:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:00.203 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:00.203 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:00.461 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:00.461 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:00.461 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:00.461 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:00.462 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:00.462 "name": "pt2", 00:26:00.462 "aliases": [ 00:26:00.462 "2df2163e-f352-5de8-98af-b848d7bebdef" 00:26:00.462 ], 00:26:00.462 "product_name": "passthru", 00:26:00.462 "block_size": 512, 00:26:00.462 "num_blocks": 65536, 00:26:00.462 "uuid": "2df2163e-f352-5de8-98af-b848d7bebdef", 00:26:00.462 "assigned_rate_limits": { 00:26:00.462 "rw_ios_per_sec": 0, 00:26:00.462 "rw_mbytes_per_sec": 0, 00:26:00.462 "r_mbytes_per_sec": 0, 00:26:00.462 "w_mbytes_per_sec": 0 00:26:00.462 }, 00:26:00.462 "claimed": true, 00:26:00.462 "claim_type": "exclusive_write", 00:26:00.462 "zoned": false, 00:26:00.462 "supported_io_types": { 00:26:00.462 "read": true, 00:26:00.462 "write": true, 00:26:00.462 "unmap": true, 00:26:00.462 "write_zeroes": true, 00:26:00.462 "flush": true, 00:26:00.462 "reset": true, 00:26:00.462 "compare": false, 00:26:00.462 "compare_and_write": false, 00:26:00.462 "abort": true, 00:26:00.462 "nvme_admin": false, 00:26:00.462 "nvme_io": false 00:26:00.462 }, 00:26:00.462 "memory_domains": [ 00:26:00.462 { 00:26:00.462 "dma_device_id": "system", 00:26:00.462 "dma_device_type": 1 00:26:00.462 }, 00:26:00.462 { 00:26:00.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.462 "dma_device_type": 2 00:26:00.462 } 00:26:00.462 ], 00:26:00.462 "driver_specific": { 00:26:00.462 "passthru": { 00:26:00.462 "name": "pt2", 00:26:00.462 "base_bdev_name": "malloc2" 00:26:00.462 } 00:26:00.462 } 00:26:00.462 }' 00:26:00.462 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:00.720 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:00.720 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:00.720 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:00.720 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:00.720 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:00.720 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:00.720 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:00.720 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:00.720 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:00.978 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:00.978 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:00.978 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:26:00.978 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:00.978 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:01.237 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:01.237 "name": "pt3", 00:26:01.237 "aliases": [ 00:26:01.237 "cb7a446e-cf77-57f3-83dd-0cecf822e48b" 00:26:01.237 ], 00:26:01.237 "product_name": "passthru", 00:26:01.237 "block_size": 512, 00:26:01.237 "num_blocks": 65536, 00:26:01.237 "uuid": "cb7a446e-cf77-57f3-83dd-0cecf822e48b", 00:26:01.237 "assigned_rate_limits": { 00:26:01.237 "rw_ios_per_sec": 0, 00:26:01.237 "rw_mbytes_per_sec": 0, 00:26:01.237 "r_mbytes_per_sec": 0, 00:26:01.237 "w_mbytes_per_sec": 0 00:26:01.237 }, 00:26:01.237 "claimed": true, 00:26:01.237 "claim_type": "exclusive_write", 00:26:01.237 "zoned": false, 00:26:01.237 "supported_io_types": { 00:26:01.237 "read": true, 00:26:01.237 "write": true, 00:26:01.237 "unmap": true, 00:26:01.237 "write_zeroes": true, 00:26:01.237 "flush": true, 00:26:01.237 "reset": true, 00:26:01.237 "compare": false, 00:26:01.237 "compare_and_write": false, 00:26:01.237 "abort": true, 00:26:01.237 "nvme_admin": false, 00:26:01.237 "nvme_io": false 00:26:01.237 }, 00:26:01.237 "memory_domains": [ 00:26:01.237 { 00:26:01.237 "dma_device_id": "system", 00:26:01.237 "dma_device_type": 1 00:26:01.237 }, 00:26:01.237 { 00:26:01.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:01.237 "dma_device_type": 2 00:26:01.237 } 00:26:01.237 ], 00:26:01.237 "driver_specific": { 00:26:01.237 "passthru": { 00:26:01.237 "name": "pt3", 00:26:01.237 "base_bdev_name": "malloc3" 00:26:01.237 } 00:26:01.237 } 00:26:01.237 }' 00:26:01.237 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:01.237 07:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:01.237 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:01.237 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:01.237 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:01.495 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:01.495 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:01.495 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:01.495 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:01.495 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:01.495 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:01.495 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:01.495 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:01.495 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:26:01.495 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:01.753 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:01.753 "name": "pt4", 00:26:01.753 "aliases": [ 
00:26:01.753 "c6b0938a-19b2-5731-a052-2cfd1cd3aee3" 00:26:01.753 ], 00:26:01.753 "product_name": "passthru", 00:26:01.753 "block_size": 512, 00:26:01.753 "num_blocks": 65536, 00:26:01.753 "uuid": "c6b0938a-19b2-5731-a052-2cfd1cd3aee3", 00:26:01.753 "assigned_rate_limits": { 00:26:01.753 "rw_ios_per_sec": 0, 00:26:01.753 "rw_mbytes_per_sec": 0, 00:26:01.753 "r_mbytes_per_sec": 0, 00:26:01.753 "w_mbytes_per_sec": 0 00:26:01.753 }, 00:26:01.753 "claimed": true, 00:26:01.753 "claim_type": "exclusive_write", 00:26:01.753 "zoned": false, 00:26:01.753 "supported_io_types": { 00:26:01.753 "read": true, 00:26:01.753 "write": true, 00:26:01.753 "unmap": true, 00:26:01.753 "write_zeroes": true, 00:26:01.753 "flush": true, 00:26:01.753 "reset": true, 00:26:01.753 "compare": false, 00:26:01.753 "compare_and_write": false, 00:26:01.753 "abort": true, 00:26:01.753 "nvme_admin": false, 00:26:01.753 "nvme_io": false 00:26:01.753 }, 00:26:01.753 "memory_domains": [ 00:26:01.753 { 00:26:01.753 "dma_device_id": "system", 00:26:01.753 "dma_device_type": 1 00:26:01.753 }, 00:26:01.753 { 00:26:01.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:01.753 "dma_device_type": 2 00:26:01.753 } 00:26:01.753 ], 00:26:01.753 "driver_specific": { 00:26:01.753 "passthru": { 00:26:01.753 "name": "pt4", 00:26:01.753 "base_bdev_name": "malloc4" 00:26:01.753 } 00:26:01.753 } 00:26:01.753 }' 00:26:01.753 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:02.011 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:02.011 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:02.011 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:02.011 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:02.011 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:02.012 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:02.012 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:02.012 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:02.012 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:02.269 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:02.269 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:02.269 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:02.269 07:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:26:02.527 [2024-07-12 07:36:36.238320] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:02.527 07:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=3e028ddb-47ce-403a-aba3-b95c567dcd61 00:26:02.527 07:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 3e028ddb-47ce-403a-aba3-b95c567dcd61 ']' 00:26:02.527 07:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:02.784 [2024-07-12 07:36:36.534145] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:02.784 
[2024-07-12 07:36:36.534200] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:02.784 [2024-07-12 07:36:36.534334] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:02.784 [2024-07-12 07:36:36.534444] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:02.784 [2024-07-12 07:36:36.534455] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:26:02.784 07:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.784 07:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:26:03.042 07:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:26:03.042 07:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:26:03.042 07:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:03.042 07:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:03.299 07:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:03.299 07:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:03.299 07:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:03.299 07:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:03.557 07:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:03.557 07:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:03.814 07:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:03.814 07:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:04.071 07:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:26:04.071 07:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:04.071 07:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:26:04.071 07:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:04.071 07:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:04.071 07:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:04.071 07:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:04.072 07:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:04.072 07:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:04.072 07:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:04.072 07:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:04.072 07:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:04.072 07:36:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:04.329 [2024-07-12 07:36:38.034385] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:04.329 [2024-07-12 07:36:38.036883] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:04.329 [2024-07-12 07:36:38.036939] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:04.329 [2024-07-12 07:36:38.036969] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:26:04.329 [2024-07-12 07:36:38.037019] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:04.329 [2024-07-12 07:36:38.037133] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:04.329 [2024-07-12 07:36:38.037191] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:26:04.329 [2024-07-12 07:36:38.037262] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:26:04.329 [2024-07-12 07:36:38.037304] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:04.330 [2024-07-12 07:36:38.037315] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:26:04.330 request: 00:26:04.330 { 00:26:04.330 "name": "raid_bdev1", 00:26:04.330 "raid_level": "raid1", 00:26:04.330 "base_bdevs": [ 00:26:04.330 "malloc1", 00:26:04.330 "malloc2", 00:26:04.330 "malloc3", 00:26:04.330 "malloc4" 00:26:04.330 ], 00:26:04.330 "superblock": false, 00:26:04.330 "method": "bdev_raid_create", 00:26:04.330 "req_id": 1 00:26:04.330 } 00:26:04.330 Got JSON-RPC error response 00:26:04.330 response: 00:26:04.330 { 00:26:04.330 "code": -17, 00:26:04.330 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:04.330 } 00:26:04.330 07:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:26:04.330 07:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:04.330 07:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:04.330 07:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:04.330 07:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.330 
07:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:26:04.587 07:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:26:04.587 07:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:26:04.587 07:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:04.845 [2024-07-12 07:36:38.494403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:04.845 [2024-07-12 07:36:38.494536] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:04.845 [2024-07-12 07:36:38.494577] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:26:04.845 [2024-07-12 07:36:38.494609] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:04.845 [2024-07-12 07:36:38.497469] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:04.845 [2024-07-12 07:36:38.497545] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:04.845 [2024-07-12 07:36:38.497655] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:04.845 [2024-07-12 07:36:38.497721] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:04.845 pt1 00:26:04.845 07:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:26:04.845 07:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:04.845 07:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:04.845 07:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:04.845 07:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:04.845 07:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:04.845 07:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:04.845 07:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:04.845 07:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:04.845 07:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:04.845 07:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.845 07:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.102 07:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:05.102 "name": "raid_bdev1", 00:26:05.102 "uuid": "3e028ddb-47ce-403a-aba3-b95c567dcd61", 00:26:05.102 "strip_size_kb": 0, 00:26:05.102 "state": "configuring", 00:26:05.102 "raid_level": "raid1", 00:26:05.102 "superblock": true, 00:26:05.102 "num_base_bdevs": 4, 00:26:05.102 "num_base_bdevs_discovered": 1, 00:26:05.102 "num_base_bdevs_operational": 4, 00:26:05.102 "base_bdevs_list": [ 00:26:05.102 { 00:26:05.102 "name": "pt1", 00:26:05.102 "uuid": "21f403e3-fc83-599a-a9b3-b5267797ab86", 00:26:05.102 "is_configured": true, 
00:26:05.102 "data_offset": 2048, 00:26:05.102 "data_size": 63488 00:26:05.102 }, 00:26:05.102 { 00:26:05.102 "name": null, 00:26:05.102 "uuid": "2df2163e-f352-5de8-98af-b848d7bebdef", 00:26:05.102 "is_configured": false, 00:26:05.102 "data_offset": 2048, 00:26:05.102 "data_size": 63488 00:26:05.102 }, 00:26:05.102 { 00:26:05.102 "name": null, 00:26:05.102 "uuid": "cb7a446e-cf77-57f3-83dd-0cecf822e48b", 00:26:05.102 "is_configured": false, 00:26:05.102 "data_offset": 2048, 00:26:05.102 "data_size": 63488 00:26:05.102 }, 00:26:05.102 { 00:26:05.102 "name": null, 00:26:05.102 "uuid": "c6b0938a-19b2-5731-a052-2cfd1cd3aee3", 00:26:05.102 "is_configured": false, 00:26:05.102 "data_offset": 2048, 00:26:05.102 "data_size": 63488 00:26:05.102 } 00:26:05.102 ] 00:26:05.102 }' 00:26:05.102 07:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:05.102 07:36:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.665 07:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:26:05.665 07:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:05.665 [2024-07-12 07:36:39.494582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:05.665 [2024-07-12 07:36:39.494710] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:05.665 [2024-07-12 07:36:39.494759] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:26:05.665 [2024-07-12 07:36:39.494783] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:05.665 [2024-07-12 07:36:39.495291] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:05.665 [2024-07-12 07:36:39.495349] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:05.665 [2024-07-12 07:36:39.495449] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:05.665 [2024-07-12 07:36:39.495474] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:05.665 pt2 00:26:05.665 07:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:05.922 [2024-07-12 07:36:39.686648] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:05.922 07:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:26:05.922 07:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:05.922 07:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:05.922 07:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:05.922 07:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:05.922 07:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:05.922 07:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:05.922 07:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:05.922 07:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs_discovered 00:26:05.922 07:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:05.922 07:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:05.922 07:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.179 07:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:06.179 "name": "raid_bdev1", 00:26:06.179 "uuid": "3e028ddb-47ce-403a-aba3-b95c567dcd61", 00:26:06.179 "strip_size_kb": 0, 00:26:06.179 "state": "configuring", 00:26:06.179 "raid_level": "raid1", 00:26:06.179 "superblock": true, 00:26:06.179 "num_base_bdevs": 4, 00:26:06.179 "num_base_bdevs_discovered": 1, 00:26:06.179 "num_base_bdevs_operational": 4, 00:26:06.179 "base_bdevs_list": [ 00:26:06.179 { 00:26:06.179 "name": "pt1", 00:26:06.179 "uuid": "21f403e3-fc83-599a-a9b3-b5267797ab86", 00:26:06.179 "is_configured": true, 00:26:06.179 "data_offset": 2048, 00:26:06.179 "data_size": 63488 00:26:06.179 }, 00:26:06.179 { 00:26:06.179 "name": null, 00:26:06.179 "uuid": "2df2163e-f352-5de8-98af-b848d7bebdef", 00:26:06.179 "is_configured": false, 00:26:06.179 "data_offset": 2048, 00:26:06.179 "data_size": 63488 00:26:06.179 }, 00:26:06.179 { 00:26:06.179 "name": null, 00:26:06.179 "uuid": "cb7a446e-cf77-57f3-83dd-0cecf822e48b", 00:26:06.179 "is_configured": false, 00:26:06.179 "data_offset": 2048, 00:26:06.179 "data_size": 63488 00:26:06.179 }, 00:26:06.179 { 00:26:06.179 "name": null, 00:26:06.179 "uuid": "c6b0938a-19b2-5731-a052-2cfd1cd3aee3", 00:26:06.179 "is_configured": false, 00:26:06.179 "data_offset": 2048, 00:26:06.179 "data_size": 63488 00:26:06.179 } 00:26:06.179 ] 00:26:06.179 }' 00:26:06.179 07:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:06.179 07:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.743 07:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:26:06.743 07:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:06.743 07:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:07.308 [2024-07-12 07:36:40.890869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:07.308 [2024-07-12 07:36:40.890986] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:07.308 [2024-07-12 07:36:40.891032] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:26:07.308 [2024-07-12 07:36:40.891066] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:07.308 [2024-07-12 07:36:40.891606] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:07.308 [2024-07-12 07:36:40.891664] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:07.308 [2024-07-12 07:36:40.891766] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:07.308 [2024-07-12 07:36:40.891790] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:07.308 pt2 00:26:07.308 07:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ 
)) 00:26:07.308 07:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:07.308 07:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:07.308 [2024-07-12 07:36:41.098890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:07.308 [2024-07-12 07:36:41.099006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:07.308 [2024-07-12 07:36:41.099042] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:26:07.308 [2024-07-12 07:36:41.099072] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:07.308 [2024-07-12 07:36:41.099546] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:07.308 [2024-07-12 07:36:41.099604] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:07.308 [2024-07-12 07:36:41.099691] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:07.308 [2024-07-12 07:36:41.099713] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:07.308 pt3 00:26:07.308 07:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:26:07.308 07:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:07.308 07:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:07.566 [2024-07-12 07:36:41.298918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:07.566 [2024-07-12 07:36:41.299041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:07.566 [2024-07-12 07:36:41.299081] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:07.566 [2024-07-12 07:36:41.299111] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:07.566 [2024-07-12 07:36:41.299631] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:07.566 [2024-07-12 07:36:41.299692] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:07.566 [2024-07-12 07:36:41.299787] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:07.566 [2024-07-12 07:36:41.299818] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:07.566 [2024-07-12 07:36:41.299974] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:26:07.566 [2024-07-12 07:36:41.299991] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:07.566 [2024-07-12 07:36:41.300078] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:26:07.566 [2024-07-12 07:36:41.300391] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:26:07.566 [2024-07-12 07:36:41.300408] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:26:07.566 [2024-07-12 07:36:41.300515] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:07.566 pt4 00:26:07.566 07:36:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:26:07.566 07:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:07.566 07:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:07.566 07:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:07.566 07:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:07.566 07:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:07.566 07:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:07.566 07:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:07.566 07:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:07.566 07:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:07.566 07:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:07.566 07:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:07.566 07:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.566 07:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:07.824 07:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:07.824 "name": "raid_bdev1", 00:26:07.824 "uuid": "3e028ddb-47ce-403a-aba3-b95c567dcd61", 00:26:07.824 "strip_size_kb": 0, 00:26:07.824 "state": "online", 00:26:07.824 "raid_level": "raid1", 00:26:07.824 "superblock": true, 00:26:07.824 "num_base_bdevs": 4, 00:26:07.824 "num_base_bdevs_discovered": 4, 00:26:07.824 "num_base_bdevs_operational": 4, 00:26:07.824 "base_bdevs_list": [ 00:26:07.824 { 00:26:07.824 "name": "pt1", 00:26:07.824 "uuid": "21f403e3-fc83-599a-a9b3-b5267797ab86", 00:26:07.824 "is_configured": true, 00:26:07.824 "data_offset": 2048, 00:26:07.824 "data_size": 63488 00:26:07.824 }, 00:26:07.824 { 00:26:07.824 "name": "pt2", 00:26:07.824 "uuid": "2df2163e-f352-5de8-98af-b848d7bebdef", 00:26:07.824 "is_configured": true, 00:26:07.824 "data_offset": 2048, 00:26:07.824 "data_size": 63488 00:26:07.824 }, 00:26:07.824 { 00:26:07.824 "name": "pt3", 00:26:07.824 "uuid": "cb7a446e-cf77-57f3-83dd-0cecf822e48b", 00:26:07.824 "is_configured": true, 00:26:07.824 "data_offset": 2048, 00:26:07.824 "data_size": 63488 00:26:07.824 }, 00:26:07.824 { 00:26:07.824 "name": "pt4", 00:26:07.824 "uuid": "c6b0938a-19b2-5731-a052-2cfd1cd3aee3", 00:26:07.824 "is_configured": true, 00:26:07.824 "data_offset": 2048, 00:26:07.824 "data_size": 63488 00:26:07.824 } 00:26:07.824 ] 00:26:07.824 }' 00:26:07.824 07:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:07.824 07:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.390 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:26:08.390 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:08.390 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:08.390 07:36:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:08.390 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:08.390 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:08.390 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:08.390 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:08.649 [2024-07-12 07:36:42.347376] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:08.649 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:08.649 "name": "raid_bdev1", 00:26:08.649 "aliases": [ 00:26:08.649 "3e028ddb-47ce-403a-aba3-b95c567dcd61" 00:26:08.649 ], 00:26:08.649 "product_name": "Raid Volume", 00:26:08.649 "block_size": 512, 00:26:08.649 "num_blocks": 63488, 00:26:08.649 "uuid": "3e028ddb-47ce-403a-aba3-b95c567dcd61", 00:26:08.649 "assigned_rate_limits": { 00:26:08.649 "rw_ios_per_sec": 0, 00:26:08.649 "rw_mbytes_per_sec": 0, 00:26:08.649 "r_mbytes_per_sec": 0, 00:26:08.649 "w_mbytes_per_sec": 0 00:26:08.649 }, 00:26:08.649 "claimed": false, 00:26:08.649 "zoned": false, 00:26:08.649 "supported_io_types": { 00:26:08.649 "read": true, 00:26:08.649 "write": true, 00:26:08.649 "unmap": false, 00:26:08.649 "write_zeroes": true, 00:26:08.649 "flush": false, 00:26:08.649 "reset": true, 00:26:08.649 "compare": false, 00:26:08.649 "compare_and_write": false, 00:26:08.649 "abort": false, 00:26:08.649 "nvme_admin": false, 00:26:08.649 "nvme_io": false 00:26:08.649 }, 00:26:08.649 "memory_domains": [ 00:26:08.649 { 00:26:08.649 "dma_device_id": "system", 00:26:08.649 "dma_device_type": 1 00:26:08.649 }, 00:26:08.649 { 00:26:08.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.649 "dma_device_type": 2 00:26:08.649 }, 00:26:08.649 { 00:26:08.649 "dma_device_id": "system", 00:26:08.649 "dma_device_type": 1 00:26:08.649 }, 00:26:08.649 { 00:26:08.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.649 "dma_device_type": 2 00:26:08.649 }, 00:26:08.649 { 00:26:08.649 "dma_device_id": "system", 00:26:08.649 "dma_device_type": 1 00:26:08.649 }, 00:26:08.649 { 00:26:08.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.649 "dma_device_type": 2 00:26:08.649 }, 00:26:08.649 { 00:26:08.649 "dma_device_id": "system", 00:26:08.649 "dma_device_type": 1 00:26:08.649 }, 00:26:08.649 { 00:26:08.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.649 "dma_device_type": 2 00:26:08.649 } 00:26:08.649 ], 00:26:08.649 "driver_specific": { 00:26:08.649 "raid": { 00:26:08.649 "uuid": "3e028ddb-47ce-403a-aba3-b95c567dcd61", 00:26:08.649 "strip_size_kb": 0, 00:26:08.649 "state": "online", 00:26:08.649 "raid_level": "raid1", 00:26:08.649 "superblock": true, 00:26:08.649 "num_base_bdevs": 4, 00:26:08.649 "num_base_bdevs_discovered": 4, 00:26:08.649 "num_base_bdevs_operational": 4, 00:26:08.649 "base_bdevs_list": [ 00:26:08.649 { 00:26:08.649 "name": "pt1", 00:26:08.649 "uuid": "21f403e3-fc83-599a-a9b3-b5267797ab86", 00:26:08.649 "is_configured": true, 00:26:08.649 "data_offset": 2048, 00:26:08.649 "data_size": 63488 00:26:08.649 }, 00:26:08.649 { 00:26:08.649 "name": "pt2", 00:26:08.649 "uuid": "2df2163e-f352-5de8-98af-b848d7bebdef", 00:26:08.649 "is_configured": true, 00:26:08.649 "data_offset": 2048, 00:26:08.649 "data_size": 63488 00:26:08.649 }, 00:26:08.649 { 
00:26:08.649 "name": "pt3", 00:26:08.649 "uuid": "cb7a446e-cf77-57f3-83dd-0cecf822e48b", 00:26:08.649 "is_configured": true, 00:26:08.649 "data_offset": 2048, 00:26:08.649 "data_size": 63488 00:26:08.649 }, 00:26:08.649 { 00:26:08.649 "name": "pt4", 00:26:08.649 "uuid": "c6b0938a-19b2-5731-a052-2cfd1cd3aee3", 00:26:08.649 "is_configured": true, 00:26:08.649 "data_offset": 2048, 00:26:08.649 "data_size": 63488 00:26:08.649 } 00:26:08.649 ] 00:26:08.649 } 00:26:08.649 } 00:26:08.649 }' 00:26:08.649 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:08.649 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:08.649 pt2 00:26:08.649 pt3 00:26:08.649 pt4' 00:26:08.649 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:08.649 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:08.649 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:08.907 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:08.907 "name": "pt1", 00:26:08.907 "aliases": [ 00:26:08.907 "21f403e3-fc83-599a-a9b3-b5267797ab86" 00:26:08.907 ], 00:26:08.907 "product_name": "passthru", 00:26:08.907 "block_size": 512, 00:26:08.907 "num_blocks": 65536, 00:26:08.907 "uuid": "21f403e3-fc83-599a-a9b3-b5267797ab86", 00:26:08.907 "assigned_rate_limits": { 00:26:08.907 "rw_ios_per_sec": 0, 00:26:08.907 "rw_mbytes_per_sec": 0, 00:26:08.907 "r_mbytes_per_sec": 0, 00:26:08.907 "w_mbytes_per_sec": 0 00:26:08.907 }, 00:26:08.907 "claimed": true, 00:26:08.907 "claim_type": "exclusive_write", 00:26:08.907 "zoned": false, 00:26:08.907 "supported_io_types": { 00:26:08.907 "read": true, 00:26:08.907 "write": true, 00:26:08.907 "unmap": true, 00:26:08.907 "write_zeroes": true, 00:26:08.907 "flush": true, 00:26:08.907 "reset": true, 00:26:08.907 "compare": false, 00:26:08.907 "compare_and_write": false, 00:26:08.907 "abort": true, 00:26:08.907 "nvme_admin": false, 00:26:08.907 "nvme_io": false 00:26:08.907 }, 00:26:08.907 "memory_domains": [ 00:26:08.907 { 00:26:08.907 "dma_device_id": "system", 00:26:08.907 "dma_device_type": 1 00:26:08.907 }, 00:26:08.907 { 00:26:08.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.907 "dma_device_type": 2 00:26:08.907 } 00:26:08.907 ], 00:26:08.907 "driver_specific": { 00:26:08.907 "passthru": { 00:26:08.907 "name": "pt1", 00:26:08.907 "base_bdev_name": "malloc1" 00:26:08.907 } 00:26:08.907 } 00:26:08.907 }' 00:26:08.907 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:08.907 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:08.908 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:08.908 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:09.166 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:09.166 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:09.166 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:09.166 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:09.166 07:36:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:09.166 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:09.166 07:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:09.166 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:09.166 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:09.166 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:09.166 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:09.423 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:09.423 "name": "pt2", 00:26:09.423 "aliases": [ 00:26:09.423 "2df2163e-f352-5de8-98af-b848d7bebdef" 00:26:09.423 ], 00:26:09.423 "product_name": "passthru", 00:26:09.423 "block_size": 512, 00:26:09.423 "num_blocks": 65536, 00:26:09.423 "uuid": "2df2163e-f352-5de8-98af-b848d7bebdef", 00:26:09.423 "assigned_rate_limits": { 00:26:09.423 "rw_ios_per_sec": 0, 00:26:09.423 "rw_mbytes_per_sec": 0, 00:26:09.423 "r_mbytes_per_sec": 0, 00:26:09.423 "w_mbytes_per_sec": 0 00:26:09.423 }, 00:26:09.423 "claimed": true, 00:26:09.423 "claim_type": "exclusive_write", 00:26:09.423 "zoned": false, 00:26:09.423 "supported_io_types": { 00:26:09.423 "read": true, 00:26:09.423 "write": true, 00:26:09.423 "unmap": true, 00:26:09.423 "write_zeroes": true, 00:26:09.423 "flush": true, 00:26:09.423 "reset": true, 00:26:09.423 "compare": false, 00:26:09.423 "compare_and_write": false, 00:26:09.423 "abort": true, 00:26:09.423 "nvme_admin": false, 00:26:09.423 "nvme_io": false 00:26:09.423 }, 00:26:09.423 "memory_domains": [ 00:26:09.423 { 00:26:09.423 "dma_device_id": "system", 00:26:09.423 "dma_device_type": 1 00:26:09.423 }, 00:26:09.423 { 00:26:09.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.423 "dma_device_type": 2 00:26:09.423 } 00:26:09.423 ], 00:26:09.423 "driver_specific": { 00:26:09.423 "passthru": { 00:26:09.423 "name": "pt2", 00:26:09.423 "base_bdev_name": "malloc2" 00:26:09.423 } 00:26:09.423 } 00:26:09.423 }' 00:26:09.423 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:09.424 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:09.424 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:09.424 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:09.681 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:09.681 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:09.681 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:09.681 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:09.681 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:09.681 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:09.681 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:09.681 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:09.681 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:26:09.681 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:09.681 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:09.938 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:09.938 "name": "pt3", 00:26:09.938 "aliases": [ 00:26:09.938 "cb7a446e-cf77-57f3-83dd-0cecf822e48b" 00:26:09.938 ], 00:26:09.938 "product_name": "passthru", 00:26:09.938 "block_size": 512, 00:26:09.938 "num_blocks": 65536, 00:26:09.938 "uuid": "cb7a446e-cf77-57f3-83dd-0cecf822e48b", 00:26:09.938 "assigned_rate_limits": { 00:26:09.938 "rw_ios_per_sec": 0, 00:26:09.938 "rw_mbytes_per_sec": 0, 00:26:09.938 "r_mbytes_per_sec": 0, 00:26:09.938 "w_mbytes_per_sec": 0 00:26:09.938 }, 00:26:09.938 "claimed": true, 00:26:09.938 "claim_type": "exclusive_write", 00:26:09.938 "zoned": false, 00:26:09.938 "supported_io_types": { 00:26:09.938 "read": true, 00:26:09.938 "write": true, 00:26:09.938 "unmap": true, 00:26:09.938 "write_zeroes": true, 00:26:09.938 "flush": true, 00:26:09.938 "reset": true, 00:26:09.938 "compare": false, 00:26:09.938 "compare_and_write": false, 00:26:09.938 "abort": true, 00:26:09.938 "nvme_admin": false, 00:26:09.938 "nvme_io": false 00:26:09.938 }, 00:26:09.938 "memory_domains": [ 00:26:09.938 { 00:26:09.938 "dma_device_id": "system", 00:26:09.938 "dma_device_type": 1 00:26:09.938 }, 00:26:09.938 { 00:26:09.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.938 "dma_device_type": 2 00:26:09.938 } 00:26:09.938 ], 00:26:09.938 "driver_specific": { 00:26:09.938 "passthru": { 00:26:09.938 "name": "pt3", 00:26:09.938 "base_bdev_name": "malloc3" 00:26:09.938 } 00:26:09.938 } 00:26:09.938 }' 00:26:09.938 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:09.938 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:09.938 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:09.938 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:09.938 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:09.938 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:09.938 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:10.195 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:10.195 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:10.195 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:10.195 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:10.195 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:10.195 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:10.195 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:10.195 07:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:26:10.454 07:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:10.454 "name": "pt4", 00:26:10.454 "aliases": [ 
00:26:10.454 "c6b0938a-19b2-5731-a052-2cfd1cd3aee3" 00:26:10.454 ], 00:26:10.454 "product_name": "passthru", 00:26:10.454 "block_size": 512, 00:26:10.454 "num_blocks": 65536, 00:26:10.454 "uuid": "c6b0938a-19b2-5731-a052-2cfd1cd3aee3", 00:26:10.454 "assigned_rate_limits": { 00:26:10.454 "rw_ios_per_sec": 0, 00:26:10.454 "rw_mbytes_per_sec": 0, 00:26:10.454 "r_mbytes_per_sec": 0, 00:26:10.454 "w_mbytes_per_sec": 0 00:26:10.454 }, 00:26:10.454 "claimed": true, 00:26:10.454 "claim_type": "exclusive_write", 00:26:10.454 "zoned": false, 00:26:10.454 "supported_io_types": { 00:26:10.454 "read": true, 00:26:10.454 "write": true, 00:26:10.454 "unmap": true, 00:26:10.454 "write_zeroes": true, 00:26:10.454 "flush": true, 00:26:10.454 "reset": true, 00:26:10.454 "compare": false, 00:26:10.454 "compare_and_write": false, 00:26:10.454 "abort": true, 00:26:10.454 "nvme_admin": false, 00:26:10.454 "nvme_io": false 00:26:10.454 }, 00:26:10.454 "memory_domains": [ 00:26:10.454 { 00:26:10.454 "dma_device_id": "system", 00:26:10.454 "dma_device_type": 1 00:26:10.454 }, 00:26:10.454 { 00:26:10.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.454 "dma_device_type": 2 00:26:10.454 } 00:26:10.454 ], 00:26:10.454 "driver_specific": { 00:26:10.454 "passthru": { 00:26:10.454 "name": "pt4", 00:26:10.454 "base_bdev_name": "malloc4" 00:26:10.454 } 00:26:10.454 } 00:26:10.454 }' 00:26:10.454 07:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:10.454 07:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:10.713 07:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:10.713 07:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:10.713 07:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:10.713 07:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:10.713 07:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:10.713 07:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:10.713 07:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:10.713 07:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:10.713 07:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:10.713 07:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:10.713 07:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:10.713 07:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:26:10.971 [2024-07-12 07:36:44.751884] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:10.971 07:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 3e028ddb-47ce-403a-aba3-b95c567dcd61 '!=' 3e028ddb-47ce-403a-aba3-b95c567dcd61 ']' 00:26:10.971 07:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:26:10.971 07:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:10.971 07:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:10.971 07:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:11.229 [2024-07-12 07:36:45.031752] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:11.229 07:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:11.229 07:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:11.229 07:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:11.229 07:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:11.229 07:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:11.229 07:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:11.229 07:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:11.229 07:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:11.229 07:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:11.229 07:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:11.229 07:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.229 07:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:11.487 07:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:11.487 "name": "raid_bdev1", 00:26:11.487 "uuid": "3e028ddb-47ce-403a-aba3-b95c567dcd61", 00:26:11.487 "strip_size_kb": 0, 00:26:11.487 "state": "online", 00:26:11.487 "raid_level": "raid1", 00:26:11.487 "superblock": true, 00:26:11.487 "num_base_bdevs": 4, 00:26:11.487 "num_base_bdevs_discovered": 3, 00:26:11.487 "num_base_bdevs_operational": 3, 00:26:11.487 "base_bdevs_list": [ 00:26:11.487 { 00:26:11.487 "name": null, 00:26:11.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.487 "is_configured": false, 00:26:11.487 "data_offset": 2048, 00:26:11.487 "data_size": 63488 00:26:11.487 }, 00:26:11.487 { 00:26:11.487 "name": "pt2", 00:26:11.487 "uuid": "2df2163e-f352-5de8-98af-b848d7bebdef", 00:26:11.487 "is_configured": true, 00:26:11.487 "data_offset": 2048, 00:26:11.487 "data_size": 63488 00:26:11.487 }, 00:26:11.487 { 00:26:11.487 "name": "pt3", 00:26:11.487 "uuid": "cb7a446e-cf77-57f3-83dd-0cecf822e48b", 00:26:11.487 "is_configured": true, 00:26:11.487 "data_offset": 2048, 00:26:11.487 "data_size": 63488 00:26:11.487 }, 00:26:11.487 { 00:26:11.487 "name": "pt4", 00:26:11.487 "uuid": "c6b0938a-19b2-5731-a052-2cfd1cd3aee3", 00:26:11.487 "is_configured": true, 00:26:11.487 "data_offset": 2048, 00:26:11.487 "data_size": 63488 00:26:11.487 } 00:26:11.487 ] 00:26:11.487 }' 00:26:11.487 07:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:11.487 07:36:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.053 07:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:12.311 [2024-07-12 07:36:46.175903] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:12.311 [2024-07-12 07:36:46.175957] 
bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:12.311 [2024-07-12 07:36:46.176056] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:12.311 [2024-07-12 07:36:46.176145] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:12.311 [2024-07-12 07:36:46.176154] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:26:12.568 07:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.568 07:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:26:12.826 07:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:26:12.826 07:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:26:12.826 07:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:26:12.826 07:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:26:12.826 07:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:12.826 07:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:26:12.826 07:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:26:12.826 07:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:13.084 07:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:26:13.084 07:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:26:13.084 07:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:13.343 07:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:26:13.343 07:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:26:13.343 07:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:26:13.343 07:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:26:13.343 07:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:13.601 [2024-07-12 07:36:47.228063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:13.601 [2024-07-12 07:36:47.228194] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:13.601 [2024-07-12 07:36:47.228230] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:26:13.601 [2024-07-12 07:36:47.228261] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:13.601 [2024-07-12 07:36:47.231320] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:13.602 [2024-07-12 07:36:47.231419] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:13.602 [2024-07-12 07:36:47.231538] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:13.602 [2024-07-12 07:36:47.231582] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:13.602 pt2 00:26:13.602 07:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:13.602 07:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:13.602 07:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:13.602 07:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:13.602 07:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:13.602 07:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:13.602 07:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:13.602 07:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:13.602 07:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:13.602 07:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:13.602 07:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.602 07:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:13.860 07:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:13.860 "name": "raid_bdev1", 00:26:13.860 "uuid": "3e028ddb-47ce-403a-aba3-b95c567dcd61", 00:26:13.860 "strip_size_kb": 0, 00:26:13.860 "state": "configuring", 00:26:13.860 "raid_level": "raid1", 00:26:13.860 "superblock": true, 00:26:13.860 "num_base_bdevs": 4, 00:26:13.860 "num_base_bdevs_discovered": 1, 00:26:13.860 "num_base_bdevs_operational": 3, 00:26:13.860 "base_bdevs_list": [ 00:26:13.860 { 00:26:13.860 "name": null, 00:26:13.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.860 "is_configured": false, 00:26:13.860 "data_offset": 2048, 00:26:13.860 "data_size": 63488 00:26:13.860 }, 00:26:13.860 { 00:26:13.860 "name": "pt2", 00:26:13.860 "uuid": "2df2163e-f352-5de8-98af-b848d7bebdef", 00:26:13.860 "is_configured": true, 00:26:13.860 "data_offset": 2048, 00:26:13.860 "data_size": 63488 00:26:13.860 }, 00:26:13.860 { 00:26:13.860 "name": null, 00:26:13.860 "uuid": "cb7a446e-cf77-57f3-83dd-0cecf822e48b", 00:26:13.860 "is_configured": false, 00:26:13.860 "data_offset": 2048, 00:26:13.860 "data_size": 63488 00:26:13.860 }, 00:26:13.860 { 00:26:13.860 "name": null, 00:26:13.860 "uuid": "c6b0938a-19b2-5731-a052-2cfd1cd3aee3", 00:26:13.860 "is_configured": false, 00:26:13.860 "data_offset": 2048, 00:26:13.860 "data_size": 63488 00:26:13.860 } 00:26:13.860 ] 00:26:13.860 }' 00:26:13.860 07:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:13.860 07:36:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.427 07:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:26:14.427 07:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:26:14.427 07:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:14.685 [2024-07-12 07:36:48.380321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:14.685 [2024-07-12 07:36:48.380705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:14.685 [2024-07-12 07:36:48.380789] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:26:14.685 [2024-07-12 07:36:48.380883] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:14.685 [2024-07-12 07:36:48.381439] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:14.685 [2024-07-12 07:36:48.381601] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:14.685 [2024-07-12 07:36:48.381799] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:14.685 [2024-07-12 07:36:48.381897] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:14.685 pt3 00:26:14.685 07:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:14.685 07:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:14.685 07:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:14.685 07:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:14.685 07:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:14.685 07:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:14.685 07:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:14.685 07:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:14.685 07:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:14.685 07:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:14.685 07:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.685 07:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.943 07:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:14.943 "name": "raid_bdev1", 00:26:14.943 "uuid": "3e028ddb-47ce-403a-aba3-b95c567dcd61", 00:26:14.943 "strip_size_kb": 0, 00:26:14.943 "state": "configuring", 00:26:14.943 "raid_level": "raid1", 00:26:14.943 "superblock": true, 00:26:14.943 "num_base_bdevs": 4, 00:26:14.943 "num_base_bdevs_discovered": 2, 00:26:14.943 "num_base_bdevs_operational": 3, 00:26:14.943 "base_bdevs_list": [ 00:26:14.943 { 00:26:14.943 "name": null, 00:26:14.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.943 "is_configured": false, 00:26:14.943 "data_offset": 2048, 00:26:14.943 "data_size": 63488 00:26:14.943 }, 00:26:14.943 { 00:26:14.943 "name": "pt2", 00:26:14.943 "uuid": "2df2163e-f352-5de8-98af-b848d7bebdef", 00:26:14.943 "is_configured": true, 00:26:14.943 "data_offset": 2048, 00:26:14.943 "data_size": 63488 00:26:14.943 }, 00:26:14.943 { 00:26:14.943 "name": "pt3", 00:26:14.943 
"uuid": "cb7a446e-cf77-57f3-83dd-0cecf822e48b", 00:26:14.943 "is_configured": true, 00:26:14.943 "data_offset": 2048, 00:26:14.943 "data_size": 63488 00:26:14.943 }, 00:26:14.943 { 00:26:14.943 "name": null, 00:26:14.943 "uuid": "c6b0938a-19b2-5731-a052-2cfd1cd3aee3", 00:26:14.943 "is_configured": false, 00:26:14.943 "data_offset": 2048, 00:26:14.943 "data_size": 63488 00:26:14.943 } 00:26:14.943 ] 00:26:14.943 }' 00:26:14.943 07:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:14.944 07:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.511 07:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:26:15.511 07:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:26:15.511 07:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:26:15.511 07:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:15.771 [2024-07-12 07:36:49.421977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:15.771 [2024-07-12 07:36:49.422369] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:15.771 [2024-07-12 07:36:49.422452] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:26:15.771 [2024-07-12 07:36:49.422554] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:15.771 [2024-07-12 07:36:49.423107] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:15.771 [2024-07-12 07:36:49.423261] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:15.771 [2024-07-12 07:36:49.423447] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:15.771 [2024-07-12 07:36:49.423501] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:15.771 [2024-07-12 07:36:49.423733] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:26:15.771 [2024-07-12 07:36:49.423864] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:15.771 [2024-07-12 07:36:49.423983] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80 00:26:15.771 [2024-07-12 07:36:49.424394] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:26:15.772 [2024-07-12 07:36:49.424509] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:26:15.772 [2024-07-12 07:36:49.424700] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:15.772 pt4 00:26:15.772 07:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:15.772 07:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:15.772 07:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:15.772 07:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:15.772 07:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:15.772 07:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # 
local num_base_bdevs_operational=3 00:26:15.772 07:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:15.772 07:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:15.772 07:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:15.772 07:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:15.772 07:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.772 07:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:15.772 07:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:15.772 "name": "raid_bdev1", 00:26:15.772 "uuid": "3e028ddb-47ce-403a-aba3-b95c567dcd61", 00:26:15.772 "strip_size_kb": 0, 00:26:15.772 "state": "online", 00:26:15.772 "raid_level": "raid1", 00:26:15.772 "superblock": true, 00:26:15.772 "num_base_bdevs": 4, 00:26:15.772 "num_base_bdevs_discovered": 3, 00:26:15.772 "num_base_bdevs_operational": 3, 00:26:15.772 "base_bdevs_list": [ 00:26:15.772 { 00:26:15.772 "name": null, 00:26:15.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:15.772 "is_configured": false, 00:26:15.772 "data_offset": 2048, 00:26:15.772 "data_size": 63488 00:26:15.772 }, 00:26:15.772 { 00:26:15.772 "name": "pt2", 00:26:15.772 "uuid": "2df2163e-f352-5de8-98af-b848d7bebdef", 00:26:15.772 "is_configured": true, 00:26:15.772 "data_offset": 2048, 00:26:15.772 "data_size": 63488 00:26:15.772 }, 00:26:15.772 { 00:26:15.772 "name": "pt3", 00:26:15.772 "uuid": "cb7a446e-cf77-57f3-83dd-0cecf822e48b", 00:26:15.772 "is_configured": true, 00:26:15.772 "data_offset": 2048, 00:26:15.772 "data_size": 63488 00:26:15.772 }, 00:26:15.772 { 00:26:15.772 "name": "pt4", 00:26:15.772 "uuid": "c6b0938a-19b2-5731-a052-2cfd1cd3aee3", 00:26:15.772 "is_configured": true, 00:26:15.772 "data_offset": 2048, 00:26:15.772 "data_size": 63488 00:26:15.772 } 00:26:15.772 ] 00:26:15.772 }' 00:26:15.772 07:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:15.772 07:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.709 07:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:16.709 [2024-07-12 07:36:50.517919] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:16.709 [2024-07-12 07:36:50.518213] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:16.709 [2024-07-12 07:36:50.518441] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:16.709 [2024-07-12 07:36:50.518641] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:16.709 [2024-07-12 07:36:50.518722] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:26:16.709 07:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.709 07:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:26:16.999 07:36:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@526 -- # raid_bdev= 00:26:16.999 07:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:26:16.999 07:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:26:16.999 07:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:26:16.999 07:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:17.284 07:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:17.543 [2024-07-12 07:36:51.197920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:17.543 [2024-07-12 07:36:51.198298] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:17.543 [2024-07-12 07:36:51.198448] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:26:17.543 [2024-07-12 07:36:51.198549] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:17.543 [2024-07-12 07:36:51.201357] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:17.543 [2024-07-12 07:36:51.201569] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:17.543 [2024-07-12 07:36:51.201753] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:17.543 [2024-07-12 07:36:51.201893] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:17.543 [2024-07-12 07:36:51.202154] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:17.543 [2024-07-12 07:36:51.202242] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:17.543 [2024-07-12 07:36:51.202304] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:26:17.543 [2024-07-12 07:36:51.202429] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:17.543 [2024-07-12 07:36:51.202676] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:17.543 pt1 00:26:17.543 07:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:26:17.543 07:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:17.543 07:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:17.543 07:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:17.543 07:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:17.543 07:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:17.543 07:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:17.543 07:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:17.543 07:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:17.543 07:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:17.543 07:36:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:17.543 07:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.543 07:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:17.802 07:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:17.802 "name": "raid_bdev1", 00:26:17.802 "uuid": "3e028ddb-47ce-403a-aba3-b95c567dcd61", 00:26:17.802 "strip_size_kb": 0, 00:26:17.802 "state": "configuring", 00:26:17.802 "raid_level": "raid1", 00:26:17.802 "superblock": true, 00:26:17.802 "num_base_bdevs": 4, 00:26:17.802 "num_base_bdevs_discovered": 2, 00:26:17.802 "num_base_bdevs_operational": 3, 00:26:17.802 "base_bdevs_list": [ 00:26:17.802 { 00:26:17.802 "name": null, 00:26:17.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.802 "is_configured": false, 00:26:17.802 "data_offset": 2048, 00:26:17.802 "data_size": 63488 00:26:17.802 }, 00:26:17.802 { 00:26:17.802 "name": "pt2", 00:26:17.802 "uuid": "2df2163e-f352-5de8-98af-b848d7bebdef", 00:26:17.802 "is_configured": true, 00:26:17.802 "data_offset": 2048, 00:26:17.802 "data_size": 63488 00:26:17.802 }, 00:26:17.802 { 00:26:17.802 "name": "pt3", 00:26:17.802 "uuid": "cb7a446e-cf77-57f3-83dd-0cecf822e48b", 00:26:17.802 "is_configured": true, 00:26:17.802 "data_offset": 2048, 00:26:17.802 "data_size": 63488 00:26:17.802 }, 00:26:17.802 { 00:26:17.802 "name": null, 00:26:17.802 "uuid": "c6b0938a-19b2-5731-a052-2cfd1cd3aee3", 00:26:17.802 "is_configured": false, 00:26:17.802 "data_offset": 2048, 00:26:17.802 "data_size": 63488 00:26:17.802 } 00:26:17.802 ] 00:26:17.802 }' 00:26:17.802 07:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:17.802 07:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.371 07:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:18.371 07:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:26:18.630 07:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:26:18.630 07:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:18.889 [2024-07-12 07:36:52.578543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:18.889 [2024-07-12 07:36:52.578872] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:18.889 [2024-07-12 07:36:52.578964] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:26:18.889 [2024-07-12 07:36:52.579086] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:18.889 [2024-07-12 07:36:52.579580] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:18.889 [2024-07-12 07:36:52.579744] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:18.889 [2024-07-12 07:36:52.579911] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:18.889 [2024-07-12 07:36:52.580067] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:18.889 [2024-07-12 07:36:52.580234] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:26:18.889 [2024-07-12 07:36:52.580380] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:18.889 [2024-07-12 07:36:52.580497] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:26:18.889 [2024-07-12 07:36:52.580889] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:26:18.889 [2024-07-12 07:36:52.581002] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:26:18.889 [2024-07-12 07:36:52.581187] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:18.889 pt4 00:26:18.889 07:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:18.889 07:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:18.889 07:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:18.889 07:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:18.889 07:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:18.889 07:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:18.889 07:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:18.889 07:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:18.889 07:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:18.889 07:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:18.889 07:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.889 07:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:19.149 07:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:19.149 "name": "raid_bdev1", 00:26:19.149 "uuid": "3e028ddb-47ce-403a-aba3-b95c567dcd61", 00:26:19.149 "strip_size_kb": 0, 00:26:19.149 "state": "online", 00:26:19.149 "raid_level": "raid1", 00:26:19.149 "superblock": true, 00:26:19.149 "num_base_bdevs": 4, 00:26:19.149 "num_base_bdevs_discovered": 3, 00:26:19.149 "num_base_bdevs_operational": 3, 00:26:19.149 "base_bdevs_list": [ 00:26:19.149 { 00:26:19.149 "name": null, 00:26:19.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:19.149 "is_configured": false, 00:26:19.149 "data_offset": 2048, 00:26:19.149 "data_size": 63488 00:26:19.149 }, 00:26:19.149 { 00:26:19.149 "name": "pt2", 00:26:19.149 "uuid": "2df2163e-f352-5de8-98af-b848d7bebdef", 00:26:19.149 "is_configured": true, 00:26:19.149 "data_offset": 2048, 00:26:19.149 "data_size": 63488 00:26:19.149 }, 00:26:19.149 { 00:26:19.149 "name": "pt3", 00:26:19.149 "uuid": "cb7a446e-cf77-57f3-83dd-0cecf822e48b", 00:26:19.149 "is_configured": true, 00:26:19.149 "data_offset": 2048, 00:26:19.149 "data_size": 63488 00:26:19.149 }, 00:26:19.149 { 00:26:19.149 "name": "pt4", 00:26:19.149 "uuid": "c6b0938a-19b2-5731-a052-2cfd1cd3aee3", 00:26:19.149 
"is_configured": true, 00:26:19.149 "data_offset": 2048, 00:26:19.149 "data_size": 63488 00:26:19.149 } 00:26:19.149 ] 00:26:19.149 }' 00:26:19.149 07:36:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:19.149 07:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.716 07:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:26:19.716 07:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:19.975 07:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:26:19.975 07:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:19.975 07:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:26:20.233 [2024-07-12 07:36:54.059104] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:20.233 07:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 3e028ddb-47ce-403a-aba3-b95c567dcd61 '!=' 3e028ddb-47ce-403a-aba3-b95c567dcd61 ']' 00:26:20.233 07:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 152550 00:26:20.233 07:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 152550 ']' 00:26:20.233 07:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 152550 00:26:20.233 07:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:26:20.233 07:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:20.233 07:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 152550 00:26:20.491 07:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:20.491 07:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:20.491 07:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 152550' 00:26:20.491 killing process with pid 152550 00:26:20.491 07:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 152550 00:26:20.492 [2024-07-12 07:36:54.119333] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:20.492 07:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 152550 00:26:20.492 [2024-07-12 07:36:54.119638] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:20.492 [2024-07-12 07:36:54.119850] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:20.492 [2024-07-12 07:36:54.119939] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:26:20.492 [2024-07-12 07:36:54.166725] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:20.750 07:36:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:26:20.750 00:26:20.750 real 0m24.962s 00:26:20.750 user 0m45.821s 00:26:20.750 sys 0m4.453s 00:26:20.750 07:36:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:20.750 07:36:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.750 ************************************ 00:26:20.750 END TEST raid_superblock_test 00:26:20.750 ************************************ 00:26:20.750 07:36:54 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:26:20.750 07:36:54 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:26:20.750 07:36:54 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:20.750 07:36:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:20.750 ************************************ 00:26:20.750 START TEST raid_read_error_test 00:26:20.750 ************************************ 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 4 read 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 
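For readers reconstructing the test flow from this trace: raid_read_error_test (and the raid_write_error_test that follows) builds its target the same way each time. Every base device is a malloc bdev wrapped first in an error bdev and then in a passthru bdev, the four passthru bdevs are assembled into a raid1 array with an on-disk superblock, and a failure is injected into one of the error bdevs while bdevperf drives I/O. A minimal sketch of that RPC sequence, assembled only from rpc.py invocations that appear verbatim in this trace (the standalone script itself is illustrative, not part of the suite, and assumes an SPDK app is already listening on the raid socket):

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

for i in 1 2 3 4; do
    # 32 MiB malloc bdev with 512-byte blocks, as created in the trace below
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
    # error bdev wrapping it; SPDK names the result EE_BaseBdev${i}_malloc
    "$rpc" -s "$sock" bdev_error_create "BaseBdev${i}_malloc"
    # passthru bdev on top, which the raid bdev will claim
    "$rpc" -s "$sock" bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
done

# raid1 over the four passthru bdevs; -s writes the superblock
"$rpc" -s "$sock" bdev_raid_create -r raid1 \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s

# inject a read failure on the first base device, as at bdev_raid.sh@827
"$rpc" -s "$sock" bdev_error_inject_error EE_BaseBdev1_malloc read failure

The test then asserts that bdevperf observed a failure rate of 0.00 ([[ 0.00 = \0\.\0\0 ]] near the end of raid_read_error_test), i.e. the injected base-device read errors are not expected to surface to consumers of raid_bdev1.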
00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.jj1l0g9kWc 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=153395 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 153395 /var/tmp/spdk-raid.sock 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 153395 ']' 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:20.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:20.750 07:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.750 [2024-07-12 07:36:54.593875] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:26:20.751 [2024-07-12 07:36:54.594954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153395 ] 00:26:21.008 [2024-07-12 07:36:54.747885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.008 [2024-07-12 07:36:54.848886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.268 [2024-07-12 07:36:54.939909] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:21.836 07:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:21.836 07:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:26:21.836 07:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:21.836 07:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:22.094 BaseBdev1_malloc 00:26:22.094 07:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:26:22.352 true 00:26:22.352 07:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:22.352 [2024-07-12 07:36:56.190311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:22.352 [2024-07-12 07:36:56.190657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:22.352 [2024-07-12 07:36:56.190739] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:26:22.352 [2024-07-12 07:36:56.190864] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:22.352 [2024-07-12 07:36:56.193624] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:22.352 [2024-07-12 07:36:56.193812] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:22.352 BaseBdev1 00:26:22.352 07:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:22.352 07:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:22.921 BaseBdev2_malloc 00:26:22.921 07:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:26:22.921 true 00:26:22.921 07:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:23.181 [2024-07-12 07:36:56.991323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:23.181 [2024-07-12 07:36:56.991649] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:23.181 [2024-07-12 07:36:56.991724] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:26:23.181 [2024-07-12 07:36:56.991853] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:23.181 [2024-07-12 07:36:56.994311] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:23.181 [2024-07-12 07:36:56.994486] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:23.181 BaseBdev2 00:26:23.181 07:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:23.181 07:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:23.439 BaseBdev3_malloc 00:26:23.439 07:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:26:23.698 true 00:26:23.698 07:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:23.956 [2024-07-12 07:36:57.643531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:23.956 [2024-07-12 07:36:57.643862] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:23.956 [2024-07-12 07:36:57.643939] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:26:23.956 [2024-07-12 07:36:57.644077] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:23.956 [2024-07-12 07:36:57.646657] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:23.956 [2024-07-12 07:36:57.646833] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:23.956 BaseBdev3 00:26:23.956 07:36:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:23.956 07:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:24.215 BaseBdev4_malloc 00:26:24.215 07:36:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:26:24.473 true 00:26:24.473 07:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:24.735 [2024-07-12 07:36:58.411907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:24.735 [2024-07-12 07:36:58.412189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:24.735 [2024-07-12 07:36:58.412262] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:24.735 [2024-07-12 07:36:58.412401] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:24.735 [2024-07-12 07:36:58.414904] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:24.735 [2024-07-12 07:36:58.415090] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:24.735 BaseBdev4 00:26:24.735 07:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:26:24.735 [2024-07-12 07:36:58.604631] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:24.735 [2024-07-12 07:36:58.607063] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:24.735 [2024-07-12 07:36:58.607270] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:24.735 [2024-07-12 07:36:58.607367] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:24.735 [2024-07-12 07:36:58.607697] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:26:24.735 [2024-07-12 07:36:58.607809] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:24.735 [2024-07-12 07:36:58.607995] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:26:24.735 [2024-07-12 07:36:58.608492] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:26:24.735 [2024-07-12 07:36:58.608604] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:26:24.735 [2024-07-12 07:36:58.608879] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:24.995 07:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:24.995 07:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:24.995 07:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:24.995 07:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:24.995 07:36:58 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:24.995 07:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:24.995 07:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:24.995 07:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:24.995 07:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:24.995 07:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:24.995 07:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.996 07:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.255 07:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:25.255 "name": "raid_bdev1", 00:26:25.255 "uuid": "774e92b0-b50f-48c7-a573-a075e1215a62", 00:26:25.255 "strip_size_kb": 0, 00:26:25.255 "state": "online", 00:26:25.255 "raid_level": "raid1", 00:26:25.255 "superblock": true, 00:26:25.255 "num_base_bdevs": 4, 00:26:25.255 "num_base_bdevs_discovered": 4, 00:26:25.255 "num_base_bdevs_operational": 4, 00:26:25.255 "base_bdevs_list": [ 00:26:25.255 { 00:26:25.255 "name": "BaseBdev1", 00:26:25.255 "uuid": "b7a7a54a-d725-5873-a52a-0f951cda058a", 00:26:25.255 "is_configured": true, 00:26:25.255 "data_offset": 2048, 00:26:25.255 "data_size": 63488 00:26:25.255 }, 00:26:25.255 { 00:26:25.255 "name": "BaseBdev2", 00:26:25.255 "uuid": "51a90d8b-1cd7-5f43-bb10-87505b769675", 00:26:25.255 "is_configured": true, 00:26:25.255 "data_offset": 2048, 00:26:25.255 "data_size": 63488 00:26:25.255 }, 00:26:25.255 { 00:26:25.255 "name": "BaseBdev3", 00:26:25.255 "uuid": "9a1dc487-eebb-5bfd-bda9-f81e1c75dd8d", 00:26:25.255 "is_configured": true, 00:26:25.255 "data_offset": 2048, 00:26:25.255 "data_size": 63488 00:26:25.255 }, 00:26:25.255 { 00:26:25.255 "name": "BaseBdev4", 00:26:25.255 "uuid": "a1e4cf68-dc01-5eaf-8371-293bf1c50990", 00:26:25.255 "is_configured": true, 00:26:25.255 "data_offset": 2048, 00:26:25.255 "data_size": 63488 00:26:25.255 } 00:26:25.255 ] 00:26:25.255 }' 00:26:25.255 07:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:25.255 07:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.822 07:36:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:26:25.822 07:36:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:25.822 [2024-07-12 07:36:59.539268] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:26:26.757 07:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:26:27.016 07:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:26:27.016 07:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:26:27.016 07:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:26:27.016 07:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # 
expected_num_base_bdevs=4 00:26:27.016 07:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:27.016 07:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:27.016 07:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:27.016 07:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:27.016 07:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:27.016 07:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:27.016 07:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:27.016 07:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:27.016 07:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:27.016 07:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:27.016 07:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.016 07:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.275 07:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:27.275 "name": "raid_bdev1", 00:26:27.275 "uuid": "774e92b0-b50f-48c7-a573-a075e1215a62", 00:26:27.275 "strip_size_kb": 0, 00:26:27.275 "state": "online", 00:26:27.275 "raid_level": "raid1", 00:26:27.275 "superblock": true, 00:26:27.275 "num_base_bdevs": 4, 00:26:27.275 "num_base_bdevs_discovered": 4, 00:26:27.275 "num_base_bdevs_operational": 4, 00:26:27.275 "base_bdevs_list": [ 00:26:27.275 { 00:26:27.275 "name": "BaseBdev1", 00:26:27.275 "uuid": "b7a7a54a-d725-5873-a52a-0f951cda058a", 00:26:27.275 "is_configured": true, 00:26:27.275 "data_offset": 2048, 00:26:27.275 "data_size": 63488 00:26:27.275 }, 00:26:27.275 { 00:26:27.275 "name": "BaseBdev2", 00:26:27.275 "uuid": "51a90d8b-1cd7-5f43-bb10-87505b769675", 00:26:27.275 "is_configured": true, 00:26:27.275 "data_offset": 2048, 00:26:27.275 "data_size": 63488 00:26:27.275 }, 00:26:27.275 { 00:26:27.275 "name": "BaseBdev3", 00:26:27.275 "uuid": "9a1dc487-eebb-5bfd-bda9-f81e1c75dd8d", 00:26:27.275 "is_configured": true, 00:26:27.275 "data_offset": 2048, 00:26:27.275 "data_size": 63488 00:26:27.275 }, 00:26:27.275 { 00:26:27.275 "name": "BaseBdev4", 00:26:27.275 "uuid": "a1e4cf68-dc01-5eaf-8371-293bf1c50990", 00:26:27.275 "is_configured": true, 00:26:27.275 "data_offset": 2048, 00:26:27.275 "data_size": 63488 00:26:27.275 } 00:26:27.275 ] 00:26:27.275 }' 00:26:27.275 07:37:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:27.275 07:37:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.839 07:37:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:28.096 [2024-07-12 07:37:01.787478] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:28.096 [2024-07-12 07:37:01.787770] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:28.096 [2024-07-12 07:37:01.790400] bdev_raid.c: 
474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:28.096 [2024-07-12 07:37:01.790556] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:28.096 [2024-07-12 07:37:01.790724] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:28.096 [2024-07-12 07:37:01.790850] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:26:28.096 0 00:26:28.096 07:37:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 153395 00:26:28.096 07:37:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 153395 ']' 00:26:28.096 07:37:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 153395 00:26:28.096 07:37:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:26:28.097 07:37:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:28.097 07:37:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 153395 00:26:28.097 07:37:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:28.097 07:37:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:28.097 07:37:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 153395' 00:26:28.097 killing process with pid 153395 00:26:28.097 07:37:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 153395 00:26:28.097 [2024-07-12 07:37:01.838817] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:28.097 07:37:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 153395 00:26:28.097 [2024-07-12 07:37:01.874107] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:28.353 07:37:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:26:28.353 07:37:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.jj1l0g9kWc 00:26:28.353 07:37:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:26:28.353 07:37:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:26:28.353 07:37:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:26:28.353 07:37:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:28.353 07:37:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:28.353 07:37:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:26:28.353 00:26:28.353 real 0m7.642s 00:26:28.353 user 0m12.107s 00:26:28.353 sys 0m1.301s 00:26:28.353 07:37:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:28.353 07:37:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:28.353 ************************************ 00:26:28.353 END TEST raid_read_error_test 00:26:28.353 ************************************ 00:26:28.353 07:37:02 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:26:28.353 07:37:02 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:26:28.353 07:37:02 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:28.353 07:37:02 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:26:28.353 ************************************ 00:26:28.353 START TEST raid_write_error_test 00:26:28.353 ************************************ 00:26:28.353 07:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 4 write 00:26:28.353 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:26:28.353 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:26:28.353 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:26:28.353 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:26:28.353 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:28.353 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:26:28.353 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:28.353 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:28.353 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.os4RRYde6M 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=153593 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 153593 /var/tmp/spdk-raid.sock 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 153593 ']' 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:28.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:28.610 07:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:28.610 [2024-07-12 07:37:02.311084] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:26:28.611 [2024-07-12 07:37:02.311591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153593 ] 00:26:28.611 [2024-07-12 07:37:02.448882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.868 [2024-07-12 07:37:02.495229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.868 [2024-07-12 07:37:02.537013] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:29.433 07:37:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:29.433 07:37:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:26:29.433 07:37:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:29.433 07:37:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:29.699 BaseBdev1_malloc 00:26:29.699 07:37:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:26:29.967 true 00:26:29.967 07:37:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:30.224 [2024-07-12 07:37:03.875305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:30.224 [2024-07-12 07:37:03.875613] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.224 [2024-07-12 07:37:03.875690] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005d80 00:26:30.224 [2024-07-12 07:37:03.875821] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.224 [2024-07-12 07:37:03.878467] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:30.224 [2024-07-12 07:37:03.878655] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:30.224 BaseBdev1 00:26:30.224 07:37:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:30.224 07:37:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:30.481 BaseBdev2_malloc 00:26:30.481 07:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:26:30.481 true 00:26:30.481 07:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:30.738 [2024-07-12 07:37:04.528396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:30.738 [2024-07-12 07:37:04.528704] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.738 [2024-07-12 07:37:04.528780] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:26:30.738 [2024-07-12 07:37:04.528904] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.738 [2024-07-12 07:37:04.531343] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:30.738 [2024-07-12 07:37:04.531518] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:30.738 BaseBdev2 00:26:30.738 07:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:30.738 07:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:30.995 BaseBdev3_malloc 00:26:30.995 07:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:26:31.253 true 00:26:31.253 07:37:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:31.510 [2024-07-12 07:37:05.236097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:31.510 [2024-07-12 07:37:05.236402] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:31.510 [2024-07-12 07:37:05.236477] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:26:31.511 [2024-07-12 07:37:05.236603] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:31.511 [2024-07-12 07:37:05.239063] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:31.511 [2024-07-12 07:37:05.239253] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:31.511 BaseBdev3 00:26:31.511 07:37:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:31.511 07:37:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:31.769 BaseBdev4_malloc 00:26:31.769 07:37:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_error_create BaseBdev4_malloc 00:26:32.027 true 00:26:32.027 07:37:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:32.027 [2024-07-12 07:37:05.881226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:32.027 [2024-07-12 07:37:05.881571] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:32.028 [2024-07-12 07:37:05.881647] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:32.028 [2024-07-12 07:37:05.881764] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:32.028 [2024-07-12 07:37:05.884214] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:32.028 [2024-07-12 07:37:05.884383] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:32.028 BaseBdev4 00:26:32.028 07:37:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:26:32.286 [2024-07-12 07:37:06.073344] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:32.286 [2024-07-12 07:37:06.075717] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:32.286 [2024-07-12 07:37:06.075921] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:32.286 [2024-07-12 07:37:06.076016] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:32.286 [2024-07-12 07:37:06.076360] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:26:32.286 [2024-07-12 07:37:06.076455] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:32.286 [2024-07-12 07:37:06.076615] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:26:32.286 [2024-07-12 07:37:06.077212] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:26:32.286 [2024-07-12 07:37:06.077335] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:26:32.286 [2024-07-12 07:37:06.077582] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:32.286 07:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:32.286 07:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:32.286 07:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:32.286 07:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:32.286 07:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:32.286 07:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:32.286 07:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:32.286 07:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:32.286 07:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:26:32.286 07:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:32.287 07:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.287 07:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.545 07:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:32.545 "name": "raid_bdev1", 00:26:32.545 "uuid": "86e4de72-372e-482f-aeb0-040469695f34", 00:26:32.545 "strip_size_kb": 0, 00:26:32.545 "state": "online", 00:26:32.545 "raid_level": "raid1", 00:26:32.545 "superblock": true, 00:26:32.545 "num_base_bdevs": 4, 00:26:32.545 "num_base_bdevs_discovered": 4, 00:26:32.545 "num_base_bdevs_operational": 4, 00:26:32.545 "base_bdevs_list": [ 00:26:32.545 { 00:26:32.545 "name": "BaseBdev1", 00:26:32.545 "uuid": "ce158925-b762-5424-b844-b8a170f2d343", 00:26:32.545 "is_configured": true, 00:26:32.545 "data_offset": 2048, 00:26:32.545 "data_size": 63488 00:26:32.545 }, 00:26:32.545 { 00:26:32.545 "name": "BaseBdev2", 00:26:32.545 "uuid": "eb3b5dfb-1c1c-5380-8b32-5ad8da9fe5eb", 00:26:32.545 "is_configured": true, 00:26:32.545 "data_offset": 2048, 00:26:32.545 "data_size": 63488 00:26:32.545 }, 00:26:32.545 { 00:26:32.545 "name": "BaseBdev3", 00:26:32.545 "uuid": "14db2c21-fd9e-5040-8e8e-aadaed30fe8c", 00:26:32.545 "is_configured": true, 00:26:32.545 "data_offset": 2048, 00:26:32.545 "data_size": 63488 00:26:32.545 }, 00:26:32.545 { 00:26:32.545 "name": "BaseBdev4", 00:26:32.545 "uuid": "6367734f-27dd-5f87-a5f4-4986c2f6b43b", 00:26:32.545 "is_configured": true, 00:26:32.545 "data_offset": 2048, 00:26:32.545 "data_size": 63488 00:26:32.545 } 00:26:32.545 ] 00:26:32.545 }' 00:26:32.545 07:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:32.545 07:37:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.111 07:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:26:33.111 07:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:33.386 [2024-07-12 07:37:07.034145] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:26:34.320 07:37:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:26:34.320 [2024-07-12 07:37:08.185462] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:26:34.320 [2024-07-12 07:37:08.185829] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:34.320 [2024-07-12 07:37:08.186129] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002600 00:26:34.578 07:37:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:26:34.578 07:37:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:26:34.578 07:37:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:26:34.578 07:37:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:26:34.578 07:37:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:34.578 07:37:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:34.578 07:37:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:34.578 07:37:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:34.578 07:37:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:34.578 07:37:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:34.578 07:37:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:34.578 07:37:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:34.578 07:37:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:34.578 07:37:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:34.578 07:37:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.578 07:37:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.578 07:37:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:34.578 "name": "raid_bdev1", 00:26:34.578 "uuid": "86e4de72-372e-482f-aeb0-040469695f34", 00:26:34.579 "strip_size_kb": 0, 00:26:34.579 "state": "online", 00:26:34.579 "raid_level": "raid1", 00:26:34.579 "superblock": true, 00:26:34.579 "num_base_bdevs": 4, 00:26:34.579 "num_base_bdevs_discovered": 3, 00:26:34.579 "num_base_bdevs_operational": 3, 00:26:34.579 "base_bdevs_list": [ 00:26:34.579 { 00:26:34.579 "name": null, 00:26:34.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:34.579 "is_configured": false, 00:26:34.579 "data_offset": 2048, 00:26:34.579 "data_size": 63488 00:26:34.579 }, 00:26:34.579 { 00:26:34.579 "name": "BaseBdev2", 00:26:34.579 "uuid": "eb3b5dfb-1c1c-5380-8b32-5ad8da9fe5eb", 00:26:34.579 "is_configured": true, 00:26:34.579 "data_offset": 2048, 00:26:34.579 "data_size": 63488 00:26:34.579 }, 00:26:34.579 { 00:26:34.579 "name": "BaseBdev3", 00:26:34.579 "uuid": "14db2c21-fd9e-5040-8e8e-aadaed30fe8c", 00:26:34.579 "is_configured": true, 00:26:34.579 "data_offset": 2048, 00:26:34.579 "data_size": 63488 00:26:34.579 }, 00:26:34.579 { 00:26:34.579 "name": "BaseBdev4", 00:26:34.579 "uuid": "6367734f-27dd-5f87-a5f4-4986c2f6b43b", 00:26:34.579 "is_configured": true, 00:26:34.579 "data_offset": 2048, 00:26:34.579 "data_size": 63488 00:26:34.579 } 00:26:34.579 ] 00:26:34.579 }' 00:26:34.579 07:37:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:34.579 07:37:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.513 07:37:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:35.513 [2024-07-12 07:37:09.312228] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:35.513 [2024-07-12 07:37:09.312558] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:35.513 [2024-07-12 07:37:09.315283] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:26:35.513 [2024-07-12 07:37:09.315541] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:35.513 [2024-07-12 07:37:09.315701] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:35.513 [2024-07-12 07:37:09.315924] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:26:35.513 0 00:26:35.513 07:37:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 153593 00:26:35.513 07:37:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 153593 ']' 00:26:35.513 07:37:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 153593 00:26:35.513 07:37:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:26:35.513 07:37:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:35.513 07:37:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 153593 00:26:35.513 07:37:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:35.513 07:37:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:35.513 07:37:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 153593' 00:26:35.513 killing process with pid 153593 00:26:35.513 07:37:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 153593 00:26:35.513 [2024-07-12 07:37:09.377839] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:35.513 07:37:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 153593 00:26:35.771 [2024-07-12 07:37:09.445640] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:36.030 07:37:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.os4RRYde6M 00:26:36.030 07:37:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:26:36.030 07:37:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:26:36.030 07:37:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:26:36.030 07:37:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:26:36.030 07:37:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:36.030 07:37:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:36.030 07:37:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:26:36.030 00:26:36.030 real 0m7.642s 00:26:36.030 user 0m12.027s 00:26:36.030 sys 0m1.254s 00:26:36.030 07:37:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:36.030 07:37:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.030 ************************************ 00:26:36.030 END TEST raid_write_error_test 00:26:36.030 ************************************ 00:26:36.289 07:37:09 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' true = true ']' 00:26:36.289 07:37:09 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:26:36.289 07:37:09 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:26:36.289 07:37:09 bdev_raid -- 
common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:26:36.289 07:37:09 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:36.289 07:37:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:36.289 ************************************ 00:26:36.289 START TEST raid_rebuild_test 00:26:36.289 ************************************ 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 false false true 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=153794 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 153794 /var/tmp/spdk-raid.sock 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@827 -- # '[' -z 153794 ']' 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:36.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:36.289 07:37:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.289 [2024-07-12 07:37:10.026428] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:26:36.289 [2024-07-12 07:37:10.026905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153794 ] 00:26:36.289 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:36.289 Zero copy mechanism will not be used. 00:26:36.289 [2024-07-12 07:37:10.171441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.548 [2024-07-12 07:37:10.255474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.548 [2024-07-12 07:37:10.336607] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:37.484 07:37:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:37.484 07:37:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # return 0 00:26:37.484 07:37:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:37.484 07:37:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:37.484 BaseBdev1_malloc 00:26:37.484 07:37:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:37.742 [2024-07-12 07:37:11.505663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:37.742 [2024-07-12 07:37:11.506072] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.742 [2024-07-12 07:37:11.506175] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:26:37.742 [2024-07-12 07:37:11.506490] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.742 [2024-07-12 07:37:11.509534] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.742 [2024-07-12 07:37:11.509744] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:37.742 BaseBdev1 00:26:37.742 07:37:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:37.742 07:37:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:38.000 BaseBdev2_malloc 00:26:38.000 07:37:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:38.258 [2024-07-12 07:37:11.994060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on BaseBdev2_malloc 00:26:38.258 [2024-07-12 07:37:11.994428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:38.258 [2024-07-12 07:37:11.994508] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:26:38.258 [2024-07-12 07:37:11.994636] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:38.258 [2024-07-12 07:37:11.997431] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:38.258 [2024-07-12 07:37:11.997605] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:38.258 BaseBdev2 00:26:38.258 07:37:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:38.517 spare_malloc 00:26:38.517 07:37:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:38.775 spare_delay 00:26:38.775 07:37:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:38.775 [2024-07-12 07:37:12.630862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:38.775 [2024-07-12 07:37:12.631257] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:38.775 [2024-07-12 07:37:12.631344] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:26:38.775 [2024-07-12 07:37:12.631467] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:38.775 [2024-07-12 07:37:12.634360] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:38.775 [2024-07-12 07:37:12.634562] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:38.775 spare 00:26:38.775 07:37:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:26:39.034 [2024-07-12 07:37:12.842979] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:39.034 [2024-07-12 07:37:12.845801] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:39.034 [2024-07-12 07:37:12.846045] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:26:39.034 [2024-07-12 07:37:12.846087] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:39.034 [2024-07-12 07:37:12.846371] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:26:39.034 [2024-07-12 07:37:12.846939] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:26:39.034 [2024-07-12 07:37:12.847051] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:26:39.034 [2024-07-12 07:37:12.847466] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:39.034 07:37:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:39.034 07:37:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:26:39.034 07:37:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:39.034 07:37:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:39.034 07:37:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:39.034 07:37:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:39.034 07:37:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:39.034 07:37:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:39.034 07:37:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:39.034 07:37:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:39.034 07:37:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.034 07:37:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:39.308 07:37:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:39.308 "name": "raid_bdev1", 00:26:39.309 "uuid": "8732d749-aed5-418f-832c-cc7bf5bf8d57", 00:26:39.309 "strip_size_kb": 0, 00:26:39.309 "state": "online", 00:26:39.309 "raid_level": "raid1", 00:26:39.309 "superblock": false, 00:26:39.309 "num_base_bdevs": 2, 00:26:39.309 "num_base_bdevs_discovered": 2, 00:26:39.309 "num_base_bdevs_operational": 2, 00:26:39.309 "base_bdevs_list": [ 00:26:39.309 { 00:26:39.309 "name": "BaseBdev1", 00:26:39.309 "uuid": "2860168d-e2a8-5615-b10f-3b1473503d96", 00:26:39.309 "is_configured": true, 00:26:39.309 "data_offset": 0, 00:26:39.309 "data_size": 65536 00:26:39.309 }, 00:26:39.309 { 00:26:39.309 "name": "BaseBdev2", 00:26:39.309 "uuid": "1503e76f-478f-5fd6-bf1f-f0127e7ec0e5", 00:26:39.309 "is_configured": true, 00:26:39.309 "data_offset": 0, 00:26:39.309 "data_size": 65536 00:26:39.309 } 00:26:39.309 ] 00:26:39.309 }' 00:26:39.309 07:37:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:39.309 07:37:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.882 07:37:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:26:39.882 07:37:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:40.141 [2024-07-12 07:37:13.999937] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:40.141 07:37:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:26:40.400 07:37:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.400 07:37:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:40.400 07:37:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:26:40.400 07:37:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:26:40.400 07:37:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:26:40.400 07:37:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:26:40.400 07:37:14 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:40.400 07:37:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:40.400 07:37:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:40.400 07:37:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:40.400 07:37:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:40.400 07:37:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:40.400 07:37:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:40.400 07:37:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:40.400 07:37:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:40.400 07:37:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:40.660 [2024-07-12 07:37:14.479935] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:26:40.660 /dev/nbd0 00:26:40.660 07:37:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:40.660 07:37:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:40.660 07:37:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:26:40.660 07:37:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:26:40.660 07:37:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:26:40.660 07:37:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:26:40.660 07:37:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:26:40.660 07:37:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:26:40.660 07:37:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:26:40.660 07:37:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:26:40.660 07:37:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:40.919 1+0 records in 00:26:40.919 1+0 records out 00:26:40.919 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000772284 s, 5.3 MB/s 00:26:40.919 07:37:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:40.919 07:37:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:26:40.919 07:37:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:40.919 07:37:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:26:40.919 07:37:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:26:40.919 07:37:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:40.919 07:37:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:40.919 07:37:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:26:40.919 07:37:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 
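The trace above maps raid_bdev1 to /dev/nbd0 and then pushes 32 MiB of random data through it. A minimal standalone sketch of the same data path, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock with raid_bdev1 configured (the rpc.py path, socket, bdev name, and dd parameters are taken from this log; invoking them by hand like this is an illustration, not part of the test harness):

    # Expose the raid bdev as a kernel block device over NBD
    sudo modprobe nbd
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        nbd_start_disk raid_bdev1 /dev/nbd0

    # write_unit_size is 1 block for raid1 in this run, so issue 512-byte
    # writes; oflag=direct bypasses the page cache so every block reaches
    # the raid module (65536 x 512 B = 32 MiB, matching the dd output below)
    dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct

    # Detach the NBD device when done
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        nbd_stop_disk /dev/nbd0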
00:26:40.919 07:37:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:26:46.210 65536+0 records in 00:26:46.210 65536+0 records out 00:26:46.210 33554432 bytes (34 MB, 32 MiB) copied, 5.07525 s, 6.6 MB/s 00:26:46.210 07:37:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:46.210 07:37:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:46.210 07:37:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:46.210 07:37:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:46.210 07:37:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:46.210 07:37:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:46.210 07:37:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:46.210 [2024-07-12 07:37:19.842433] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:46.210 07:37:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:46.210 07:37:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:46.210 07:37:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:46.210 07:37:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:46.210 07:37:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:46.210 07:37:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:46.210 07:37:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:46.210 07:37:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:46.210 07:37:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:46.470 [2024-07-12 07:37:20.097996] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:46.470 07:37:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:46.470 07:37:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:46.470 07:37:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:46.470 07:37:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:46.470 07:37:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:46.470 07:37:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:46.470 07:37:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:46.470 07:37:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:46.470 07:37:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:46.470 07:37:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:46.470 07:37:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.470 07:37:20 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.470 07:37:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:46.470 "name": "raid_bdev1", 00:26:46.470 "uuid": "8732d749-aed5-418f-832c-cc7bf5bf8d57", 00:26:46.470 "strip_size_kb": 0, 00:26:46.470 "state": "online", 00:26:46.470 "raid_level": "raid1", 00:26:46.470 "superblock": false, 00:26:46.470 "num_base_bdevs": 2, 00:26:46.470 "num_base_bdevs_discovered": 1, 00:26:46.470 "num_base_bdevs_operational": 1, 00:26:46.470 "base_bdevs_list": [ 00:26:46.470 { 00:26:46.470 "name": null, 00:26:46.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.470 "is_configured": false, 00:26:46.470 "data_offset": 0, 00:26:46.470 "data_size": 65536 00:26:46.470 }, 00:26:46.470 { 00:26:46.470 "name": "BaseBdev2", 00:26:46.470 "uuid": "1503e76f-478f-5fd6-bf1f-f0127e7ec0e5", 00:26:46.470 "is_configured": true, 00:26:46.470 "data_offset": 0, 00:26:46.470 "data_size": 65536 00:26:46.470 } 00:26:46.470 ] 00:26:46.470 }' 00:26:46.470 07:37:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:46.470 07:37:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.036 07:37:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:47.295 [2024-07-12 07:37:21.146217] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:47.295 [2024-07-12 07:37:21.154217] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06080 00:26:47.295 [2024-07-12 07:37:21.157078] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:47.295 07:37:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:26:48.673 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:48.673 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:48.673 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:48.673 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:48.673 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:48.673 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.673 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.673 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:48.673 "name": "raid_bdev1", 00:26:48.673 "uuid": "8732d749-aed5-418f-832c-cc7bf5bf8d57", 00:26:48.673 "strip_size_kb": 0, 00:26:48.673 "state": "online", 00:26:48.673 "raid_level": "raid1", 00:26:48.673 "superblock": false, 00:26:48.673 "num_base_bdevs": 2, 00:26:48.673 "num_base_bdevs_discovered": 2, 00:26:48.673 "num_base_bdevs_operational": 2, 00:26:48.673 "process": { 00:26:48.673 "type": "rebuild", 00:26:48.673 "target": "spare", 00:26:48.673 "progress": { 00:26:48.673 "blocks": 24576, 00:26:48.673 "percent": 37 00:26:48.673 } 00:26:48.673 }, 00:26:48.673 "base_bdevs_list": [ 00:26:48.673 { 00:26:48.673 "name": "spare", 00:26:48.673 "uuid": 
"9943fafe-0ffa-5a90-a94c-0a635d9fa4c8", 00:26:48.673 "is_configured": true, 00:26:48.673 "data_offset": 0, 00:26:48.673 "data_size": 65536 00:26:48.673 }, 00:26:48.673 { 00:26:48.673 "name": "BaseBdev2", 00:26:48.673 "uuid": "1503e76f-478f-5fd6-bf1f-f0127e7ec0e5", 00:26:48.673 "is_configured": true, 00:26:48.673 "data_offset": 0, 00:26:48.673 "data_size": 65536 00:26:48.673 } 00:26:48.673 ] 00:26:48.673 }' 00:26:48.673 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:48.673 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:48.673 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:48.673 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:48.673 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:48.932 [2024-07-12 07:37:22.787600] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:49.191 [2024-07-12 07:37:22.870735] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:49.191 [2024-07-12 07:37:22.871038] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:49.191 [2024-07-12 07:37:22.871088] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:49.191 [2024-07-12 07:37:22.871173] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:49.191 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:49.191 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:49.191 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:49.191 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:49.191 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:49.191 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:49.191 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:49.191 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:49.191 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:49.191 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:49.191 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.191 07:37:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.449 07:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:49.449 "name": "raid_bdev1", 00:26:49.449 "uuid": "8732d749-aed5-418f-832c-cc7bf5bf8d57", 00:26:49.449 "strip_size_kb": 0, 00:26:49.449 "state": "online", 00:26:49.449 "raid_level": "raid1", 00:26:49.449 "superblock": false, 00:26:49.449 "num_base_bdevs": 2, 00:26:49.449 "num_base_bdevs_discovered": 1, 00:26:49.449 "num_base_bdevs_operational": 1, 00:26:49.449 "base_bdevs_list": [ 00:26:49.449 { 00:26:49.449 
"name": null, 00:26:49.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.449 "is_configured": false, 00:26:49.449 "data_offset": 0, 00:26:49.449 "data_size": 65536 00:26:49.449 }, 00:26:49.449 { 00:26:49.449 "name": "BaseBdev2", 00:26:49.449 "uuid": "1503e76f-478f-5fd6-bf1f-f0127e7ec0e5", 00:26:49.449 "is_configured": true, 00:26:49.449 "data_offset": 0, 00:26:49.449 "data_size": 65536 00:26:49.449 } 00:26:49.449 ] 00:26:49.449 }' 00:26:49.449 07:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:49.449 07:37:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.015 07:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:50.015 07:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:50.015 07:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:50.015 07:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:50.015 07:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:50.015 07:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.015 07:37:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.274 07:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:50.274 "name": "raid_bdev1", 00:26:50.274 "uuid": "8732d749-aed5-418f-832c-cc7bf5bf8d57", 00:26:50.274 "strip_size_kb": 0, 00:26:50.274 "state": "online", 00:26:50.274 "raid_level": "raid1", 00:26:50.274 "superblock": false, 00:26:50.274 "num_base_bdevs": 2, 00:26:50.274 "num_base_bdevs_discovered": 1, 00:26:50.274 "num_base_bdevs_operational": 1, 00:26:50.274 "base_bdevs_list": [ 00:26:50.274 { 00:26:50.274 "name": null, 00:26:50.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.274 "is_configured": false, 00:26:50.274 "data_offset": 0, 00:26:50.274 "data_size": 65536 00:26:50.274 }, 00:26:50.274 { 00:26:50.274 "name": "BaseBdev2", 00:26:50.274 "uuid": "1503e76f-478f-5fd6-bf1f-f0127e7ec0e5", 00:26:50.274 "is_configured": true, 00:26:50.274 "data_offset": 0, 00:26:50.274 "data_size": 65536 00:26:50.274 } 00:26:50.274 ] 00:26:50.274 }' 00:26:50.274 07:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:50.274 07:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:50.274 07:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:50.274 07:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:50.274 07:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:50.532 [2024-07-12 07:37:24.329821] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:50.532 [2024-07-12 07:37:24.334123] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06220 00:26:50.532 [2024-07-12 07:37:24.336328] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:50.532 07:37:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:51.910 
07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:51.910 "name": "raid_bdev1", 00:26:51.910 "uuid": "8732d749-aed5-418f-832c-cc7bf5bf8d57", 00:26:51.910 "strip_size_kb": 0, 00:26:51.910 "state": "online", 00:26:51.910 "raid_level": "raid1", 00:26:51.910 "superblock": false, 00:26:51.910 "num_base_bdevs": 2, 00:26:51.910 "num_base_bdevs_discovered": 2, 00:26:51.910 "num_base_bdevs_operational": 2, 00:26:51.910 "process": { 00:26:51.910 "type": "rebuild", 00:26:51.910 "target": "spare", 00:26:51.910 "progress": { 00:26:51.910 "blocks": 24576, 00:26:51.910 "percent": 37 00:26:51.910 } 00:26:51.910 }, 00:26:51.910 "base_bdevs_list": [ 00:26:51.910 { 00:26:51.910 "name": "spare", 00:26:51.910 "uuid": "9943fafe-0ffa-5a90-a94c-0a635d9fa4c8", 00:26:51.910 "is_configured": true, 00:26:51.910 "data_offset": 0, 00:26:51.910 "data_size": 65536 00:26:51.910 }, 00:26:51.910 { 00:26:51.910 "name": "BaseBdev2", 00:26:51.910 "uuid": "1503e76f-478f-5fd6-bf1f-f0127e7ec0e5", 00:26:51.910 "is_configured": true, 00:26:51.910 "data_offset": 0, 00:26:51.910 "data_size": 65536 00:26:51.910 } 00:26:51.910 ] 00:26:51.910 }' 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=756 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:51.910 07:37:25 
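`local timeout=756` and the `(( SECONDS < timeout ))` guard are the bounding pattern for this whole phase: bash's builtin SECONDS counter is checked against a precomputed absolute deadline, and each pass re-reads the raid JSON, asserts that `.process.type`/`.process.target` still read rebuild/spare, and sleeps one second before polling again (progress climbs through 37%, 50%, and 93% below). A condensed sketch of that loop; the 60-second budget is a hypothetical stand-in, since the log only shows the already-computed deadline of 756:

    # Hedged sketch of the bounded rebuild poll; $RPC as in the earlier sketch.
    timeout=$((SECONDS + 60))      # hypothetical budget; this run's deadline was 756
    while (( SECONDS < timeout )); do
        ptype=$($RPC bdev_raid_get_bdevs all \
                | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
        [[ $ptype == rebuild ]] || break   # the process block disappears when rebuild ends
        sleep 1                            # matches the one-second poll interval above
    done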
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.910 07:37:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.184 07:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:52.184 "name": "raid_bdev1", 00:26:52.184 "uuid": "8732d749-aed5-418f-832c-cc7bf5bf8d57", 00:26:52.184 "strip_size_kb": 0, 00:26:52.184 "state": "online", 00:26:52.184 "raid_level": "raid1", 00:26:52.184 "superblock": false, 00:26:52.184 "num_base_bdevs": 2, 00:26:52.184 "num_base_bdevs_discovered": 2, 00:26:52.184 "num_base_bdevs_operational": 2, 00:26:52.184 "process": { 00:26:52.184 "type": "rebuild", 00:26:52.184 "target": "spare", 00:26:52.184 "progress": { 00:26:52.184 "blocks": 32768, 00:26:52.184 "percent": 50 00:26:52.184 } 00:26:52.184 }, 00:26:52.184 "base_bdevs_list": [ 00:26:52.184 { 00:26:52.184 "name": "spare", 00:26:52.184 "uuid": "9943fafe-0ffa-5a90-a94c-0a635d9fa4c8", 00:26:52.184 "is_configured": true, 00:26:52.184 "data_offset": 0, 00:26:52.184 "data_size": 65536 00:26:52.184 }, 00:26:52.184 { 00:26:52.184 "name": "BaseBdev2", 00:26:52.184 "uuid": "1503e76f-478f-5fd6-bf1f-f0127e7ec0e5", 00:26:52.184 "is_configured": true, 00:26:52.184 "data_offset": 0, 00:26:52.184 "data_size": 65536 00:26:52.184 } 00:26:52.184 ] 00:26:52.184 }' 00:26:52.184 07:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:52.454 07:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:52.454 07:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:52.454 07:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:52.454 07:37:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:53.390 07:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:53.390 07:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:53.390 07:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:53.390 07:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:53.390 07:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:53.390 07:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:53.390 07:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.390 07:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:53.649 07:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:53.649 "name": "raid_bdev1", 00:26:53.649 "uuid": "8732d749-aed5-418f-832c-cc7bf5bf8d57", 00:26:53.649 "strip_size_kb": 0, 00:26:53.649 "state": "online", 00:26:53.649 "raid_level": "raid1", 00:26:53.649 "superblock": false, 00:26:53.649 "num_base_bdevs": 2, 00:26:53.649 "num_base_bdevs_discovered": 2, 00:26:53.649 "num_base_bdevs_operational": 2, 00:26:53.649 "process": { 00:26:53.649 "type": "rebuild", 
00:26:53.649 "target": "spare", 00:26:53.649 "progress": { 00:26:53.649 "blocks": 61440, 00:26:53.649 "percent": 93 00:26:53.649 } 00:26:53.649 }, 00:26:53.649 "base_bdevs_list": [ 00:26:53.649 { 00:26:53.649 "name": "spare", 00:26:53.649 "uuid": "9943fafe-0ffa-5a90-a94c-0a635d9fa4c8", 00:26:53.649 "is_configured": true, 00:26:53.649 "data_offset": 0, 00:26:53.649 "data_size": 65536 00:26:53.649 }, 00:26:53.649 { 00:26:53.649 "name": "BaseBdev2", 00:26:53.649 "uuid": "1503e76f-478f-5fd6-bf1f-f0127e7ec0e5", 00:26:53.649 "is_configured": true, 00:26:53.649 "data_offset": 0, 00:26:53.649 "data_size": 65536 00:26:53.649 } 00:26:53.649 ] 00:26:53.649 }' 00:26:53.649 07:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:53.649 07:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:53.649 07:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:53.649 07:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:53.649 07:37:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:53.908 [2024-07-12 07:37:27.555105] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:53.908 [2024-07-12 07:37:27.555368] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:53.908 [2024-07-12 07:37:27.555523] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:54.842 07:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:54.842 07:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:54.842 07:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:54.842 07:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:54.842 07:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:54.842 07:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:54.842 07:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.842 07:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.100 07:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:55.100 "name": "raid_bdev1", 00:26:55.100 "uuid": "8732d749-aed5-418f-832c-cc7bf5bf8d57", 00:26:55.100 "strip_size_kb": 0, 00:26:55.100 "state": "online", 00:26:55.100 "raid_level": "raid1", 00:26:55.100 "superblock": false, 00:26:55.100 "num_base_bdevs": 2, 00:26:55.100 "num_base_bdevs_discovered": 2, 00:26:55.100 "num_base_bdevs_operational": 2, 00:26:55.100 "base_bdevs_list": [ 00:26:55.100 { 00:26:55.100 "name": "spare", 00:26:55.100 "uuid": "9943fafe-0ffa-5a90-a94c-0a635d9fa4c8", 00:26:55.100 "is_configured": true, 00:26:55.100 "data_offset": 0, 00:26:55.100 "data_size": 65536 00:26:55.100 }, 00:26:55.100 { 00:26:55.100 "name": "BaseBdev2", 00:26:55.100 "uuid": "1503e76f-478f-5fd6-bf1f-f0127e7ec0e5", 00:26:55.100 "is_configured": true, 00:26:55.100 "data_offset": 0, 00:26:55.100 "data_size": 65536 00:26:55.100 } 00:26:55.100 ] 00:26:55.100 }' 00:26:55.100 07:37:28 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:55.100 07:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:55.100 07:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:55.100 07:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:55.100 07:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:26:55.100 07:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:55.100 07:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:55.100 07:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:55.100 07:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:55.100 07:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:55.100 07:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.100 07:37:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.360 07:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:55.360 "name": "raid_bdev1", 00:26:55.360 "uuid": "8732d749-aed5-418f-832c-cc7bf5bf8d57", 00:26:55.360 "strip_size_kb": 0, 00:26:55.360 "state": "online", 00:26:55.360 "raid_level": "raid1", 00:26:55.360 "superblock": false, 00:26:55.360 "num_base_bdevs": 2, 00:26:55.360 "num_base_bdevs_discovered": 2, 00:26:55.360 "num_base_bdevs_operational": 2, 00:26:55.360 "base_bdevs_list": [ 00:26:55.360 { 00:26:55.360 "name": "spare", 00:26:55.360 "uuid": "9943fafe-0ffa-5a90-a94c-0a635d9fa4c8", 00:26:55.360 "is_configured": true, 00:26:55.360 "data_offset": 0, 00:26:55.360 "data_size": 65536 00:26:55.360 }, 00:26:55.360 { 00:26:55.360 "name": "BaseBdev2", 00:26:55.360 "uuid": "1503e76f-478f-5fd6-bf1f-f0127e7ec0e5", 00:26:55.360 "is_configured": true, 00:26:55.360 "data_offset": 0, 00:26:55.360 "data_size": 65536 00:26:55.360 } 00:26:55.360 ] 00:26:55.360 }' 00:26:55.360 07:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:55.360 07:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:55.360 07:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:55.360 07:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:55.360 07:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:55.360 07:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:55.360 07:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:55.360 07:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:55.360 07:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:55.360 07:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:55.360 07:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:55.360 07:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 
-- # local num_base_bdevs 00:26:55.360 07:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:55.360 07:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:55.360 07:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.360 07:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.928 07:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:55.928 "name": "raid_bdev1", 00:26:55.928 "uuid": "8732d749-aed5-418f-832c-cc7bf5bf8d57", 00:26:55.928 "strip_size_kb": 0, 00:26:55.928 "state": "online", 00:26:55.928 "raid_level": "raid1", 00:26:55.928 "superblock": false, 00:26:55.928 "num_base_bdevs": 2, 00:26:55.928 "num_base_bdevs_discovered": 2, 00:26:55.928 "num_base_bdevs_operational": 2, 00:26:55.928 "base_bdevs_list": [ 00:26:55.928 { 00:26:55.928 "name": "spare", 00:26:55.928 "uuid": "9943fafe-0ffa-5a90-a94c-0a635d9fa4c8", 00:26:55.928 "is_configured": true, 00:26:55.928 "data_offset": 0, 00:26:55.928 "data_size": 65536 00:26:55.928 }, 00:26:55.928 { 00:26:55.928 "name": "BaseBdev2", 00:26:55.928 "uuid": "1503e76f-478f-5fd6-bf1f-f0127e7ec0e5", 00:26:55.928 "is_configured": true, 00:26:55.928 "data_offset": 0, 00:26:55.928 "data_size": 65536 00:26:55.928 } 00:26:55.928 ] 00:26:55.928 }' 00:26:55.928 07:37:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:55.928 07:37:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:56.187 07:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:56.446 [2024-07-12 07:37:30.293614] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:56.446 [2024-07-12 07:37:30.293653] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:56.446 [2024-07-12 07:37:30.293774] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:56.446 [2024-07-12 07:37:30.293852] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:56.446 [2024-07-12 07:37:30.293864] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:26:56.446 07:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:56.446 07:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:26:57.015 07:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:26:57.015 07:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:26:57.015 07:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:26:57.015 07:37:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:57.015 07:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:57.015 07:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:57.015 07:37:30 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:26:57.015 07:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:57.015 07:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:57.015 07:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:57.015 07:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:57.015 07:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:57.015 07:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:57.015 /dev/nbd0 00:26:57.015 07:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:57.015 07:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:57.015 07:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:26:57.015 07:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:26:57.015 07:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:26:57.015 07:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:26:57.015 07:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:26:57.273 07:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:26:57.273 07:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:26:57.273 07:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:26:57.273 07:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:57.273 1+0 records in 00:26:57.273 1+0 records out 00:26:57.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000945869 s, 4.3 MB/s 00:26:57.273 07:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:57.273 07:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:26:57.273 07:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:57.273 07:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:26:57.273 07:37:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:26:57.273 07:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:57.273 07:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:57.273 07:37:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:57.531 /dev/nbd1 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:26:57.531 
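The `waitfornbd` calls traced here are the readiness gate for each exported device: up to 20 iterations of `grep -q -w <name> /proc/partitions` wait for the kernel to publish the device, then a single direct-I/O 4 KiB `dd` read (checked via `stat -c %s` for a non-zero size) proves the device actually services I/O before the test touches it. A simplified sketch of that helper, assuming the retry bound and block size seen in this run (the real helper lives in common/autotest_common.sh and is traced verbatim above):

    # Hedged sketch of the waitfornbd pattern traced above for nbd0/nbd1.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do                # retry bound from this run
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                  # assumed back-off; not in the trace
        done
        # Prove the device answers a real read: one direct-I/O 4 KiB block.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                               # non-empty copy => device is live
    }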
07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:57.531 1+0 records in 00:26:57.531 1+0 records out 00:26:57.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604366 s, 6.8 MB/s 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:57.531 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:57.789 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:57.789 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:57.789 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:57.789 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:57.789 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:57.789 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:57.789 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:57.789 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:57.789 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:57.789 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:58.048 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:58.048 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:58.048 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:58.048 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:58.048 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:58.048 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:58.048 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:58.048 07:37:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:58.048 07:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:26:58.048 07:37:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 153794 00:26:58.048 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@946 -- # '[' -z 153794 ']' 00:26:58.048 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # kill -0 153794 00:26:58.048 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@951 -- # uname 00:26:58.048 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:58.048 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 153794 00:26:58.048 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:58.048 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:58.048 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 153794' 00:26:58.048 killing process with pid 153794 00:26:58.048 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@965 -- # kill 153794 00:26:58.048 Received shutdown signal, test time was about 60.000000 seconds 00:26:58.049 00:26:58.049 Latency(us) 00:26:58.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:58.049 =================================================================================================================== 00:26:58.049 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:58.049 [2024-07-12 07:37:31.866192] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:58.049 07:37:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # wait 153794 00:26:58.049 [2024-07-12 07:37:31.896551] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:58.307 07:37:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:26:58.307 00:26:58.307 real 0m22.208s 00:26:58.307 user 0m30.036s 00:26:58.307 sys 0m5.063s 00:26:58.307 07:37:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:58.307 07:37:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.307 ************************************ 00:26:58.307 END TEST raid_rebuild_test 00:26:58.307 ************************************ 00:26:58.566 07:37:32 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:26:58.566 07:37:32 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:26:58.566 07:37:32 bdev_raid -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:26:58.566 07:37:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:58.566 ************************************ 00:26:58.566 START TEST raid_rebuild_test_sb 00:26:58.566 ************************************ 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false true 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=154345 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 154345 /var/tmp/spdk-raid.sock 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@827 -- # '[' -z 154345 ']' 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:58.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:58.566 07:37:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.566 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:58.566 Zero copy mechanism will not be used. 00:26:58.566 [2024-07-12 07:37:32.301879] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:26:58.566 [2024-07-12 07:37:32.302043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154345 ] 00:26:58.566 [2024-07-12 07:37:32.446323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.825 [2024-07-12 07:37:32.491750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.825 [2024-07-12 07:37:32.533139] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:58.825 07:37:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:58.825 07:37:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # return 0 00:26:58.825 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:58.825 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:59.083 BaseBdev1_malloc 00:26:59.083 07:37:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:59.339 [2024-07-12 07:37:33.062764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:59.339 [2024-07-12 07:37:33.062876] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:59.339 [2024-07-12 07:37:33.062926] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:26:59.339 [2024-07-12 07:37:33.062994] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:59.339 [2024-07-12 07:37:33.065750] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:59.339 [2024-07-12 07:37:33.065815] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:59.339 BaseBdev1 00:26:59.339 07:37:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:59.339 07:37:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:59.596 BaseBdev2_malloc 00:26:59.596 07:37:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:26:59.855 [2024-07-12 07:37:33.499888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:59.855 [2024-07-12 07:37:33.499972] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:59.855 [2024-07-12 07:37:33.500009] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:26:59.855 [2024-07-12 07:37:33.500049] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:59.855 [2024-07-12 07:37:33.502401] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:59.855 [2024-07-12 07:37:33.502453] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:59.855 BaseBdev2 00:26:59.855 07:37:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:59.855 spare_malloc 00:26:59.855 07:37:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:00.115 spare_delay 00:27:00.115 07:37:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:00.373 [2024-07-12 07:37:34.072925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:00.373 [2024-07-12 07:37:34.073004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:00.373 [2024-07-12 07:37:34.073041] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:27:00.373 [2024-07-12 07:37:34.073080] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:00.373 [2024-07-12 07:37:34.075530] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:00.373 [2024-07-12 07:37:34.075599] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:00.373 spare 00:27:00.373 07:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:00.632 [2024-07-12 07:37:34.301038] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:00.632 [2024-07-12 07:37:34.303139] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:00.632 [2024-07-12 07:37:34.303356] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:27:00.632 [2024-07-12 07:37:34.303369] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:00.632 [2024-07-12 07:37:34.303522] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:27:00.632 [2024-07-12 07:37:34.303900] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:27:00.632 [2024-07-12 07:37:34.303910] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:27:00.632 [2024-07-12 07:37:34.304054] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:00.632 07:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:00.632 07:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:00.632 07:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:00.632 07:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:00.632 07:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:00.632 07:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:00.632 07:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:00.632 07:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:00.632 07:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:00.632 07:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:00.632 07:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.632 07:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:00.632 07:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:00.632 "name": "raid_bdev1", 00:27:00.632 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:00.632 "strip_size_kb": 0, 00:27:00.632 "state": "online", 00:27:00.632 "raid_level": "raid1", 00:27:00.632 "superblock": true, 00:27:00.632 "num_base_bdevs": 2, 00:27:00.632 "num_base_bdevs_discovered": 2, 00:27:00.632 "num_base_bdevs_operational": 2, 00:27:00.632 "base_bdevs_list": [ 00:27:00.632 { 00:27:00.632 "name": "BaseBdev1", 00:27:00.632 "uuid": "99ccb51d-9438-5fb5-bae4-6918e7b332b4", 00:27:00.632 "is_configured": true, 00:27:00.632 "data_offset": 2048, 00:27:00.632 "data_size": 63488 00:27:00.632 }, 00:27:00.632 { 00:27:00.632 "name": "BaseBdev2", 00:27:00.632 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:00.632 "is_configured": true, 00:27:00.632 "data_offset": 2048, 00:27:00.632 "data_size": 63488 00:27:00.632 } 00:27:00.632 ] 00:27:00.632 }' 00:27:00.632 07:37:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:00.632 07:37:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:01.200 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:01.200 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:27:01.459 [2024-07-12 07:37:35.281380] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:01.459 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:27:01.459 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.459 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:01.717 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:27:01.717 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:27:01.718 07:37:35 
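The numbers read back here make the superblock accounting concrete: each base bdev is a 32 MiB malloc bdev with 512-byte blocks (65536 blocks), creating the array with `-s` reserves the first 2048 blocks of every base bdev for the on-disk superblock (2048 x 512 B = 1 MiB), and the usable size therefore drops to 65536 - 2048 = 63488 blocks, exactly the data_offset/data_size pair in every JSON dump of this test. A sketch of the same readback, assuming this run's socket:

    # Hedged sketch: read back raid size and superblock offset (values from this run).
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    raid_bdev_size=$($RPC bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks')  # 63488
    data_offset=$($RPC bdev_raid_get_bdevs all \
                  | jq -r '.[].base_bdevs_list[0].data_offset')                   # 2048
    # Sanity: 65536 total blocks - 2048 superblock blocks = 63488 usable blocks.
    (( raid_bdev_size == 65536 - data_offset )) && echo "superblock accounting holds"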
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:27:01.718 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:27:01.718 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:01.718 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:01.718 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:01.718 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:01.718 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:01.718 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:01.718 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:27:01.718 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:01.718 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:01.718 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:01.976 [2024-07-12 07:37:35.741304] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:27:01.976 /dev/nbd0 00:27:01.976 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:01.976 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:01.976 07:37:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:27:01.976 07:37:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:27:01.976 07:37:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:27:01.976 07:37:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:27:01.976 07:37:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:27:01.976 07:37:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:27:01.976 07:37:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:27:01.976 07:37:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:27:01.976 07:37:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:01.976 1+0 records in 00:27:01.976 1+0 records out 00:27:01.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320859 s, 12.8 MB/s 00:27:01.976 07:37:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:01.976 07:37:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:27:01.976 07:37:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:01.976 07:37:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:27:01.976 07:37:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:27:01.976 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 
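Exporting raid_bdev1 through `nbd_start_disk` hands it to the kernel NBD driver so ordinary block tools can drive it, and the `dd` fill that follows writes every usable block; the byte count lines up with the superblock arithmetic above: 63488 blocks x 512 B = 32,505,856 bytes (about 31 MiB), exactly the transfer logged below. Note also that raid1 falls through to `write_unit_size=1` block, since there is no stripe to align to. A sketch of the export-and-fill sequence, reusing the helpers sketched earlier:

    # Hedged sketch of the NBD export + full-device fill happening around this point.
    $RPC nbd_start_disk raid_bdev1 /dev/nbd0
    waitfornbd nbd0                      # readiness helper sketched earlier
    # raid1 takes write_unit_size=1 block, so a plain 512 B block size is fine:
    dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
    #   63488 * 512 B = 32,505,856 B (~31 MiB), matching the transfer logged below
    $RPC nbd_stop_disk /dev/nbd0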
00:27:01.976 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:01.976 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:27:01.976 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:27:01.976 07:37:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:27:07.251 63488+0 records in 00:27:07.251 63488+0 records out 00:27:07.251 32505856 bytes (33 MB, 31 MiB) copied, 4.87816 s, 6.7 MB/s 00:27:07.251 07:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:07.251 07:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:07.251 07:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:07.251 07:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:07.251 07:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:27:07.251 07:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:07.251 07:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:07.251 [2024-07-12 07:37:40.944015] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:07.251 07:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:07.251 07:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:07.251 07:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:07.251 07:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:07.251 07:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:07.251 07:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:07.251 07:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:07.251 07:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:07.251 07:37:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:07.509 [2024-07-12 07:37:41.143653] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:07.509 07:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:07.509 07:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:07.509 07:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:07.509 07:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:07.509 07:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:07.509 07:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:07.509 07:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:07.509 07:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:07.509 07:37:41 
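The locals being set up at this point belong to `verify_raid_bdev_state`, the assertion helper invoked after every topology change: it pulls one bdev's record out of `bdev_raid_get_bdevs all` with the `select(.name == ...)` jq filter and compares state, RAID level, strip size, and the operational base bdev count against the caller's expectations. A condensed sketch (the full helper in test/bdev/bdev_raid.sh also checks `num_base_bdevs_discovered` and walks the base bdev list):

    # Condensed, hedged sketch of verify_raid_bdev_state; the real helper does
    # strictly more checking than this.
    verify_raid_bdev_state() {
        local name=$1 expected_state=$2 raid_level=$3 strip_size=$4 operational=$5
        local info
        info=$($RPC bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
        [[ $(jq -r .state         <<< "$info") == "$expected_state" ]] &&
        [[ $(jq -r .raid_level    <<< "$info") == "$raid_level" ]] &&
        [[ $(jq -r .strip_size_kb <<< "$info") == "$strip_size" ]] &&
        [[ $(jq -r .num_base_bdevs_operational <<< "$info") == "$operational" ]]
    }
    # The call traced here: degraded raid1, strip size 0, one operational base bdev.
    verify_raid_bdev_state raid_bdev1 online raid1 0 1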
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:07.509 07:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:07.509 07:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.509 07:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:07.767 07:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:07.767 "name": "raid_bdev1", 00:27:07.767 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:07.767 "strip_size_kb": 0, 00:27:07.767 "state": "online", 00:27:07.767 "raid_level": "raid1", 00:27:07.767 "superblock": true, 00:27:07.767 "num_base_bdevs": 2, 00:27:07.767 "num_base_bdevs_discovered": 1, 00:27:07.767 "num_base_bdevs_operational": 1, 00:27:07.767 "base_bdevs_list": [ 00:27:07.767 { 00:27:07.767 "name": null, 00:27:07.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:07.767 "is_configured": false, 00:27:07.767 "data_offset": 2048, 00:27:07.767 "data_size": 63488 00:27:07.767 }, 00:27:07.767 { 00:27:07.767 "name": "BaseBdev2", 00:27:07.767 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:07.767 "is_configured": true, 00:27:07.767 "data_offset": 2048, 00:27:07.767 "data_size": 63488 00:27:07.767 } 00:27:07.767 ] 00:27:07.767 }' 00:27:07.767 07:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:07.767 07:37:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:08.332 07:37:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:08.332 [2024-07-12 07:37:42.159819] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:08.332 [2024-07-12 07:37:42.167678] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e0e0 00:27:08.332 [2024-07-12 07:37:42.170349] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:08.332 07:37:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:27:09.707 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:09.707 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:09.707 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:09.707 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:09.707 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:09.707 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:09.707 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:09.707 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:09.707 "name": "raid_bdev1", 00:27:09.707 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:09.707 "strip_size_kb": 0, 00:27:09.707 "state": "online", 00:27:09.707 "raid_level": "raid1", 00:27:09.707 "superblock": true, 00:27:09.707 
"num_base_bdevs": 2, 00:27:09.707 "num_base_bdevs_discovered": 2, 00:27:09.707 "num_base_bdevs_operational": 2, 00:27:09.707 "process": { 00:27:09.707 "type": "rebuild", 00:27:09.707 "target": "spare", 00:27:09.707 "progress": { 00:27:09.707 "blocks": 24576, 00:27:09.707 "percent": 38 00:27:09.707 } 00:27:09.707 }, 00:27:09.707 "base_bdevs_list": [ 00:27:09.707 { 00:27:09.707 "name": "spare", 00:27:09.707 "uuid": "b442edaa-98a4-5a6c-8590-94316bf2911e", 00:27:09.707 "is_configured": true, 00:27:09.707 "data_offset": 2048, 00:27:09.707 "data_size": 63488 00:27:09.707 }, 00:27:09.707 { 00:27:09.707 "name": "BaseBdev2", 00:27:09.707 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:09.707 "is_configured": true, 00:27:09.707 "data_offset": 2048, 00:27:09.707 "data_size": 63488 00:27:09.707 } 00:27:09.707 ] 00:27:09.707 }' 00:27:09.707 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:09.707 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:09.707 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:09.707 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:09.707 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:09.965 [2024-07-12 07:37:43.788676] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:10.224 [2024-07-12 07:37:43.884612] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:10.224 [2024-07-12 07:37:43.884727] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:10.224 [2024-07-12 07:37:43.884746] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:10.224 [2024-07-12 07:37:43.884754] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:10.224 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:10.224 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:10.224 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:10.224 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:10.224 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:10.224 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:10.224 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:10.224 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:10.224 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:10.224 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:10.224 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.224 07:37:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:10.483 07:37:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:10.483 "name": "raid_bdev1", 00:27:10.483 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:10.483 "strip_size_kb": 0, 00:27:10.483 "state": "online", 00:27:10.483 "raid_level": "raid1", 00:27:10.483 "superblock": true, 00:27:10.483 "num_base_bdevs": 2, 00:27:10.483 "num_base_bdevs_discovered": 1, 00:27:10.483 "num_base_bdevs_operational": 1, 00:27:10.483 "base_bdevs_list": [ 00:27:10.483 { 00:27:10.483 "name": null, 00:27:10.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:10.483 "is_configured": false, 00:27:10.483 "data_offset": 2048, 00:27:10.483 "data_size": 63488 00:27:10.483 }, 00:27:10.483 { 00:27:10.483 "name": "BaseBdev2", 00:27:10.483 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:10.483 "is_configured": true, 00:27:10.483 "data_offset": 2048, 00:27:10.483 "data_size": 63488 00:27:10.483 } 00:27:10.483 ] 00:27:10.483 }' 00:27:10.483 07:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:10.483 07:37:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:11.054 07:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:11.054 07:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:11.054 07:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:11.054 07:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:11.054 07:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:11.054 07:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.054 07:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.313 07:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:11.313 "name": "raid_bdev1", 00:27:11.313 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:11.313 "strip_size_kb": 0, 00:27:11.313 "state": "online", 00:27:11.313 "raid_level": "raid1", 00:27:11.313 "superblock": true, 00:27:11.313 "num_base_bdevs": 2, 00:27:11.313 "num_base_bdevs_discovered": 1, 00:27:11.313 "num_base_bdevs_operational": 1, 00:27:11.313 "base_bdevs_list": [ 00:27:11.313 { 00:27:11.313 "name": null, 00:27:11.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:11.313 "is_configured": false, 00:27:11.313 "data_offset": 2048, 00:27:11.313 "data_size": 63488 00:27:11.313 }, 00:27:11.313 { 00:27:11.313 "name": "BaseBdev2", 00:27:11.313 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:11.313 "is_configured": true, 00:27:11.313 "data_offset": 2048, 00:27:11.313 "data_size": 63488 00:27:11.313 } 00:27:11.313 ] 00:27:11.313 }' 00:27:11.313 07:37:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:11.313 07:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:11.313 07:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:11.313 07:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:11.313 07:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:11.573 [2024-07-12 07:37:45.264957] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:11.573 [2024-07-12 07:37:45.272538] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e280 00:27:11.573 [2024-07-12 07:37:45.274970] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:11.573 07:37:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:12.508 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:12.508 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:12.508 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:12.508 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:12.509 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:12.509 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.509 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.768 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:12.768 "name": "raid_bdev1", 00:27:12.768 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:12.768 "strip_size_kb": 0, 00:27:12.768 "state": "online", 00:27:12.768 "raid_level": "raid1", 00:27:12.768 "superblock": true, 00:27:12.768 "num_base_bdevs": 2, 00:27:12.768 "num_base_bdevs_discovered": 2, 00:27:12.768 "num_base_bdevs_operational": 2, 00:27:12.768 "process": { 00:27:12.768 "type": "rebuild", 00:27:12.768 "target": "spare", 00:27:12.768 "progress": { 00:27:12.768 "blocks": 24576, 00:27:12.768 "percent": 38 00:27:12.768 } 00:27:12.768 }, 00:27:12.768 "base_bdevs_list": [ 00:27:12.768 { 00:27:12.768 "name": "spare", 00:27:12.768 "uuid": "b442edaa-98a4-5a6c-8590-94316bf2911e", 00:27:12.768 "is_configured": true, 00:27:12.768 "data_offset": 2048, 00:27:12.768 "data_size": 63488 00:27:12.768 }, 00:27:12.768 { 00:27:12.768 "name": "BaseBdev2", 00:27:12.768 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:12.768 "is_configured": true, 00:27:12.768 "data_offset": 2048, 00:27:12.768 "data_size": 63488 00:27:12.768 } 00:27:12.768 ] 00:27:12.768 }' 00:27:12.768 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:12.768 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:12.768 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:12.768 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:12.768 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:27:12.768 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:27:12.768 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:27:12.768 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:27:12.768 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # 
'[' raid1 = raid1 ']' 00:27:12.768 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:27:12.768 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=777 00:27:12.768 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:12.768 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:12.768 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:13.029 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:13.029 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:13.029 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:13.029 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.029 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.029 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:13.029 "name": "raid_bdev1", 00:27:13.029 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:13.029 "strip_size_kb": 0, 00:27:13.029 "state": "online", 00:27:13.029 "raid_level": "raid1", 00:27:13.029 "superblock": true, 00:27:13.029 "num_base_bdevs": 2, 00:27:13.029 "num_base_bdevs_discovered": 2, 00:27:13.029 "num_base_bdevs_operational": 2, 00:27:13.029 "process": { 00:27:13.029 "type": "rebuild", 00:27:13.029 "target": "spare", 00:27:13.029 "progress": { 00:27:13.029 "blocks": 30720, 00:27:13.029 "percent": 48 00:27:13.029 } 00:27:13.029 }, 00:27:13.029 "base_bdevs_list": [ 00:27:13.029 { 00:27:13.029 "name": "spare", 00:27:13.029 "uuid": "b442edaa-98a4-5a6c-8590-94316bf2911e", 00:27:13.029 "is_configured": true, 00:27:13.029 "data_offset": 2048, 00:27:13.029 "data_size": 63488 00:27:13.029 }, 00:27:13.029 { 00:27:13.029 "name": "BaseBdev2", 00:27:13.029 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:13.029 "is_configured": true, 00:27:13.029 "data_offset": 2048, 00:27:13.029 "data_size": 63488 00:27:13.029 } 00:27:13.029 ] 00:27:13.029 }' 00:27:13.029 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:13.029 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:13.029 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:13.288 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:13.288 07:37:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:14.227 07:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:14.227 07:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:14.227 07:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:14.227 07:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:14.227 07:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:14.227 07:37:47 bdev_raid.raid_rebuild_test_sb -- 
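The "[: =: unary operator expected" message captured a few lines above is a genuine shell bug recorded by the run, not an RPC failure: at bdev_raid.sh line 665 an unquoted variable expanded to nothing, so the single-bracket test collapsed to '[' = false ']' and test saw a dangling "=". The script keeps going because a test that errors out is simply treated as false. A reproduction with the usual fixes; flag is a hypothetical stand-in for whichever variable was empty:

    flag=
    [ $flag = false ]     # expands to: [ = false ]  -> "[: =: unary operator expected"
    [ "$flag" = false ]   # quoted: compares "" with "false", returns 1 with no error
    [[ $flag = false ]]   # [[ ]] tolerates the empty expansion even unquoted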
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:14.227 07:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:14.227 07:37:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:14.486 07:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:14.486 "name": "raid_bdev1", 00:27:14.486 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:14.486 "strip_size_kb": 0, 00:27:14.486 "state": "online", 00:27:14.486 "raid_level": "raid1", 00:27:14.486 "superblock": true, 00:27:14.486 "num_base_bdevs": 2, 00:27:14.486 "num_base_bdevs_discovered": 2, 00:27:14.486 "num_base_bdevs_operational": 2, 00:27:14.486 "process": { 00:27:14.486 "type": "rebuild", 00:27:14.486 "target": "spare", 00:27:14.486 "progress": { 00:27:14.486 "blocks": 59392, 00:27:14.486 "percent": 93 00:27:14.486 } 00:27:14.486 }, 00:27:14.486 "base_bdevs_list": [ 00:27:14.486 { 00:27:14.486 "name": "spare", 00:27:14.486 "uuid": "b442edaa-98a4-5a6c-8590-94316bf2911e", 00:27:14.486 "is_configured": true, 00:27:14.486 "data_offset": 2048, 00:27:14.486 "data_size": 63488 00:27:14.486 }, 00:27:14.486 { 00:27:14.486 "name": "BaseBdev2", 00:27:14.486 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:14.486 "is_configured": true, 00:27:14.486 "data_offset": 2048, 00:27:14.486 "data_size": 63488 00:27:14.486 } 00:27:14.486 ] 00:27:14.486 }' 00:27:14.486 07:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:14.486 07:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:14.486 07:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:14.486 07:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:14.486 07:37:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:14.745 [2024-07-12 07:37:48.399044] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:14.745 [2024-07-12 07:37:48.399417] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:14.745 [2024-07-12 07:37:48.399776] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:15.682 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:15.682 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:15.682 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:15.682 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:15.682 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:15.682 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:15.682 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:15.682 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.941 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:15.941 
"name": "raid_bdev1", 00:27:15.941 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:15.941 "strip_size_kb": 0, 00:27:15.941 "state": "online", 00:27:15.941 "raid_level": "raid1", 00:27:15.941 "superblock": true, 00:27:15.941 "num_base_bdevs": 2, 00:27:15.941 "num_base_bdevs_discovered": 2, 00:27:15.941 "num_base_bdevs_operational": 2, 00:27:15.941 "base_bdevs_list": [ 00:27:15.941 { 00:27:15.941 "name": "spare", 00:27:15.941 "uuid": "b442edaa-98a4-5a6c-8590-94316bf2911e", 00:27:15.941 "is_configured": true, 00:27:15.941 "data_offset": 2048, 00:27:15.941 "data_size": 63488 00:27:15.941 }, 00:27:15.941 { 00:27:15.941 "name": "BaseBdev2", 00:27:15.942 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:15.942 "is_configured": true, 00:27:15.942 "data_offset": 2048, 00:27:15.942 "data_size": 63488 00:27:15.942 } 00:27:15.942 ] 00:27:15.942 }' 00:27:15.942 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:15.942 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:15.942 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:15.942 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:15.942 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:27:15.942 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:15.942 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:15.942 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:15.942 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:15.942 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:15.942 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.942 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.200 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:16.200 "name": "raid_bdev1", 00:27:16.200 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:16.200 "strip_size_kb": 0, 00:27:16.200 "state": "online", 00:27:16.200 "raid_level": "raid1", 00:27:16.200 "superblock": true, 00:27:16.200 "num_base_bdevs": 2, 00:27:16.200 "num_base_bdevs_discovered": 2, 00:27:16.200 "num_base_bdevs_operational": 2, 00:27:16.200 "base_bdevs_list": [ 00:27:16.200 { 00:27:16.200 "name": "spare", 00:27:16.200 "uuid": "b442edaa-98a4-5a6c-8590-94316bf2911e", 00:27:16.200 "is_configured": true, 00:27:16.200 "data_offset": 2048, 00:27:16.200 "data_size": 63488 00:27:16.200 }, 00:27:16.201 { 00:27:16.201 "name": "BaseBdev2", 00:27:16.201 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:16.201 "is_configured": true, 00:27:16.201 "data_offset": 2048, 00:27:16.201 "data_size": 63488 00:27:16.201 } 00:27:16.201 ] 00:27:16.201 }' 00:27:16.201 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:16.201 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:16.201 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:27:16.201 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:16.201 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:16.201 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:16.201 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:16.201 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:16.201 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:16.201 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:16.201 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:16.201 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:16.201 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:16.201 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:16.201 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.201 07:37:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:16.459 07:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:16.459 "name": "raid_bdev1", 00:27:16.459 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:16.459 "strip_size_kb": 0, 00:27:16.459 "state": "online", 00:27:16.459 "raid_level": "raid1", 00:27:16.459 "superblock": true, 00:27:16.459 "num_base_bdevs": 2, 00:27:16.459 "num_base_bdevs_discovered": 2, 00:27:16.459 "num_base_bdevs_operational": 2, 00:27:16.459 "base_bdevs_list": [ 00:27:16.459 { 00:27:16.459 "name": "spare", 00:27:16.459 "uuid": "b442edaa-98a4-5a6c-8590-94316bf2911e", 00:27:16.459 "is_configured": true, 00:27:16.459 "data_offset": 2048, 00:27:16.459 "data_size": 63488 00:27:16.459 }, 00:27:16.459 { 00:27:16.459 "name": "BaseBdev2", 00:27:16.459 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:16.459 "is_configured": true, 00:27:16.459 "data_offset": 2048, 00:27:16.459 "data_size": 63488 00:27:16.459 } 00:27:16.459 ] 00:27:16.459 }' 00:27:16.459 07:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:16.459 07:37:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:17.026 07:37:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:17.284 [2024-07-12 07:37:51.042088] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:17.284 [2024-07-12 07:37:51.042138] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:17.284 [2024-07-12 07:37:51.042261] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:17.284 [2024-07-12 07:37:51.042353] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:17.284 [2024-07-12 07:37:51.042364] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, 
state offline 00:27:17.284 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.284 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:27:17.543 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:27:17.543 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:27:17.543 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:27:17.543 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:17.543 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:17.543 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:17.543 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:17.543 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:17.543 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:17.543 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:27:17.543 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:17.543 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:17.543 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:17.814 /dev/nbd0 00:27:17.814 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:17.814 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:17.814 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:27:17.814 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:27:17.814 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:27:17.814 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:27:17.814 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:27:17.814 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:27:17.814 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:27:17.814 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:27:17.814 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:17.814 1+0 records in 00:27:17.814 1+0 records out 00:27:17.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430076 s, 9.5 MB/s 00:27:17.814 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:17.814 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:27:17.814 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:17.814 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:27:17.814 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:27:17.814 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:17.814 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:17.814 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:18.119 /dev/nbd1 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:18.119 1+0 records in 00:27:18.119 1+0 records out 00:27:18.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000833531 s, 4.9 MB/s 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:27:18.119 07:37:51 
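The block above is the end-to-end data check. Both members are exported as NBD devices; each is probed for readiness (poll /proc/partitions for the name, then one 4 KiB O_DIRECT read whose copied size must be non-zero); then a single cmp compares them byte-for-byte while skipping the first 1048576 bytes. That skip is exactly the metadata region implied by the JSON: a data_offset of 2048 blocks times the 512-byte block size is 1048576, so only the mirrored data area must match while each member keeps its own superblock. A condensed sketch of the probe and compare; the 0.1 s retry cadence and the /tmp scratch path are assumptions, since the trace only shows the 20-iteration bound and a repo-local path:

    for (( i = 1; i <= 20; i++ )); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1
    done
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /tmp/nbdtest)" != 0 ]          # a zero-byte copy would mean a dead device
    cmp -i $(( 2048 * 512 )) /dev/nbd0 /dev/nbd1   # 1048576: skip each member's superblock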
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:18.119 07:37:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:18.378 07:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:18.378 07:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:18.378 07:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:18.378 07:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:18.378 07:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:18.378 07:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:18.378 07:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:18.378 07:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:18.378 07:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:18.378 07:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:18.637 07:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:18.637 07:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:18.637 07:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:18.637 07:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:18.637 07:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:18.637 07:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:18.637 07:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:18.637 07:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:18.637 07:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:27:18.637 07:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:18.896 07:37:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:19.156 [2024-07-12 07:37:52.989661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:19.156 [2024-07-12 07:37:52.989802] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:19.156 [2024-07-12 07:37:52.989851] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:27:19.156 [2024-07-12 07:37:52.989876] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:19.156 [2024-07-12 07:37:52.992866] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:19.156 [2024-07-12 07:37:52.992934] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:19.156 [2024-07-12 07:37:52.993075] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:19.156 [2024-07-12 07:37:52.993124] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:19.156 [2024-07-12 07:37:52.993324] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:19.156 spare 00:27:19.156 07:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:19.156 07:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:19.156 07:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:19.156 07:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:19.156 07:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:19.156 07:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:19.156 07:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:19.156 07:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:19.156 07:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:19.156 07:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:19.156 07:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.156 07:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:19.414 [2024-07-12 07:37:53.093434] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:27:19.414 [2024-07-12 07:37:53.093488] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:19.414 [2024-07-12 07:37:53.093685] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:27:19.414 [2024-07-12 07:37:53.094144] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:27:19.414 [2024-07-12 07:37:53.094155] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:27:19.414 [2024-07-12 07:37:53.094321] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:19.414 07:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:19.414 "name": "raid_bdev1", 00:27:19.414 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:19.414 "strip_size_kb": 0, 00:27:19.414 "state": "online", 00:27:19.414 "raid_level": "raid1", 00:27:19.414 "superblock": true, 00:27:19.414 "num_base_bdevs": 2, 00:27:19.414 "num_base_bdevs_discovered": 2, 00:27:19.414 "num_base_bdevs_operational": 2, 00:27:19.414 "base_bdevs_list": [ 00:27:19.414 { 00:27:19.414 "name": "spare", 00:27:19.414 "uuid": "b442edaa-98a4-5a6c-8590-94316bf2911e", 00:27:19.414 "is_configured": true, 00:27:19.414 "data_offset": 2048, 00:27:19.414 "data_size": 63488 00:27:19.414 }, 00:27:19.414 { 00:27:19.414 "name": "BaseBdev2", 00:27:19.414 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:19.414 "is_configured": true, 00:27:19.414 "data_offset": 2048, 00:27:19.414 "data_size": 63488 00:27:19.414 } 00:27:19.414 ] 00:27:19.414 }' 00:27:19.414 07:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:19.414 07:37:53 bdev_raid.raid_rebuild_test_sb -- 
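Worth noting in the examine output above: the raid bdev had been deleted earlier in the trace, yet re-creating the passthru device was enough to bring it back. vbdev_passthru re-exposes "spare", bdev_raid's examine path finds the raid superblock on it, claims it along with BaseBdev2, and re-assembles raid_bdev1 from on-disk metadata alone, with no explicit create or add RPC; the "blockcnt 63488, blocklen 512" line is the array geometry the offsets elsewhere in the trace derive from. Condensed, with RPC as in the earlier sketch:

    "$RPC" -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
    "$RPC" -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
    # examine: "raid superblock found on bdev spare" -> array is reassembled automatically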
common/autotest_common.sh@10 -- # set +x 00:27:20.349 07:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:20.349 07:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:20.349 07:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:20.349 07:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:20.349 07:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:20.349 07:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.349 07:37:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:20.349 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:20.349 "name": "raid_bdev1", 00:27:20.349 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:20.349 "strip_size_kb": 0, 00:27:20.349 "state": "online", 00:27:20.349 "raid_level": "raid1", 00:27:20.349 "superblock": true, 00:27:20.349 "num_base_bdevs": 2, 00:27:20.349 "num_base_bdevs_discovered": 2, 00:27:20.349 "num_base_bdevs_operational": 2, 00:27:20.349 "base_bdevs_list": [ 00:27:20.349 { 00:27:20.349 "name": "spare", 00:27:20.349 "uuid": "b442edaa-98a4-5a6c-8590-94316bf2911e", 00:27:20.349 "is_configured": true, 00:27:20.349 "data_offset": 2048, 00:27:20.349 "data_size": 63488 00:27:20.349 }, 00:27:20.349 { 00:27:20.349 "name": "BaseBdev2", 00:27:20.349 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:20.349 "is_configured": true, 00:27:20.349 "data_offset": 2048, 00:27:20.349 "data_size": 63488 00:27:20.349 } 00:27:20.349 ] 00:27:20.349 }' 00:27:20.349 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:20.349 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:20.349 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:20.349 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:20.349 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.349 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:20.608 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:27:20.608 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:20.868 [2024-07-12 07:37:54.666686] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:20.868 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:20.868 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:20.868 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:20.868 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:20.868 07:37:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:20.868 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:20.868 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:20.868 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:20.868 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:20.868 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:20.868 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.868 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:21.128 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:21.128 "name": "raid_bdev1", 00:27:21.128 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:21.128 "strip_size_kb": 0, 00:27:21.128 "state": "online", 00:27:21.128 "raid_level": "raid1", 00:27:21.128 "superblock": true, 00:27:21.128 "num_base_bdevs": 2, 00:27:21.128 "num_base_bdevs_discovered": 1, 00:27:21.128 "num_base_bdevs_operational": 1, 00:27:21.128 "base_bdevs_list": [ 00:27:21.128 { 00:27:21.128 "name": null, 00:27:21.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:21.128 "is_configured": false, 00:27:21.128 "data_offset": 2048, 00:27:21.128 "data_size": 63488 00:27:21.128 }, 00:27:21.128 { 00:27:21.128 "name": "BaseBdev2", 00:27:21.128 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:21.128 "is_configured": true, 00:27:21.128 "data_offset": 2048, 00:27:21.128 "data_size": 63488 00:27:21.128 } 00:27:21.128 ] 00:27:21.128 }' 00:27:21.128 07:37:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:21.128 07:37:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:21.697 07:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:21.957 [2024-07-12 07:37:55.806944] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:21.957 [2024-07-12 07:37:55.807230] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:21.957 [2024-07-12 07:37:55.807248] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
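The examine messages here show the superblock staleness rule: the removed member's on-disk superblock still carries sequence number 4 while the live array is at 5, so rather than trusting the old copy, the bdev is re-added as an out-of-date member and a fresh rebuild is started onto it. Condensed to the two RPCs involved (RPC as above):

    "$RPC" -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
    "$RPC" -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
    # -> "Re-adding bdev spare to raid bdev raid_bdev1." followed by a rebuild on spare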
00:27:21.957 [2024-07-12 07:37:55.807321] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:21.957 [2024-07-12 07:37:55.814760] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeca0 00:27:21.957 [2024-07-12 07:37:55.817384] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:21.957 07:37:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:27:23.332 07:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:23.332 07:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:23.332 07:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:23.332 07:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:23.332 07:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:23.332 07:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:23.332 07:37:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.332 07:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:23.332 "name": "raid_bdev1", 00:27:23.332 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:23.332 "strip_size_kb": 0, 00:27:23.332 "state": "online", 00:27:23.332 "raid_level": "raid1", 00:27:23.332 "superblock": true, 00:27:23.332 "num_base_bdevs": 2, 00:27:23.332 "num_base_bdevs_discovered": 2, 00:27:23.332 "num_base_bdevs_operational": 2, 00:27:23.332 "process": { 00:27:23.332 "type": "rebuild", 00:27:23.332 "target": "spare", 00:27:23.332 "progress": { 00:27:23.332 "blocks": 24576, 00:27:23.332 "percent": 38 00:27:23.332 } 00:27:23.332 }, 00:27:23.332 "base_bdevs_list": [ 00:27:23.332 { 00:27:23.332 "name": "spare", 00:27:23.332 "uuid": "b442edaa-98a4-5a6c-8590-94316bf2911e", 00:27:23.332 "is_configured": true, 00:27:23.332 "data_offset": 2048, 00:27:23.332 "data_size": 63488 00:27:23.332 }, 00:27:23.332 { 00:27:23.332 "name": "BaseBdev2", 00:27:23.332 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:23.332 "is_configured": true, 00:27:23.332 "data_offset": 2048, 00:27:23.332 "data_size": 63488 00:27:23.332 } 00:27:23.332 ] 00:27:23.332 }' 00:27:23.332 07:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:23.332 07:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:23.332 07:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:23.332 07:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:23.332 07:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:23.590 [2024-07-12 07:37:57.375761] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:23.590 [2024-07-12 07:37:57.429745] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:23.590 [2024-07-12 07:37:57.429833] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:23.590 
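Deleting the spare's passthru while the rebuild is still at 38 percent is the point of this step: the in-flight rebuild is torn down (the "Finished rebuild ... No such device" WARNING above and the "Failed to remove target bdev" ERROR that follows are the expected cleanup noise), and the array degrades to one discovered, one operational member while staying online. A quick way to confirm the degraded-but-online state in the trace's own idiom:

    "$RPC" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
    # expected while degraded: online 1/2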
[2024-07-12 07:37:57.429850] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:23.590 [2024-07-12 07:37:57.429857] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:23.590 07:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:23.590 07:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:23.590 07:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:23.590 07:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:23.590 07:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:23.590 07:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:23.590 07:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:23.590 07:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:23.590 07:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:23.590 07:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:23.590 07:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.590 07:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.158 07:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:24.158 "name": "raid_bdev1", 00:27:24.158 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:24.158 "strip_size_kb": 0, 00:27:24.158 "state": "online", 00:27:24.158 "raid_level": "raid1", 00:27:24.158 "superblock": true, 00:27:24.158 "num_base_bdevs": 2, 00:27:24.158 "num_base_bdevs_discovered": 1, 00:27:24.158 "num_base_bdevs_operational": 1, 00:27:24.158 "base_bdevs_list": [ 00:27:24.158 { 00:27:24.158 "name": null, 00:27:24.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:24.158 "is_configured": false, 00:27:24.158 "data_offset": 2048, 00:27:24.158 "data_size": 63488 00:27:24.158 }, 00:27:24.158 { 00:27:24.158 "name": "BaseBdev2", 00:27:24.158 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:24.158 "is_configured": true, 00:27:24.158 "data_offset": 2048, 00:27:24.158 "data_size": 63488 00:27:24.158 } 00:27:24.158 ] 00:27:24.158 }' 00:27:24.158 07:37:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:24.158 07:37:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:24.724 07:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:24.983 [2024-07-12 07:37:58.617662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:24.983 [2024-07-12 07:37:58.617789] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:24.983 [2024-07-12 07:37:58.617832] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:27:24.983 [2024-07-12 07:37:58.617873] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:24.983 [2024-07-12 07:37:58.618448] 
vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:24.983 [2024-07-12 07:37:58.618495] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:24.983 [2024-07-12 07:37:58.618642] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:24.983 [2024-07-12 07:37:58.618656] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:24.983 [2024-07-12 07:37:58.618667] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:24.983 [2024-07-12 07:37:58.618726] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:24.983 [2024-07-12 07:37:58.626114] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caefe0 00:27:24.983 spare 00:27:24.983 [2024-07-12 07:37:58.628534] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:24.983 07:37:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:27:25.917 07:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:25.917 07:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:25.917 07:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:25.917 07:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:25.917 07:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:25.917 07:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.917 07:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.175 07:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:26.175 "name": "raid_bdev1", 00:27:26.175 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:26.175 "strip_size_kb": 0, 00:27:26.175 "state": "online", 00:27:26.175 "raid_level": "raid1", 00:27:26.175 "superblock": true, 00:27:26.175 "num_base_bdevs": 2, 00:27:26.175 "num_base_bdevs_discovered": 2, 00:27:26.175 "num_base_bdevs_operational": 2, 00:27:26.175 "process": { 00:27:26.175 "type": "rebuild", 00:27:26.175 "target": "spare", 00:27:26.175 "progress": { 00:27:26.175 "blocks": 24576, 00:27:26.175 "percent": 38 00:27:26.175 } 00:27:26.175 }, 00:27:26.175 "base_bdevs_list": [ 00:27:26.175 { 00:27:26.175 "name": "spare", 00:27:26.175 "uuid": "b442edaa-98a4-5a6c-8590-94316bf2911e", 00:27:26.175 "is_configured": true, 00:27:26.175 "data_offset": 2048, 00:27:26.175 "data_size": 63488 00:27:26.175 }, 00:27:26.175 { 00:27:26.175 "name": "BaseBdev2", 00:27:26.175 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:26.175 "is_configured": true, 00:27:26.175 "data_offset": 2048, 00:27:26.175 "data_size": 63488 00:27:26.175 } 00:27:26.175 ] 00:27:26.175 }' 00:27:26.175 07:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:26.175 07:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:26.175 07:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:26.175 
07:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:26.175 07:37:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:26.433 [2024-07-12 07:38:00.206850] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:26.433 [2024-07-12 07:38:00.240798] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:26.433 [2024-07-12 07:38:00.240904] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:26.433 [2024-07-12 07:38:00.240921] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:26.434 [2024-07-12 07:38:00.240929] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:26.434 07:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:26.434 07:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:26.434 07:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:26.434 07:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:26.434 07:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:26.434 07:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:26.434 07:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:26.434 07:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:26.434 07:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:26.434 07:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:26.434 07:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.434 07:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.692 07:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:26.692 "name": "raid_bdev1", 00:27:26.692 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:26.692 "strip_size_kb": 0, 00:27:26.692 "state": "online", 00:27:26.692 "raid_level": "raid1", 00:27:26.692 "superblock": true, 00:27:26.692 "num_base_bdevs": 2, 00:27:26.692 "num_base_bdevs_discovered": 1, 00:27:26.692 "num_base_bdevs_operational": 1, 00:27:26.692 "base_bdevs_list": [ 00:27:26.692 { 00:27:26.692 "name": null, 00:27:26.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.692 "is_configured": false, 00:27:26.692 "data_offset": 2048, 00:27:26.692 "data_size": 63488 00:27:26.692 }, 00:27:26.692 { 00:27:26.692 "name": "BaseBdev2", 00:27:26.692 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:26.692 "is_configured": true, 00:27:26.692 "data_offset": 2048, 00:27:26.692 "data_size": 63488 00:27:26.692 } 00:27:26.692 ] 00:27:26.692 }' 00:27:26.692 07:38:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:26.692 07:38:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.259 07:38:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:27.259 07:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:27.259 07:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:27.259 07:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:27.259 07:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:27.259 07:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.259 07:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:27.518 07:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:27.518 "name": "raid_bdev1", 00:27:27.518 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:27.518 "strip_size_kb": 0, 00:27:27.518 "state": "online", 00:27:27.518 "raid_level": "raid1", 00:27:27.518 "superblock": true, 00:27:27.518 "num_base_bdevs": 2, 00:27:27.518 "num_base_bdevs_discovered": 1, 00:27:27.518 "num_base_bdevs_operational": 1, 00:27:27.518 "base_bdevs_list": [ 00:27:27.518 { 00:27:27.518 "name": null, 00:27:27.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:27.518 "is_configured": false, 00:27:27.518 "data_offset": 2048, 00:27:27.518 "data_size": 63488 00:27:27.518 }, 00:27:27.518 { 00:27:27.518 "name": "BaseBdev2", 00:27:27.518 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:27.518 "is_configured": true, 00:27:27.518 "data_offset": 2048, 00:27:27.518 "data_size": 63488 00:27:27.518 } 00:27:27.518 ] 00:27:27.518 }' 00:27:27.518 07:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:27.518 07:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:27.518 07:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:27.518 07:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:27.518 07:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:27.778 07:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:28.038 [2024-07-12 07:38:01.768749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:28.038 [2024-07-12 07:38:01.768879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:28.038 [2024-07-12 07:38:01.768946] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:27:28.038 [2024-07-12 07:38:01.768969] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:28.038 [2024-07-12 07:38:01.769489] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:28.038 [2024-07-12 07:38:01.769526] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:28.038 [2024-07-12 07:38:01.769634] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:28.038 [2024-07-12 07:38:01.769648] 
bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:28.038 [2024-07-12 07:38:01.769656] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:28.038 BaseBdev1 00:27:28.038 07:38:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:27:28.977 07:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:28.977 07:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:28.977 07:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:28.977 07:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:28.977 07:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:28.977 07:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:28.977 07:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:28.977 07:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:28.977 07:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:28.977 07:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:28.977 07:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:28.977 07:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.236 07:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:29.236 "name": "raid_bdev1", 00:27:29.236 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:29.236 "strip_size_kb": 0, 00:27:29.236 "state": "online", 00:27:29.236 "raid_level": "raid1", 00:27:29.236 "superblock": true, 00:27:29.236 "num_base_bdevs": 2, 00:27:29.236 "num_base_bdevs_discovered": 1, 00:27:29.236 "num_base_bdevs_operational": 1, 00:27:29.236 "base_bdevs_list": [ 00:27:29.236 { 00:27:29.236 "name": null, 00:27:29.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:29.236 "is_configured": false, 00:27:29.236 "data_offset": 2048, 00:27:29.236 "data_size": 63488 00:27:29.236 }, 00:27:29.236 { 00:27:29.236 "name": "BaseBdev2", 00:27:29.236 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:29.236 "is_configured": true, 00:27:29.236 "data_offset": 2048, 00:27:29.236 "data_size": 63488 00:27:29.236 } 00:27:29.236 ] 00:27:29.236 }' 00:27:29.236 07:38:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:29.236 07:38:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:29.802 07:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:29.802 07:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:29.802 07:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:29.802 07:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:29.802 07:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:29.802 07:38:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.802 07:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.061 07:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:30.061 "name": "raid_bdev1", 00:27:30.061 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:30.061 "strip_size_kb": 0, 00:27:30.061 "state": "online", 00:27:30.061 "raid_level": "raid1", 00:27:30.061 "superblock": true, 00:27:30.061 "num_base_bdevs": 2, 00:27:30.061 "num_base_bdevs_discovered": 1, 00:27:30.061 "num_base_bdevs_operational": 1, 00:27:30.061 "base_bdevs_list": [ 00:27:30.061 { 00:27:30.061 "name": null, 00:27:30.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:30.061 "is_configured": false, 00:27:30.061 "data_offset": 2048, 00:27:30.061 "data_size": 63488 00:27:30.061 }, 00:27:30.062 { 00:27:30.062 "name": "BaseBdev2", 00:27:30.062 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:30.062 "is_configured": true, 00:27:30.062 "data_offset": 2048, 00:27:30.062 "data_size": 63488 00:27:30.062 } 00:27:30.062 ] 00:27:30.062 }' 00:27:30.062 07:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:30.062 07:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:30.062 07:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:30.062 07:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:30.062 07:38:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:30.062 07:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:27:30.062 07:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:30.062 07:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:30.062 07:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:30.062 07:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:30.062 07:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:30.062 07:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:30.062 07:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:30.062 07:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:30.062 07:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:30.062 07:38:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:30.321 [2024-07-12 07:38:04.053158] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:30.321 [2024-07-12 07:38:04.053409] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:30.321 [2024-07-12 07:38:04.053424] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:30.321 request: 00:27:30.321 { 00:27:30.321 "raid_bdev": "raid_bdev1", 00:27:30.321 "base_bdev": "BaseBdev1", 00:27:30.321 "method": "bdev_raid_add_base_bdev", 00:27:30.321 "req_id": 1 00:27:30.321 } 00:27:30.321 Got JSON-RPC error response 00:27:30.321 response: 00:27:30.321 { 00:27:30.321 "code": -22, 00:27:30.321 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:30.321 } 00:27:30.321 07:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:27:30.321 07:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:30.321 07:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:30.321 07:38:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:30.321 07:38:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:27:31.257 07:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:31.257 07:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:31.257 07:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:31.257 07:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:31.257 07:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:31.257 07:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:31.257 07:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:31.257 07:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:31.257 07:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:31.257 07:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:31.257 07:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.257 07:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.516 07:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:31.516 "name": "raid_bdev1", 00:27:31.516 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:31.516 "strip_size_kb": 0, 00:27:31.516 "state": "online", 00:27:31.516 "raid_level": "raid1", 00:27:31.516 "superblock": true, 00:27:31.516 "num_base_bdevs": 2, 00:27:31.516 "num_base_bdevs_discovered": 1, 00:27:31.516 "num_base_bdevs_operational": 1, 00:27:31.516 "base_bdevs_list": [ 00:27:31.516 { 00:27:31.516 "name": null, 00:27:31.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:31.516 "is_configured": false, 00:27:31.516 "data_offset": 2048, 00:27:31.516 "data_size": 63488 00:27:31.516 }, 00:27:31.516 { 00:27:31.516 "name": "BaseBdev2", 00:27:31.516 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 
00:27:31.516 "is_configured": true, 00:27:31.516 "data_offset": 2048, 00:27:31.516 "data_size": 63488 00:27:31.516 } 00:27:31.516 ] 00:27:31.516 }' 00:27:31.516 07:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:31.516 07:38:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:32.082 07:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:32.082 07:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:32.082 07:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:32.082 07:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:32.082 07:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:32.082 07:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.082 07:38:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.340 07:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:32.340 "name": "raid_bdev1", 00:27:32.340 "uuid": "54dfd336-fa58-4d7f-ab4c-b64a99dbb9a0", 00:27:32.340 "strip_size_kb": 0, 00:27:32.340 "state": "online", 00:27:32.340 "raid_level": "raid1", 00:27:32.340 "superblock": true, 00:27:32.340 "num_base_bdevs": 2, 00:27:32.340 "num_base_bdevs_discovered": 1, 00:27:32.340 "num_base_bdevs_operational": 1, 00:27:32.340 "base_bdevs_list": [ 00:27:32.340 { 00:27:32.340 "name": null, 00:27:32.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:32.340 "is_configured": false, 00:27:32.340 "data_offset": 2048, 00:27:32.340 "data_size": 63488 00:27:32.340 }, 00:27:32.340 { 00:27:32.340 "name": "BaseBdev2", 00:27:32.340 "uuid": "0a8d7dc0-7040-506b-8afa-a070ed1a0674", 00:27:32.340 "is_configured": true, 00:27:32.340 "data_offset": 2048, 00:27:32.340 "data_size": 63488 00:27:32.340 } 00:27:32.340 ] 00:27:32.340 }' 00:27:32.340 07:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:32.340 07:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:32.340 07:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:32.598 07:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:32.598 07:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 154345 00:27:32.598 07:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@946 -- # '[' -z 154345 ']' 00:27:32.598 07:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # kill -0 154345 00:27:32.598 07:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # uname 00:27:32.598 07:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:32.598 07:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 154345 00:27:32.598 killing process with pid 154345 00:27:32.598 Received shutdown signal, test time was about 60.000000 seconds 00:27:32.598 00:27:32.598 Latency(us) 00:27:32.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:32.598 
===================================================================================================================
00:27:32.598 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:27:32.598 07:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:32.598 07:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:32.598 07:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 154345' 00:27:32.598 07:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@965 -- # kill 154345 00:27:32.598 07:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # wait 154345 00:27:32.598 [2024-07-12 07:38:06.282058] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:32.598 [2024-07-12 07:38:06.282227] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:32.598 [2024-07-12 07:38:06.282289] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:32.598 [2024-07-12 07:38:06.282298] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:27:32.598 [2024-07-12 07:38:06.339287] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:32.598
************************************
00:27:32.856 END TEST raid_rebuild_test_sb
00:27:32.856 ************************************
00:27:32.856 07:38:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:27:32.856
00:27:32.856 real 0m34.435s
00:27:32.856 user 0m50.290s
00:27:32.856 sys 0m6.412s
00:27:32.856 07:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:32.856 07:38:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:33.115 07:38:06 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:27:33.115 07:38:06 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:27:33.115 07:38:06 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:33.115 07:38:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:33.115
************************************
00:27:33.115 START TEST raid_rebuild_test_io
00:27:33.115 ************************************
00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 false true true 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:27:33.115 07:38:06
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=155250 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 155250 /var/tmp/spdk-raid.sock 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@827 -- # '[' -z 155250 ']' 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:33.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:33.115 07:38:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:33.115 [2024-07-12 07:38:06.837219] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:33.115 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:33.115 Zero copy mechanism will not be used. 
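
The bdevperf app is starting here; once waitforlisten returns, the harness assembles the array it is about to exercise over plain JSON-RPC (the individual calls are traced below). A minimal sketch of that same sequence run by hand against the socket — the $RPC shorthand is an editorial convenience, not part of the test; the commands and arguments are taken verbatim from the trace:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc          # 32 MB backing store, 512-byte blocks
  $RPC bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1  # claim it behind a passthru vbdev
  $RPC bdev_malloc_create 32 512 -b BaseBdev2_malloc
  $RPC bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
  $RPC bdev_malloc_create 32 512 -b spare_malloc
  $RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000  # 100 ms write latency on the spare path, presumably so the later rebuild stays observable
  $RPC bdev_passthru_create -b spare_delay -p spare
  $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1  # two-way RAID1; this variant reports superblock=false
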
00:27:33.115 [2024-07-12 07:38:06.837491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155250 ] 00:27:33.115 [2024-07-12 07:38:06.990671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.373 [2024-07-12 07:38:07.046027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.373 [2024-07-12 07:38:07.088250] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:33.939 07:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:33.940 07:38:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # return 0 00:27:33.940 07:38:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:33.940 07:38:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:34.197 BaseBdev1_malloc 00:27:34.197 07:38:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:34.455 [2024-07-12 07:38:08.179827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:34.455 [2024-07-12 07:38:08.179949] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:34.455 [2024-07-12 07:38:08.180010] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:27:34.455 [2024-07-12 07:38:08.180075] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:34.455 [2024-07-12 07:38:08.182895] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:34.455 [2024-07-12 07:38:08.182975] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:34.455 BaseBdev1 00:27:34.455 07:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:34.455 07:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:34.713 BaseBdev2_malloc 00:27:34.713 07:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:34.971 [2024-07-12 07:38:08.654015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:34.971 [2024-07-12 07:38:08.654112] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:34.971 [2024-07-12 07:38:08.654151] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:27:34.971 [2024-07-12 07:38:08.654192] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:34.971 [2024-07-12 07:38:08.656654] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:34.971 [2024-07-12 07:38:08.656711] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:34.971 BaseBdev2 00:27:34.971 07:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:35.229 spare_malloc 00:27:35.229 07:38:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:35.229 spare_delay 00:27:35.229 07:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:35.487 [2024-07-12 07:38:09.272693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:35.487 [2024-07-12 07:38:09.272805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:35.487 [2024-07-12 07:38:09.272852] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:27:35.487 [2024-07-12 07:38:09.272904] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:35.487 [2024-07-12 07:38:09.275462] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:35.487 [2024-07-12 07:38:09.275531] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:35.487 spare 00:27:35.487 07:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:35.745 [2024-07-12 07:38:09.464773] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:35.746 [2024-07-12 07:38:09.467120] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:35.746 [2024-07-12 07:38:09.467257] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:27:35.746 [2024-07-12 07:38:09.467271] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:35.746 [2024-07-12 07:38:09.467446] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:27:35.746 [2024-07-12 07:38:09.467851] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:27:35.746 [2024-07-12 07:38:09.467864] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:27:35.746 [2024-07-12 07:38:09.468084] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:35.746 07:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:35.746 07:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:35.746 07:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:35.746 07:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:35.746 07:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:35.746 07:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:35.746 07:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:35.746 07:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:35.746 07:38:09 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:35.746 07:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:35.746 07:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.746 07:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:36.004 07:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:36.004 "name": "raid_bdev1", 00:27:36.004 "uuid": "dd44002d-d50b-4812-85fe-76c54209ca38", 00:27:36.004 "strip_size_kb": 0, 00:27:36.004 "state": "online", 00:27:36.004 "raid_level": "raid1", 00:27:36.004 "superblock": false, 00:27:36.004 "num_base_bdevs": 2, 00:27:36.004 "num_base_bdevs_discovered": 2, 00:27:36.004 "num_base_bdevs_operational": 2, 00:27:36.004 "base_bdevs_list": [ 00:27:36.004 { 00:27:36.004 "name": "BaseBdev1", 00:27:36.004 "uuid": "7057dae2-0949-50ba-bf4f-00b0704d2315", 00:27:36.004 "is_configured": true, 00:27:36.004 "data_offset": 0, 00:27:36.004 "data_size": 65536 00:27:36.004 }, 00:27:36.004 { 00:27:36.004 "name": "BaseBdev2", 00:27:36.004 "uuid": "66a4904f-5de9-5a5a-b0ba-6f08120be18a", 00:27:36.004 "is_configured": true, 00:27:36.004 "data_offset": 0, 00:27:36.004 "data_size": 65536 00:27:36.004 } 00:27:36.004 ] 00:27:36.004 }' 00:27:36.004 07:38:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:36.004 07:38:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:36.570 07:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:36.570 07:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:27:36.570 [2024-07-12 07:38:10.437273] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:36.828 07:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:27:36.828 07:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.828 07:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:36.828 07:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:27:36.828 07:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:27:36.828 07:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:36.828 07:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:37.087 [2024-07-12 07:38:10.756694] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:27:37.087 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:37.087 Zero copy mechanism will not be used. 00:27:37.087 Running I/O for 60 seconds... 
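
bdevperf now drives random read/write I/O at raid_bdev1 for the 60-second window while the test tears a base bdev out of the array and rebuilds onto the spare (traced below). The state checks it performs reduce to one get-bdevs call filtered through jq; a minimal out-of-band polling sketch against the same socket — the $info variable is an editorial convenience, while the RPC method and jq filters are the ones traced throughout this run:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  jq -r '.process.type // "none"'    <<< "$info"  # "rebuild" while reconstruction runs, else "none"
  jq -r '.process.target // "none"'  <<< "$info"  # bdev being rebuilt onto, e.g. "spare"
  jq -r '.num_base_bdevs_discovered' <<< "$info"  # drops from 2 to 1 after the base bdev is removed
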
00:27:37.087 [2024-07-12 07:38:10.899951] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:37.087 [2024-07-12 07:38:10.906145] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002530 00:27:37.087 07:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:37.087 07:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:37.087 07:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:37.087 07:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:37.087 07:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:37.087 07:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:37.087 07:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:37.087 07:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:37.087 07:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:37.087 07:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:37.087 07:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:37.087 07:38:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:37.346 07:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:37.346 "name": "raid_bdev1", 00:27:37.346 "uuid": "dd44002d-d50b-4812-85fe-76c54209ca38", 00:27:37.346 "strip_size_kb": 0, 00:27:37.346 "state": "online", 00:27:37.346 "raid_level": "raid1", 00:27:37.346 "superblock": false, 00:27:37.346 "num_base_bdevs": 2, 00:27:37.346 "num_base_bdevs_discovered": 1, 00:27:37.346 "num_base_bdevs_operational": 1, 00:27:37.346 "base_bdevs_list": [ 00:27:37.346 { 00:27:37.346 "name": null, 00:27:37.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.346 "is_configured": false, 00:27:37.346 "data_offset": 0, 00:27:37.346 "data_size": 65536 00:27:37.346 }, 00:27:37.346 { 00:27:37.346 "name": "BaseBdev2", 00:27:37.346 "uuid": "66a4904f-5de9-5a5a-b0ba-6f08120be18a", 00:27:37.346 "is_configured": true, 00:27:37.346 "data_offset": 0, 00:27:37.346 "data_size": 65536 00:27:37.346 } 00:27:37.346 ] 00:27:37.346 }' 00:27:37.346 07:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:37.346 07:38:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:37.914 07:38:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:38.173 [2024-07-12 07:38:12.035195] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:38.432 [2024-07-12 07:38:12.080884] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:27:38.432 [2024-07-12 07:38:12.083338] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:38.432 07:38:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:27:38.432 [2024-07-12 07:38:12.185392] bdev_raid.c: 
839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:38.432 [2024-07-12 07:38:12.186031] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:38.691 [2024-07-12 07:38:12.406367] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:38.692 [2024-07-12 07:38:12.406911] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:39.282 [2024-07-12 07:38:12.872252] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:39.282 [2024-07-12 07:38:12.872800] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:39.282 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:39.282 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:39.282 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:39.282 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:39.282 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:39.282 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.282 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:39.592 [2024-07-12 07:38:13.183307] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:39.592 [2024-07-12 07:38:13.184022] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:39.592 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:39.592 "name": "raid_bdev1", 00:27:39.592 "uuid": "dd44002d-d50b-4812-85fe-76c54209ca38", 00:27:39.592 "strip_size_kb": 0, 00:27:39.592 "state": "online", 00:27:39.592 "raid_level": "raid1", 00:27:39.592 "superblock": false, 00:27:39.592 "num_base_bdevs": 2, 00:27:39.592 "num_base_bdevs_discovered": 2, 00:27:39.592 "num_base_bdevs_operational": 2, 00:27:39.592 "process": { 00:27:39.592 "type": "rebuild", 00:27:39.592 "target": "spare", 00:27:39.592 "progress": { 00:27:39.592 "blocks": 14336, 00:27:39.592 "percent": 21 00:27:39.592 } 00:27:39.592 }, 00:27:39.592 "base_bdevs_list": [ 00:27:39.592 { 00:27:39.592 "name": "spare", 00:27:39.592 "uuid": "11823028-29df-5a56-9ef4-91ecaf6fb9b8", 00:27:39.592 "is_configured": true, 00:27:39.592 "data_offset": 0, 00:27:39.592 "data_size": 65536 00:27:39.592 }, 00:27:39.592 { 00:27:39.592 "name": "BaseBdev2", 00:27:39.592 "uuid": "66a4904f-5de9-5a5a-b0ba-6f08120be18a", 00:27:39.592 "is_configured": true, 00:27:39.592 "data_offset": 0, 00:27:39.592 "data_size": 65536 00:27:39.592 } 00:27:39.592 ] 00:27:39.592 }' 00:27:39.592 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:39.592 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:39.592 07:38:13 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:39.592 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:39.592 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:39.861 [2024-07-12 07:38:13.637959] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:27:39.861 [2024-07-12 07:38:13.685521] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:40.119 [2024-07-12 07:38:13.864259] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:40.119 [2024-07-12 07:38:13.872811] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:40.119 [2024-07-12 07:38:13.873093] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:40.120 [2024-07-12 07:38:13.873135] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:40.120 [2024-07-12 07:38:13.902368] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002530 00:27:40.120 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:40.120 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:40.120 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:40.120 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:40.120 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:40.120 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:40.120 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:40.120 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:40.120 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:40.120 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:40.120 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.120 07:38:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.378 07:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:40.378 "name": "raid_bdev1", 00:27:40.378 "uuid": "dd44002d-d50b-4812-85fe-76c54209ca38", 00:27:40.378 "strip_size_kb": 0, 00:27:40.378 "state": "online", 00:27:40.378 "raid_level": "raid1", 00:27:40.378 "superblock": false, 00:27:40.378 "num_base_bdevs": 2, 00:27:40.378 "num_base_bdevs_discovered": 1, 00:27:40.378 "num_base_bdevs_operational": 1, 00:27:40.378 "base_bdevs_list": [ 00:27:40.378 { 00:27:40.378 "name": null, 00:27:40.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.378 "is_configured": false, 00:27:40.378 "data_offset": 0, 00:27:40.378 "data_size": 65536 00:27:40.378 }, 00:27:40.378 { 00:27:40.378 "name": "BaseBdev2", 00:27:40.378 "uuid": "66a4904f-5de9-5a5a-b0ba-6f08120be18a", 00:27:40.378 "is_configured": true, 
00:27:40.378 "data_offset": 0, 00:27:40.378 "data_size": 65536 00:27:40.378 } 00:27:40.378 ] 00:27:40.378 }' 00:27:40.378 07:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:40.378 07:38:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:40.946 07:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:40.946 07:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:40.946 07:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:40.946 07:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:40.946 07:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:40.946 07:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.946 07:38:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:41.204 07:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:41.204 "name": "raid_bdev1", 00:27:41.204 "uuid": "dd44002d-d50b-4812-85fe-76c54209ca38", 00:27:41.204 "strip_size_kb": 0, 00:27:41.204 "state": "online", 00:27:41.204 "raid_level": "raid1", 00:27:41.204 "superblock": false, 00:27:41.204 "num_base_bdevs": 2, 00:27:41.204 "num_base_bdevs_discovered": 1, 00:27:41.204 "num_base_bdevs_operational": 1, 00:27:41.204 "base_bdevs_list": [ 00:27:41.204 { 00:27:41.204 "name": null, 00:27:41.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.204 "is_configured": false, 00:27:41.204 "data_offset": 0, 00:27:41.204 "data_size": 65536 00:27:41.204 }, 00:27:41.204 { 00:27:41.204 "name": "BaseBdev2", 00:27:41.204 "uuid": "66a4904f-5de9-5a5a-b0ba-6f08120be18a", 00:27:41.204 "is_configured": true, 00:27:41.204 "data_offset": 0, 00:27:41.204 "data_size": 65536 00:27:41.204 } 00:27:41.204 ] 00:27:41.204 }' 00:27:41.204 07:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:41.463 07:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:41.463 07:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:41.463 07:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:41.463 07:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:41.722 [2024-07-12 07:38:15.389538] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:41.722 07:38:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:41.722 [2024-07-12 07:38:15.440535] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:27:41.722 [2024-07-12 07:38:15.442710] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:41.722 [2024-07-12 07:38:15.562814] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:41.722 [2024-07-12 07:38:15.563524] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:27:41.982 [2024-07-12 07:38:15.794205] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:41.982 [2024-07-12 07:38:15.794727] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:42.550 [2024-07-12 07:38:16.138344] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:42.808 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:42.808 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:42.808 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:42.808 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:42.808 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:42.808 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.808 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.808 [2024-07-12 07:38:16.600444] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:43.067 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:43.067 "name": "raid_bdev1", 00:27:43.067 "uuid": "dd44002d-d50b-4812-85fe-76c54209ca38", 00:27:43.067 "strip_size_kb": 0, 00:27:43.067 "state": "online", 00:27:43.067 "raid_level": "raid1", 00:27:43.067 "superblock": false, 00:27:43.067 "num_base_bdevs": 2, 00:27:43.067 "num_base_bdevs_discovered": 2, 00:27:43.067 "num_base_bdevs_operational": 2, 00:27:43.067 "process": { 00:27:43.067 "type": "rebuild", 00:27:43.067 "target": "spare", 00:27:43.067 "progress": { 00:27:43.067 "blocks": 14336, 00:27:43.067 "percent": 21 00:27:43.067 } 00:27:43.067 }, 00:27:43.067 "base_bdevs_list": [ 00:27:43.067 { 00:27:43.067 "name": "spare", 00:27:43.067 "uuid": "11823028-29df-5a56-9ef4-91ecaf6fb9b8", 00:27:43.067 "is_configured": true, 00:27:43.067 "data_offset": 0, 00:27:43.067 "data_size": 65536 00:27:43.067 }, 00:27:43.067 { 00:27:43.067 "name": "BaseBdev2", 00:27:43.067 "uuid": "66a4904f-5de9-5a5a-b0ba-6f08120be18a", 00:27:43.067 "is_configured": true, 00:27:43.067 "data_offset": 0, 00:27:43.067 "data_size": 65536 00:27:43.067 } 00:27:43.067 ] 00:27:43.067 }' 00:27:43.067 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:43.067 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:43.067 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:43.067 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:43.067 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:27:43.067 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:27:43.067 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:27:43.067 07:38:16 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:27:43.067 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=807 00:27:43.067 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:43.067 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:43.067 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:43.067 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:43.067 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:43.067 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:43.067 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.067 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.067 [2024-07-12 07:38:16.815795] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:43.326 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:43.326 "name": "raid_bdev1", 00:27:43.326 "uuid": "dd44002d-d50b-4812-85fe-76c54209ca38", 00:27:43.326 "strip_size_kb": 0, 00:27:43.326 "state": "online", 00:27:43.326 "raid_level": "raid1", 00:27:43.326 "superblock": false, 00:27:43.326 "num_base_bdevs": 2, 00:27:43.326 "num_base_bdevs_discovered": 2, 00:27:43.326 "num_base_bdevs_operational": 2, 00:27:43.326 "process": { 00:27:43.326 "type": "rebuild", 00:27:43.326 "target": "spare", 00:27:43.326 "progress": { 00:27:43.326 "blocks": 18432, 00:27:43.326 "percent": 28 00:27:43.326 } 00:27:43.326 }, 00:27:43.326 "base_bdevs_list": [ 00:27:43.326 { 00:27:43.326 "name": "spare", 00:27:43.326 "uuid": "11823028-29df-5a56-9ef4-91ecaf6fb9b8", 00:27:43.326 "is_configured": true, 00:27:43.326 "data_offset": 0, 00:27:43.326 "data_size": 65536 00:27:43.326 }, 00:27:43.326 { 00:27:43.326 "name": "BaseBdev2", 00:27:43.326 "uuid": "66a4904f-5de9-5a5a-b0ba-6f08120be18a", 00:27:43.326 "is_configured": true, 00:27:43.326 "data_offset": 0, 00:27:43.326 "data_size": 65536 00:27:43.326 } 00:27:43.326 ] 00:27:43.326 }' 00:27:43.326 07:38:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:43.326 07:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:43.326 07:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:43.326 07:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:43.326 07:38:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:43.326 [2024-07-12 07:38:17.162264] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:27:43.893 [2024-07-12 07:38:17.599645] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:27:43.893 [2024-07-12 07:38:17.600174] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:27:44.152 [2024-07-12 07:38:17.817559] bdev_raid.c: 
839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:27:44.152 [2024-07-12 07:38:18.027156] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:27:44.411 07:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:44.411 07:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:44.411 07:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:44.411 07:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:44.411 07:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:44.411 07:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:44.411 07:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.411 07:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:44.669 07:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:44.669 "name": "raid_bdev1", 00:27:44.669 "uuid": "dd44002d-d50b-4812-85fe-76c54209ca38", 00:27:44.669 "strip_size_kb": 0, 00:27:44.669 "state": "online", 00:27:44.670 "raid_level": "raid1", 00:27:44.670 "superblock": false, 00:27:44.670 "num_base_bdevs": 2, 00:27:44.670 "num_base_bdevs_discovered": 2, 00:27:44.670 "num_base_bdevs_operational": 2, 00:27:44.670 "process": { 00:27:44.670 "type": "rebuild", 00:27:44.670 "target": "spare", 00:27:44.670 "progress": { 00:27:44.670 "blocks": 38912, 00:27:44.670 "percent": 59 00:27:44.670 } 00:27:44.670 }, 00:27:44.670 "base_bdevs_list": [ 00:27:44.670 { 00:27:44.670 "name": "spare", 00:27:44.670 "uuid": "11823028-29df-5a56-9ef4-91ecaf6fb9b8", 00:27:44.670 "is_configured": true, 00:27:44.670 "data_offset": 0, 00:27:44.670 "data_size": 65536 00:27:44.670 }, 00:27:44.670 { 00:27:44.670 "name": "BaseBdev2", 00:27:44.670 "uuid": "66a4904f-5de9-5a5a-b0ba-6f08120be18a", 00:27:44.670 "is_configured": true, 00:27:44.670 "data_offset": 0, 00:27:44.670 "data_size": 65536 00:27:44.670 } 00:27:44.670 ] 00:27:44.670 }' 00:27:44.670 07:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:44.670 [2024-07-12 07:38:18.371997] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:27:44.670 07:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:44.670 07:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:44.670 07:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:44.670 07:38:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:45.236 [2024-07-12 07:38:18.816211] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:27:45.803 07:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:45.803 07:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:45.803 
07:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:45.803 07:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:45.803 07:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:45.803 07:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:45.803 07:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.803 07:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.803 [2024-07-12 07:38:19.495011] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:27:45.803 07:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:45.803 "name": "raid_bdev1", 00:27:45.803 "uuid": "dd44002d-d50b-4812-85fe-76c54209ca38", 00:27:45.803 "strip_size_kb": 0, 00:27:45.803 "state": "online", 00:27:45.803 "raid_level": "raid1", 00:27:45.803 "superblock": false, 00:27:45.803 "num_base_bdevs": 2, 00:27:45.803 "num_base_bdevs_discovered": 2, 00:27:45.803 "num_base_bdevs_operational": 2, 00:27:45.803 "process": { 00:27:45.803 "type": "rebuild", 00:27:45.803 "target": "spare", 00:27:45.803 "progress": { 00:27:45.803 "blocks": 59392, 00:27:45.803 "percent": 90 00:27:45.803 } 00:27:45.803 }, 00:27:45.803 "base_bdevs_list": [ 00:27:45.803 { 00:27:45.803 "name": "spare", 00:27:45.803 "uuid": "11823028-29df-5a56-9ef4-91ecaf6fb9b8", 00:27:45.803 "is_configured": true, 00:27:45.803 "data_offset": 0, 00:27:45.803 "data_size": 65536 00:27:45.803 }, 00:27:45.803 { 00:27:45.803 "name": "BaseBdev2", 00:27:45.803 "uuid": "66a4904f-5de9-5a5a-b0ba-6f08120be18a", 00:27:45.803 "is_configured": true, 00:27:45.803 "data_offset": 0, 00:27:45.803 "data_size": 65536 00:27:45.803 } 00:27:45.803 ] 00:27:45.803 }' 00:27:46.061 07:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:46.061 07:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:46.061 07:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:46.061 07:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:46.061 07:38:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:46.061 [2024-07-12 07:38:19.934997] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:46.319 [2024-07-12 07:38:20.041494] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:46.319 [2024-07-12 07:38:20.044546] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:47.253 07:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:47.253 07:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:47.253 07:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:47.253 07:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:47.253 07:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 
00:27:47.253 07:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:47.253 07:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:47.253 07:38:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.253 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:47.253 "name": "raid_bdev1", 00:27:47.253 "uuid": "dd44002d-d50b-4812-85fe-76c54209ca38", 00:27:47.253 "strip_size_kb": 0, 00:27:47.253 "state": "online", 00:27:47.253 "raid_level": "raid1", 00:27:47.253 "superblock": false, 00:27:47.253 "num_base_bdevs": 2, 00:27:47.253 "num_base_bdevs_discovered": 2, 00:27:47.253 "num_base_bdevs_operational": 2, 00:27:47.253 "base_bdevs_list": [ 00:27:47.253 { 00:27:47.253 "name": "spare", 00:27:47.253 "uuid": "11823028-29df-5a56-9ef4-91ecaf6fb9b8", 00:27:47.253 "is_configured": true, 00:27:47.253 "data_offset": 0, 00:27:47.253 "data_size": 65536 00:27:47.253 }, 00:27:47.253 { 00:27:47.253 "name": "BaseBdev2", 00:27:47.253 "uuid": "66a4904f-5de9-5a5a-b0ba-6f08120be18a", 00:27:47.253 "is_configured": true, 00:27:47.253 "data_offset": 0, 00:27:47.253 "data_size": 65536 00:27:47.253 } 00:27:47.253 ] 00:27:47.253 }' 00:27:47.253 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:47.253 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:47.253 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:47.511 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:47.511 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:27:47.511 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:47.511 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:47.511 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:47.511 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:47.511 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:47.511 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.511 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:47.511 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:47.511 "name": "raid_bdev1", 00:27:47.511 "uuid": "dd44002d-d50b-4812-85fe-76c54209ca38", 00:27:47.511 "strip_size_kb": 0, 00:27:47.511 "state": "online", 00:27:47.511 "raid_level": "raid1", 00:27:47.511 "superblock": false, 00:27:47.511 "num_base_bdevs": 2, 00:27:47.511 "num_base_bdevs_discovered": 2, 00:27:47.511 "num_base_bdevs_operational": 2, 00:27:47.511 "base_bdevs_list": [ 00:27:47.511 { 00:27:47.511 "name": "spare", 00:27:47.511 "uuid": "11823028-29df-5a56-9ef4-91ecaf6fb9b8", 00:27:47.511 "is_configured": true, 00:27:47.511 "data_offset": 0, 00:27:47.511 "data_size": 65536 00:27:47.511 }, 00:27:47.511 { 00:27:47.511 "name": "BaseBdev2", 00:27:47.511 
"uuid": "66a4904f-5de9-5a5a-b0ba-6f08120be18a", 00:27:47.511 "is_configured": true, 00:27:47.511 "data_offset": 0, 00:27:47.511 "data_size": 65536 00:27:47.511 } 00:27:47.511 ] 00:27:47.511 }' 00:27:47.511 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:47.769 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:47.769 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:47.769 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:47.769 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:47.769 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:47.769 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:47.769 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:47.769 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:47.769 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:47.769 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:47.769 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:47.769 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:47.769 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:47.769 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.769 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.028 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:48.028 "name": "raid_bdev1", 00:27:48.028 "uuid": "dd44002d-d50b-4812-85fe-76c54209ca38", 00:27:48.028 "strip_size_kb": 0, 00:27:48.028 "state": "online", 00:27:48.028 "raid_level": "raid1", 00:27:48.028 "superblock": false, 00:27:48.028 "num_base_bdevs": 2, 00:27:48.028 "num_base_bdevs_discovered": 2, 00:27:48.028 "num_base_bdevs_operational": 2, 00:27:48.028 "base_bdevs_list": [ 00:27:48.028 { 00:27:48.028 "name": "spare", 00:27:48.028 "uuid": "11823028-29df-5a56-9ef4-91ecaf6fb9b8", 00:27:48.028 "is_configured": true, 00:27:48.028 "data_offset": 0, 00:27:48.028 "data_size": 65536 00:27:48.028 }, 00:27:48.028 { 00:27:48.028 "name": "BaseBdev2", 00:27:48.028 "uuid": "66a4904f-5de9-5a5a-b0ba-6f08120be18a", 00:27:48.028 "is_configured": true, 00:27:48.028 "data_offset": 0, 00:27:48.028 "data_size": 65536 00:27:48.028 } 00:27:48.028 ] 00:27:48.028 }' 00:27:48.028 07:38:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:48.028 07:38:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:48.594 07:38:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:48.852 [2024-07-12 07:38:22.577133] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:48.852 [2024-07-12 07:38:22.577437] 
bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:48.852 00:27:48.852 Latency(us) 00:27:48.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.852 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:27:48.852 raid_bdev1 : 11.89 122.76 368.28 0.00 0.00 11870.25 294.52 115343.36 00:27:48.852 =================================================================================================================== 00:27:48.852 Total : 122.76 368.28 0.00 0.00 11870.25 294.52 115343.36 00:27:48.853 [2024-07-12 07:38:22.658349] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:48.853 [2024-07-12 07:38:22.658570] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:48.853 [2024-07-12 07:38:22.658707] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:48.853 [2024-07-12 07:38:22.658964] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:27:48.853 0 00:27:48.853 07:38:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:27:48.853 07:38:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.111 07:38:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:27:49.111 07:38:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:27:49.111 07:38:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:27:49.111 07:38:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:27:49.111 07:38:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:49.111 07:38:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:27:49.111 07:38:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:49.111 07:38:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:49.111 07:38:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:49.111 07:38:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:27:49.111 07:38:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:49.111 07:38:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:49.111 07:38:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:27:49.369 /dev/nbd0 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:27:49.369 07:38:23 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:49.369 1+0 records in 00:27:49.369 1+0 records out 00:27:49.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000695884 s, 5.9 MB/s 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:49.369 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:27:49.628 /dev/nbd1 00:27:49.628 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:49.628 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:49.628 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:27:49.628 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:27:49.628 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:27:49.628 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:27:49.628 07:38:23 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:27:49.628 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:27:49.628 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:27:49.628 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:27:49.628 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:49.628 1+0 records in 00:27:49.628 1+0 records out 00:27:49.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703495 s, 5.8 MB/s 00:27:49.628 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:49.628 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:27:49.628 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:49.628 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:27:49.628 07:38:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:27:49.628 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:49.628 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:49.628 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:49.886 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:27:49.886 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:49.886 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:27:49.886 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:49.886 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:27:49.887 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:49.887 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:50.144 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:50.144 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:50.145 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:50.145 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:50.145 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:50.145 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:50.145 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:27:50.145 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:50.145 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:50.145 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:50.145 07:38:23 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:50.145 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:50.145 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:27:50.145 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:50.145 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:50.145 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:50.145 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:50.145 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:50.145 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:50.145 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:50.145 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:50.145 07:38:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:27:50.145 07:38:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:50.145 07:38:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:27:50.145 07:38:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 155250 00:27:50.145 07:38:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@946 -- # '[' -z 155250 ']' 00:27:50.145 07:38:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # kill -0 155250 00:27:50.145 07:38:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # uname 00:27:50.145 07:38:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:50.145 07:38:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 155250 00:27:50.402 07:38:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:50.402 07:38:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:50.402 07:38:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # echo 'killing process with pid 155250' 00:27:50.402 killing process with pid 155250 00:27:50.402 07:38:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@965 -- # kill 155250 00:27:50.402 Received shutdown signal, test time was about 13.273672 seconds 00:27:50.402 00:27:50.402 Latency(us) 00:27:50.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.402 =================================================================================================================== 00:27:50.402 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:50.402 07:38:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # wait 155250 00:27:50.402 [2024-07-12 07:38:24.033868] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:50.402 [2024-07-12 07:38:24.059833] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:27:50.661 00:27:50.661 real 0m17.563s 00:27:50.661 user 0m26.697s 00:27:50.661 sys 0m2.628s 00:27:50.661 07:38:24 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:50.661 ************************************ 00:27:50.661 END TEST raid_rebuild_test_io 00:27:50.661 ************************************ 00:27:50.661 07:38:24 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:27:50.661 07:38:24 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:27:50.661 07:38:24 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:50.661 07:38:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:50.661 ************************************ 00:27:50.661 START TEST raid_rebuild_test_sb_io 00:27:50.661 ************************************ 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true true true 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 
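run_test has now re-entered raid_rebuild_test with the sb_io parameter set (raid1, 2 base bdevs, superblock, background I/O and verify all enabled). The prologue traced at @568-@592 just binds the five positionals and derives the create arguments; reassembled from the traced lines it amounts to this sketch (the echo loop at @573 is condensed, the rest is as traced):

    raid_rebuild_test() {
        local raid_level=$1        # @568: raid1
        local num_base_bdevs=$2    # @569: 2
        local superblock=$3        # @570: true
        local background_io=$4     # @571: true
        local verify=$5            # @572: true
        local base_bdevs=(BaseBdev1 BaseBdev2)   # @573: built by the echo loop
        local raid_bdev_name=raid_bdev1          # @574
        local strip_size create_arg              # @575-@576
        local raid_bdev_size data_offset         # @577-@578

        [ "$raid_level" != raid1 ] || strip_size=0        # @580/@588: raid1 has no strip size
        [ "$superblock" = true ] && create_arg+=' -s'     # @591-@592: superblock mode
        # bdevperf startup and the rebuild scenario follow (@595 onward)
    }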
00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=155723 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 155723 /var/tmp/spdk-raid.sock 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@827 -- # '[' -z 155723 ']' 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:50.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:50.661 07:38:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:50.661 [2024-07-12 07:38:24.481259] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:50.661 [2024-07-12 07:38:24.481775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155723 ] 00:27:50.661 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:50.661 Zero copy mechanism will not be used. 
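bdevperf is the application under test here: @595 launches it against the private RPC socket, @596 records its pid (155723 in this run), and @597 blocks in waitforlisten until that socket accepts RPCs, which is what the two "Waiting for process..." lines above correspond to. Stripped of the xtrace noise, roughly (the $! backgrounding is an assumption; the trace only shows the resulting pid):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!                                         # @596: 155723 in this run
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # @597: poll until the socket is up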
00:27:50.920 [2024-07-12 07:38:24.633024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.920 [2024-07-12 07:38:24.680444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.920 [2024-07-12 07:38:24.722903] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:51.486 07:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:51.486 07:38:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # return 0 00:27:51.486 07:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:51.486 07:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:51.744 BaseBdev1_malloc 00:27:51.744 07:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:52.003 [2024-07-12 07:38:25.785094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:52.003 [2024-07-12 07:38:25.785438] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:52.003 [2024-07-12 07:38:25.785515] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:27:52.003 [2024-07-12 07:38:25.785652] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:52.003 [2024-07-12 07:38:25.788326] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:52.003 [2024-07-12 07:38:25.788515] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:52.003 BaseBdev1 00:27:52.003 07:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:52.003 07:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:52.276 BaseBdev2_malloc 00:27:52.276 07:38:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:52.534 [2024-07-12 07:38:26.166219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:52.534 [2024-07-12 07:38:26.166513] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:52.534 [2024-07-12 07:38:26.166584] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:27:52.534 [2024-07-12 07:38:26.166696] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:52.534 [2024-07-12 07:38:26.169100] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:52.534 [2024-07-12 07:38:26.169283] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:52.534 BaseBdev2 00:27:52.534 07:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:52.534 spare_malloc 00:27:52.794 07:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:52.794 spare_delay 00:27:52.794 07:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:53.054 [2024-07-12 07:38:26.768826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:53.054 [2024-07-12 07:38:26.769171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:53.054 [2024-07-12 07:38:26.769246] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:27:53.054 [2024-07-12 07:38:26.769496] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:53.054 [2024-07-12 07:38:26.772001] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:53.054 [2024-07-12 07:38:26.772181] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:53.054 spare 00:27:53.054 07:38:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:53.314 [2024-07-12 07:38:27.021040] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:53.314 [2024-07-12 07:38:27.023440] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:53.314 [2024-07-12 07:38:27.023770] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:27:53.314 [2024-07-12 07:38:27.023880] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:53.314 [2024-07-12 07:38:27.024077] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:27:53.314 [2024-07-12 07:38:27.024579] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:27:53.314 [2024-07-12 07:38:27.024689] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:27:53.314 [2024-07-12 07:38:27.024963] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:53.314 07:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:53.314 07:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:53.314 07:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:53.314 07:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:53.314 07:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:53.314 07:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:53.314 07:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:53.314 07:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:53.314 07:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:53.314 07:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:53.314 07:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.314 07:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:53.573 07:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:53.573 "name": "raid_bdev1", 00:27:53.573 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:27:53.573 "strip_size_kb": 0, 00:27:53.573 "state": "online", 00:27:53.573 "raid_level": "raid1", 00:27:53.573 "superblock": true, 00:27:53.573 "num_base_bdevs": 2, 00:27:53.573 "num_base_bdevs_discovered": 2, 00:27:53.573 "num_base_bdevs_operational": 2, 00:27:53.573 "base_bdevs_list": [ 00:27:53.573 { 00:27:53.573 "name": "BaseBdev1", 00:27:53.573 "uuid": "4c921295-9f5c-5130-b742-197e514a58de", 00:27:53.573 "is_configured": true, 00:27:53.573 "data_offset": 2048, 00:27:53.573 "data_size": 63488 00:27:53.573 }, 00:27:53.573 { 00:27:53.573 "name": "BaseBdev2", 00:27:53.573 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:27:53.573 "is_configured": true, 00:27:53.574 "data_offset": 2048, 00:27:53.574 "data_size": 63488 00:27:53.574 } 00:27:53.574 ] 00:27:53.574 }' 00:27:53.574 07:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:53.574 07:38:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:54.143 07:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:54.143 07:38:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:27:54.143 [2024-07-12 07:38:28.009434] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:54.403 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:27:54.403 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:54.403 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.403 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:27:54.403 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:27:54.403 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:54.403 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:54.661 [2024-07-12 07:38:28.347315] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:27:54.661 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:54.661 Zero copy mechanism will not be used. 00:27:54.661 Running I/O for 60 seconds... 
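Because background_io=true, the test first reads the array geometry, then starts the 60-second bdevperf workload through the perform_tests RPC, and only afterwards hot-removes BaseBdev1 so that the removal races with live I/O (hence the "Running I/O for 60 seconds..." line above). The traced sequence @615-@639 reduces to this sketch (the rpc shorthand and the backgrounded perform_tests call are assumptions):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'

    raid_bdev_size=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks')  # @615: 63488
    data_offset=$($rpc bdev_raid_get_bdevs all |
        jq -r '.[].base_bdevs_list[0].data_offset')                               # @618: 2048
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/spdk-raid.sock perform_tests &                                # @622
    $rpc bdev_raid_remove_base_bdev BaseBdev1     # @639: hot-remove under active I/O

Note the data_offset of 2048 blocks: unlike the plain raid_rebuild_test_io run above (data_offset 0, data_size 65536), the superblock variant reserves the first 2048 blocks of each base bdev, which is why data_size drops to 63488 in the JSON dumps that follow.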
00:27:54.661 [2024-07-12 07:38:28.468648] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:54.661 [2024-07-12 07:38:28.474733] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002530 00:27:54.661 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:54.661 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:54.662 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:54.662 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:54.662 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:54.662 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:54.662 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:54.662 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:54.662 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:54.662 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:54.662 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.662 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:54.921 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:54.921 "name": "raid_bdev1", 00:27:54.921 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:27:54.921 "strip_size_kb": 0, 00:27:54.921 "state": "online", 00:27:54.921 "raid_level": "raid1", 00:27:54.921 "superblock": true, 00:27:54.921 "num_base_bdevs": 2, 00:27:54.921 "num_base_bdevs_discovered": 1, 00:27:54.921 "num_base_bdevs_operational": 1, 00:27:54.921 "base_bdevs_list": [ 00:27:54.921 { 00:27:54.921 "name": null, 00:27:54.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:54.921 "is_configured": false, 00:27:54.921 "data_offset": 2048, 00:27:54.921 "data_size": 63488 00:27:54.921 }, 00:27:54.921 { 00:27:54.921 "name": "BaseBdev2", 00:27:54.921 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:27:54.921 "is_configured": true, 00:27:54.921 "data_offset": 2048, 00:27:54.921 "data_size": 63488 00:27:54.921 } 00:27:54.921 ] 00:27:54.921 }' 00:27:54.921 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:54.921 07:38:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:55.860 07:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:55.861 [2024-07-12 07:38:29.665665] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:55.861 [2024-07-12 07:38:29.704222] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:27:55.861 [2024-07-12 07:38:29.706670] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:55.861 07:38:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:27:56.120 
[2024-07-12 07:38:29.815398] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:56.120 [2024-07-12 07:38:29.816094] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:56.380 [2024-07-12 07:38:30.024382] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:56.380 [2024-07-12 07:38:30.025025] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:56.640 [2024-07-12 07:38:30.365951] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:56.640 [2024-07-12 07:38:30.366686] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:56.899 07:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:56.899 07:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:56.899 07:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:56.899 07:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:56.899 07:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:56.899 07:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.899 07:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.157 07:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:57.157 "name": "raid_bdev1", 00:27:57.157 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:27:57.157 "strip_size_kb": 0, 00:27:57.157 "state": "online", 00:27:57.157 "raid_level": "raid1", 00:27:57.157 "superblock": true, 00:27:57.157 "num_base_bdevs": 2, 00:27:57.157 "num_base_bdevs_discovered": 2, 00:27:57.157 "num_base_bdevs_operational": 2, 00:27:57.157 "process": { 00:27:57.157 "type": "rebuild", 00:27:57.157 "target": "spare", 00:27:57.157 "progress": { 00:27:57.157 "blocks": 16384, 00:27:57.157 "percent": 25 00:27:57.157 } 00:27:57.157 }, 00:27:57.157 "base_bdevs_list": [ 00:27:57.157 { 00:27:57.157 "name": "spare", 00:27:57.157 "uuid": "2cd9640d-050e-5483-bca8-a879e6b3e520", 00:27:57.157 "is_configured": true, 00:27:57.157 "data_offset": 2048, 00:27:57.157 "data_size": 63488 00:27:57.157 }, 00:27:57.157 { 00:27:57.157 "name": "BaseBdev2", 00:27:57.157 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:27:57.157 "is_configured": true, 00:27:57.157 "data_offset": 2048, 00:27:57.157 "data_size": 63488 00:27:57.157 } 00:27:57.157 ] 00:27:57.157 }' 00:27:57.157 07:38:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:57.415 07:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:57.415 07:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:57.416 07:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:57.416 07:38:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:57.416 [2024-07-12 07:38:31.105885] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:27:57.416 [2024-07-12 07:38:31.106621] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:27:57.674 [2024-07-12 07:38:31.311706] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:57.674 [2024-07-12 07:38:31.525811] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:57.674 [2024-07-12 07:38:31.534191] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:57.674 [2024-07-12 07:38:31.534439] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:57.674 [2024-07-12 07:38:31.534492] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:57.674 [2024-07-12 07:38:31.557508] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002530 00:27:57.933 07:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:57.933 07:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:57.933 07:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:57.933 07:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:57.933 07:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:57.933 07:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:57.933 07:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:57.933 07:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:57.933 07:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:57.933 07:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:57.933 07:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.933 07:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.192 07:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:58.192 "name": "raid_bdev1", 00:27:58.192 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:27:58.192 "strip_size_kb": 0, 00:27:58.192 "state": "online", 00:27:58.192 "raid_level": "raid1", 00:27:58.192 "superblock": true, 00:27:58.192 "num_base_bdevs": 2, 00:27:58.192 "num_base_bdevs_discovered": 1, 00:27:58.192 "num_base_bdevs_operational": 1, 00:27:58.192 "base_bdevs_list": [ 00:27:58.192 { 00:27:58.192 "name": null, 00:27:58.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.192 "is_configured": false, 00:27:58.192 "data_offset": 2048, 00:27:58.192 "data_size": 63488 00:27:58.192 }, 00:27:58.192 { 00:27:58.192 "name": "BaseBdev2", 00:27:58.192 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:27:58.192 "is_configured": true, 00:27:58.192 
"data_offset": 2048, 00:27:58.192 "data_size": 63488 00:27:58.192 } 00:27:58.192 ] 00:27:58.192 }' 00:27:58.192 07:38:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:58.192 07:38:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:58.761 07:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:58.761 07:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:58.761 07:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:58.761 07:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:58.761 07:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:58.761 07:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:58.761 07:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:59.021 07:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:59.021 "name": "raid_bdev1", 00:27:59.021 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:27:59.021 "strip_size_kb": 0, 00:27:59.021 "state": "online", 00:27:59.021 "raid_level": "raid1", 00:27:59.021 "superblock": true, 00:27:59.021 "num_base_bdevs": 2, 00:27:59.021 "num_base_bdevs_discovered": 1, 00:27:59.021 "num_base_bdevs_operational": 1, 00:27:59.021 "base_bdevs_list": [ 00:27:59.021 { 00:27:59.021 "name": null, 00:27:59.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.021 "is_configured": false, 00:27:59.021 "data_offset": 2048, 00:27:59.021 "data_size": 63488 00:27:59.021 }, 00:27:59.021 { 00:27:59.021 "name": "BaseBdev2", 00:27:59.021 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:27:59.021 "is_configured": true, 00:27:59.021 "data_offset": 2048, 00:27:59.021 "data_size": 63488 00:27:59.021 } 00:27:59.021 ] 00:27:59.021 }' 00:27:59.021 07:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:59.021 07:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:59.021 07:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:59.021 07:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:59.021 07:38:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:59.290 [2024-07-12 07:38:33.076898] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:59.290 [2024-07-12 07:38:33.121755] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:27:59.290 [2024-07-12 07:38:33.124124] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:59.290 07:38:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:59.550 [2024-07-12 07:38:33.243686] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:59.550 [2024-07-12 07:38:33.244409] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:59.809 [2024-07-12 07:38:33.452021] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:59.809 [2024-07-12 07:38:33.452520] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:00.067 [2024-07-12 07:38:33.796386] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:00.067 [2024-07-12 07:38:33.797129] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:00.326 [2024-07-12 07:38:34.018005] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:00.326 [2024-07-12 07:38:34.018518] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:00.326 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:00.326 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:00.326 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:00.326 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:00.326 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:00.326 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.326 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.585 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:00.585 "name": "raid_bdev1", 00:28:00.585 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:28:00.585 "strip_size_kb": 0, 00:28:00.585 "state": "online", 00:28:00.585 "raid_level": "raid1", 00:28:00.585 "superblock": true, 00:28:00.585 "num_base_bdevs": 2, 00:28:00.585 "num_base_bdevs_discovered": 2, 00:28:00.585 "num_base_bdevs_operational": 2, 00:28:00.585 "process": { 00:28:00.585 "type": "rebuild", 00:28:00.585 "target": "spare", 00:28:00.585 "progress": { 00:28:00.585 "blocks": 16384, 00:28:00.585 "percent": 25 00:28:00.585 } 00:28:00.585 }, 00:28:00.585 "base_bdevs_list": [ 00:28:00.585 { 00:28:00.585 "name": "spare", 00:28:00.585 "uuid": "2cd9640d-050e-5483-bca8-a879e6b3e520", 00:28:00.585 "is_configured": true, 00:28:00.585 "data_offset": 2048, 00:28:00.585 "data_size": 63488 00:28:00.585 }, 00:28:00.585 { 00:28:00.585 "name": "BaseBdev2", 00:28:00.585 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:28:00.586 "is_configured": true, 00:28:00.586 "data_offset": 2048, 00:28:00.586 "data_size": 63488 00:28:00.586 } 00:28:00.586 ] 00:28:00.586 }' 00:28:00.586 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:00.586 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:00.586 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:00.844 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e 
]] 00:28:00.844 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:28:00.844 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:28:00.844 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:28:00.844 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:28:00.844 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:00.844 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:28:00.844 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=825 00:28:00.844 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:00.844 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:00.844 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:00.844 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:00.844 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:00.844 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:00.844 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.844 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.844 [2024-07-12 07:38:34.619425] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:28:01.104 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:01.104 "name": "raid_bdev1", 00:28:01.104 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:28:01.104 "strip_size_kb": 0, 00:28:01.104 "state": "online", 00:28:01.104 "raid_level": "raid1", 00:28:01.104 "superblock": true, 00:28:01.104 "num_base_bdevs": 2, 00:28:01.104 "num_base_bdevs_discovered": 2, 00:28:01.104 "num_base_bdevs_operational": 2, 00:28:01.104 "process": { 00:28:01.104 "type": "rebuild", 00:28:01.104 "target": "spare", 00:28:01.104 "progress": { 00:28:01.104 "blocks": 20480, 00:28:01.104 "percent": 32 00:28:01.104 } 00:28:01.104 }, 00:28:01.104 "base_bdevs_list": [ 00:28:01.104 { 00:28:01.104 "name": "spare", 00:28:01.104 "uuid": "2cd9640d-050e-5483-bca8-a879e6b3e520", 00:28:01.104 "is_configured": true, 00:28:01.104 "data_offset": 2048, 00:28:01.104 "data_size": 63488 00:28:01.104 }, 00:28:01.104 { 00:28:01.104 "name": "BaseBdev2", 00:28:01.104 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:28:01.104 "is_configured": true, 00:28:01.104 "data_offset": 2048, 00:28:01.104 "data_size": 63488 00:28:01.104 } 00:28:01.104 ] 00:28:01.104 }' 00:28:01.104 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:01.104 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:01.104 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:01.104 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == 
\s\p\a\r\e ]] 00:28:01.104 07:38:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:01.104 [2024-07-12 07:38:34.841085] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:28:01.363 [2024-07-12 07:38:35.176915] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:28:01.621 [2024-07-12 07:38:35.299228] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:28:01.880 [2024-07-12 07:38:35.623186] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:28:02.138 07:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:02.138 07:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:02.138 07:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:02.138 07:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:02.138 07:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:02.138 07:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:02.139 07:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.139 07:38:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.139 [2024-07-12 07:38:35.851781] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:28:02.397 07:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:02.397 "name": "raid_bdev1", 00:28:02.397 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:28:02.397 "strip_size_kb": 0, 00:28:02.397 "state": "online", 00:28:02.397 "raid_level": "raid1", 00:28:02.397 "superblock": true, 00:28:02.397 "num_base_bdevs": 2, 00:28:02.397 "num_base_bdevs_discovered": 2, 00:28:02.397 "num_base_bdevs_operational": 2, 00:28:02.397 "process": { 00:28:02.397 "type": "rebuild", 00:28:02.397 "target": "spare", 00:28:02.397 "progress": { 00:28:02.397 "blocks": 36864, 00:28:02.397 "percent": 58 00:28:02.397 } 00:28:02.397 }, 00:28:02.397 "base_bdevs_list": [ 00:28:02.397 { 00:28:02.397 "name": "spare", 00:28:02.397 "uuid": "2cd9640d-050e-5483-bca8-a879e6b3e520", 00:28:02.397 "is_configured": true, 00:28:02.397 "data_offset": 2048, 00:28:02.397 "data_size": 63488 00:28:02.397 }, 00:28:02.397 { 00:28:02.397 "name": "BaseBdev2", 00:28:02.397 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:28:02.397 "is_configured": true, 00:28:02.397 "data_offset": 2048, 00:28:02.397 "data_size": 63488 00:28:02.397 } 00:28:02.397 ] 00:28:02.397 }' 00:28:02.397 07:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:02.397 07:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:02.397 07:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:02.397 07:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ 
spare == \s\p\a\r\e ]] 00:28:02.397 07:38:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:02.963 [2024-07-12 07:38:36.820234] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:28:03.530 07:38:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:03.530 07:38:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:03.530 07:38:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:03.530 07:38:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:03.530 07:38:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:03.530 07:38:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:03.530 07:38:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.530 07:38:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:03.530 [2024-07-12 07:38:37.364268] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:03.786 07:38:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:03.786 "name": "raid_bdev1", 00:28:03.786 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:28:03.786 "strip_size_kb": 0, 00:28:03.786 "state": "online", 00:28:03.786 "raid_level": "raid1", 00:28:03.787 "superblock": true, 00:28:03.787 "num_base_bdevs": 2, 00:28:03.787 "num_base_bdevs_discovered": 2, 00:28:03.787 "num_base_bdevs_operational": 2, 00:28:03.787 "process": { 00:28:03.787 "type": "rebuild", 00:28:03.787 "target": "spare", 00:28:03.787 "progress": { 00:28:03.787 "blocks": 63488, 00:28:03.787 "percent": 100 00:28:03.787 } 00:28:03.787 }, 00:28:03.787 "base_bdevs_list": [ 00:28:03.787 { 00:28:03.787 "name": "spare", 00:28:03.787 "uuid": "2cd9640d-050e-5483-bca8-a879e6b3e520", 00:28:03.787 "is_configured": true, 00:28:03.787 "data_offset": 2048, 00:28:03.787 "data_size": 63488 00:28:03.787 }, 00:28:03.787 { 00:28:03.787 "name": "BaseBdev2", 00:28:03.787 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:28:03.787 "is_configured": true, 00:28:03.787 "data_offset": 2048, 00:28:03.787 "data_size": 63488 00:28:03.787 } 00:28:03.787 ] 00:28:03.787 }' 00:28:03.787 07:38:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:03.787 [2024-07-12 07:38:37.464251] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:03.787 [2024-07-12 07:38:37.466591] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:03.787 07:38:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:03.787 07:38:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:03.787 07:38:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:03.787 07:38:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:04.722 07:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:04.722 07:38:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:04.722 07:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:04.722 07:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:04.722 07:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:04.722 07:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:04.722 07:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.722 07:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:04.981 07:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:04.981 "name": "raid_bdev1", 00:28:04.981 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:28:04.981 "strip_size_kb": 0, 00:28:04.981 "state": "online", 00:28:04.981 "raid_level": "raid1", 00:28:04.981 "superblock": true, 00:28:04.981 "num_base_bdevs": 2, 00:28:04.981 "num_base_bdevs_discovered": 2, 00:28:04.981 "num_base_bdevs_operational": 2, 00:28:04.981 "base_bdevs_list": [ 00:28:04.981 { 00:28:04.981 "name": "spare", 00:28:04.981 "uuid": "2cd9640d-050e-5483-bca8-a879e6b3e520", 00:28:04.981 "is_configured": true, 00:28:04.981 "data_offset": 2048, 00:28:04.981 "data_size": 63488 00:28:04.981 }, 00:28:04.981 { 00:28:04.981 "name": "BaseBdev2", 00:28:04.981 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:28:04.981 "is_configured": true, 00:28:04.981 "data_offset": 2048, 00:28:04.981 "data_size": 63488 00:28:04.981 } 00:28:04.981 ] 00:28:04.981 }' 00:28:04.981 07:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:04.981 07:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:04.981 07:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:05.240 07:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:05.240 07:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:28:05.240 07:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:05.240 07:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:05.240 07:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:05.240 07:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:05.240 07:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:05.240 07:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.240 07:38:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:05.500 07:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:05.500 "name": "raid_bdev1", 00:28:05.500 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:28:05.500 "strip_size_kb": 0, 00:28:05.500 "state": "online", 
00:28:05.500 "raid_level": "raid1", 00:28:05.500 "superblock": true, 00:28:05.500 "num_base_bdevs": 2, 00:28:05.500 "num_base_bdevs_discovered": 2, 00:28:05.500 "num_base_bdevs_operational": 2, 00:28:05.500 "base_bdevs_list": [ 00:28:05.500 { 00:28:05.500 "name": "spare", 00:28:05.500 "uuid": "2cd9640d-050e-5483-bca8-a879e6b3e520", 00:28:05.500 "is_configured": true, 00:28:05.500 "data_offset": 2048, 00:28:05.500 "data_size": 63488 00:28:05.500 }, 00:28:05.500 { 00:28:05.500 "name": "BaseBdev2", 00:28:05.500 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:28:05.500 "is_configured": true, 00:28:05.500 "data_offset": 2048, 00:28:05.500 "data_size": 63488 00:28:05.500 } 00:28:05.500 ] 00:28:05.500 }' 00:28:05.500 07:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:05.500 07:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:05.500 07:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:05.500 07:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:05.500 07:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:05.500 07:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:05.500 07:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:05.500 07:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:05.500 07:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:05.500 07:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:05.500 07:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:05.500 07:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:05.500 07:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:05.500 07:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:05.500 07:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.500 07:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:05.759 07:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:05.759 "name": "raid_bdev1", 00:28:05.759 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:28:05.759 "strip_size_kb": 0, 00:28:05.759 "state": "online", 00:28:05.759 "raid_level": "raid1", 00:28:05.759 "superblock": true, 00:28:05.759 "num_base_bdevs": 2, 00:28:05.759 "num_base_bdevs_discovered": 2, 00:28:05.759 "num_base_bdevs_operational": 2, 00:28:05.759 "base_bdevs_list": [ 00:28:05.759 { 00:28:05.759 "name": "spare", 00:28:05.759 "uuid": "2cd9640d-050e-5483-bca8-a879e6b3e520", 00:28:05.759 "is_configured": true, 00:28:05.759 "data_offset": 2048, 00:28:05.759 "data_size": 63488 00:28:05.759 }, 00:28:05.759 { 00:28:05.759 "name": "BaseBdev2", 00:28:05.759 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:28:05.759 "is_configured": true, 00:28:05.759 "data_offset": 2048, 00:28:05.759 "data_size": 63488 00:28:05.759 } 
00:28:05.759 ] 00:28:05.759 }' 00:28:05.759 07:38:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:05.759 07:38:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:06.328 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:06.328 [2024-07-12 07:38:40.193503] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:06.328 [2024-07-12 07:38:40.193731] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:06.587 00:28:06.587 Latency(us) 00:28:06.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.587 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:28:06.587 raid_bdev1 : 11.92 119.72 359.16 0.00 0.00 11625.23 300.37 111348.78 00:28:06.587 =================================================================================================================== 00:28:06.587 Total : 119.72 359.16 0.00 0.00 11625.23 300.37 111348.78 00:28:06.587 [2024-07-12 07:38:40.273607] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:06.587 [2024-07-12 07:38:40.273783] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:06.587 0 00:28:06.587 [2024-07-12 07:38:40.273905] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:06.587 [2024-07-12 07:38:40.273917] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:28:06.587 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:28:06.587 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.846 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:06.846 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:28:06.846 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:28:06.846 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:28:06.846 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:06.846 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:28:06.846 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:06.846 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:06.846 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:06.846 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:28:06.846 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:06.846 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:06.846 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:28:07.104 /dev/nbd0 00:28:07.104 07:38:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:07.104 1+0 records in 00:28:07.104 1+0 records out 00:28:07.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506695 s, 8.1 MB/s 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:07.104 07:38:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:28:07.363 /dev/nbd1 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:07.363 1+0 records in 00:28:07.363 1+0 records out 00:28:07.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00066527 s, 6.2 MB/s 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:07.363 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:07.621 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:07.621 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:07.621 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:07.621 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:07.621 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:07.621 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:07.880 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:28:07.880 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:07.880 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:07.880 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:07.880 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:07.880 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:07.880 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:28:07.880 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:07.880 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:08.139 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:08.139 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:08.139 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:08.139 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:08.139 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:08.139 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:08.139 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:28:08.139 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:08.139 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:28:08.139 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:08.139 07:38:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:08.398 [2024-07-12 07:38:42.232407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:08.398 [2024-07-12 07:38:42.232750] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:08.398 [2024-07-12 07:38:42.232827] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:28:08.398 [2024-07-12 07:38:42.233012] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:08.398 [2024-07-12 07:38:42.235503] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:08.398 [2024-07-12 07:38:42.235696] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:08.398 [2024-07-12 07:38:42.235883] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev spare 00:28:08.398 [2024-07-12 07:38:42.236024] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:08.398 [2024-07-12 07:38:42.236277] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:08.398 spare 00:28:08.398 07:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:08.398 07:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:08.398 07:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:08.398 07:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:08.398 07:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:08.398 07:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:08.398 07:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:08.398 07:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:08.398 07:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:08.398 07:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:08.398 07:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.398 07:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:08.657 [2024-07-12 07:38:42.336469] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:28:08.657 [2024-07-12 07:38:42.336742] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:08.657 [2024-07-12 07:38:42.336967] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000278c0 00:28:08.657 [2024-07-12 07:38:42.337564] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:28:08.657 [2024-07-12 07:38:42.337679] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:28:08.657 [2024-07-12 07:38:42.337894] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:08.657 07:38:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:08.657 "name": "raid_bdev1", 00:28:08.657 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:28:08.657 "strip_size_kb": 0, 00:28:08.657 "state": "online", 00:28:08.657 "raid_level": "raid1", 00:28:08.657 "superblock": true, 00:28:08.657 "num_base_bdevs": 2, 00:28:08.657 "num_base_bdevs_discovered": 2, 00:28:08.657 "num_base_bdevs_operational": 2, 00:28:08.657 "base_bdevs_list": [ 00:28:08.657 { 00:28:08.657 "name": "spare", 00:28:08.658 "uuid": "2cd9640d-050e-5483-bca8-a879e6b3e520", 00:28:08.658 "is_configured": true, 00:28:08.658 "data_offset": 2048, 00:28:08.658 "data_size": 63488 00:28:08.658 }, 00:28:08.658 { 00:28:08.658 "name": "BaseBdev2", 00:28:08.658 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:28:08.658 "is_configured": true, 00:28:08.658 "data_offset": 2048, 00:28:08.658 "data_size": 63488 00:28:08.658 } 00:28:08.658 ] 00:28:08.658 }' 00:28:08.658 07:38:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:08.658 07:38:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:09.604 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:09.604 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:09.604 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:09.604 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:09.604 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:09.604 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:09.604 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:09.604 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:09.604 "name": "raid_bdev1", 00:28:09.604 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:28:09.604 "strip_size_kb": 0, 00:28:09.604 "state": "online", 00:28:09.604 "raid_level": "raid1", 00:28:09.604 "superblock": true, 00:28:09.604 "num_base_bdevs": 2, 00:28:09.604 "num_base_bdevs_discovered": 2, 00:28:09.604 "num_base_bdevs_operational": 2, 00:28:09.604 "base_bdevs_list": [ 00:28:09.604 { 00:28:09.604 "name": "spare", 00:28:09.604 "uuid": "2cd9640d-050e-5483-bca8-a879e6b3e520", 00:28:09.604 "is_configured": true, 00:28:09.604 "data_offset": 2048, 00:28:09.604 "data_size": 63488 00:28:09.604 }, 00:28:09.604 { 00:28:09.604 "name": "BaseBdev2", 00:28:09.605 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:28:09.605 "is_configured": true, 00:28:09.605 "data_offset": 2048, 00:28:09.605 "data_size": 63488 00:28:09.605 } 00:28:09.605 ] 00:28:09.605 }' 00:28:09.605 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:09.605 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:09.605 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:09.605 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:09.605 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:09.605 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:09.863 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:28:09.863 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:10.121 [2024-07-12 07:38:43.982405] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:10.121 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:10.121 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:10.121 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:10.121 07:38:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:10.121 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:10.121 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:10.121 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:10.121 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:10.121 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:10.121 07:38:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:10.378 07:38:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:10.378 07:38:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:10.378 07:38:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:10.378 "name": "raid_bdev1", 00:28:10.378 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:28:10.378 "strip_size_kb": 0, 00:28:10.378 "state": "online", 00:28:10.378 "raid_level": "raid1", 00:28:10.378 "superblock": true, 00:28:10.378 "num_base_bdevs": 2, 00:28:10.378 "num_base_bdevs_discovered": 1, 00:28:10.378 "num_base_bdevs_operational": 1, 00:28:10.378 "base_bdevs_list": [ 00:28:10.378 { 00:28:10.378 "name": null, 00:28:10.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:10.378 "is_configured": false, 00:28:10.378 "data_offset": 2048, 00:28:10.378 "data_size": 63488 00:28:10.378 }, 00:28:10.378 { 00:28:10.378 "name": "BaseBdev2", 00:28:10.378 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:28:10.378 "is_configured": true, 00:28:10.378 "data_offset": 2048, 00:28:10.378 "data_size": 63488 00:28:10.378 } 00:28:10.378 ] 00:28:10.378 }' 00:28:10.378 07:38:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:10.378 07:38:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:10.981 07:38:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:11.257 [2024-07-12 07:38:44.946790] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:11.257 [2024-07-12 07:38:44.947247] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:11.257 [2024-07-12 07:38:44.947366] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
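This stretch is the hot-remove/re-add leg of the test: `bdev_raid_remove_base_bdev spare` detaches one raid1 mirror, `verify_raid_bdev_state raid_bdev1 online raid1 0 1` confirms the array stays online but degraded (note the `null` placeholder with the all-zero uuid in `base_bdevs_list`), and `bdev_raid_add_base_bdev raid_bdev1 spare` starts the rebuild just announced. A condensed sketch of the same sequence against the socket used throughout; the `jq -e` assertion is an illustrative stand-in for the helper's field-by-field checks:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    $rpc bdev_raid_remove_base_bdev spare          # hot-remove one raid1 leg

    # still online, but only 1 of the 2 base bdevs discovered/operational
    $rpc bdev_raid_get_bdevs all | jq -e '
        .[] | select(.name == "raid_bdev1") |
        .state == "online" and .num_base_bdevs_discovered == 1'

    $rpc bdev_raid_add_base_bdev raid_bdev1 spare  # re-add; a fresh rebuild starts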
00:28:11.257 [2024-07-12 07:38:44.947459] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:11.257 [2024-07-12 07:38:44.951995] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027a60 00:28:11.257 [2024-07-12 07:38:44.954160] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:11.257 07:38:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:28:12.203 07:38:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:12.203 07:38:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:12.203 07:38:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:12.203 07:38:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:12.203 07:38:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:12.203 07:38:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:12.203 07:38:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:12.462 07:38:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:12.462 "name": "raid_bdev1", 00:28:12.462 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:28:12.462 "strip_size_kb": 0, 00:28:12.462 "state": "online", 00:28:12.462 "raid_level": "raid1", 00:28:12.462 "superblock": true, 00:28:12.462 "num_base_bdevs": 2, 00:28:12.462 "num_base_bdevs_discovered": 2, 00:28:12.462 "num_base_bdevs_operational": 2, 00:28:12.462 "process": { 00:28:12.462 "type": "rebuild", 00:28:12.462 "target": "spare", 00:28:12.462 "progress": { 00:28:12.462 "blocks": 24576, 00:28:12.462 "percent": 38 00:28:12.462 } 00:28:12.462 }, 00:28:12.462 "base_bdevs_list": [ 00:28:12.462 { 00:28:12.462 "name": "spare", 00:28:12.462 "uuid": "2cd9640d-050e-5483-bca8-a879e6b3e520", 00:28:12.462 "is_configured": true, 00:28:12.462 "data_offset": 2048, 00:28:12.462 "data_size": 63488 00:28:12.462 }, 00:28:12.462 { 00:28:12.462 "name": "BaseBdev2", 00:28:12.462 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:28:12.462 "is_configured": true, 00:28:12.462 "data_offset": 2048, 00:28:12.462 "data_size": 63488 00:28:12.462 } 00:28:12.462 ] 00:28:12.462 }' 00:28:12.462 07:38:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:12.462 07:38:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:12.462 07:38:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:12.462 07:38:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:12.462 07:38:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:12.721 [2024-07-12 07:38:46.532480] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:12.721 [2024-07-12 07:38:46.563422] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:12.721 [2024-07-12 07:38:46.563626] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:28:12.721 [2024-07-12 07:38:46.563673] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:12.721 [2024-07-12 07:38:46.563749] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:12.721 07:38:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:12.721 07:38:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:12.721 07:38:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:12.721 07:38:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:12.721 07:38:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:12.721 07:38:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:12.721 07:38:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:12.721 07:38:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:12.721 07:38:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:12.721 07:38:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:12.721 07:38:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:12.721 07:38:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:12.980 07:38:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:12.980 "name": "raid_bdev1", 00:28:12.980 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:28:12.980 "strip_size_kb": 0, 00:28:12.980 "state": "online", 00:28:12.980 "raid_level": "raid1", 00:28:12.980 "superblock": true, 00:28:12.980 "num_base_bdevs": 2, 00:28:12.980 "num_base_bdevs_discovered": 1, 00:28:12.980 "num_base_bdevs_operational": 1, 00:28:12.980 "base_bdevs_list": [ 00:28:12.980 { 00:28:12.980 "name": null, 00:28:12.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:12.980 "is_configured": false, 00:28:12.980 "data_offset": 2048, 00:28:12.980 "data_size": 63488 00:28:12.980 }, 00:28:12.980 { 00:28:12.980 "name": "BaseBdev2", 00:28:12.980 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:28:12.980 "is_configured": true, 00:28:12.980 "data_offset": 2048, 00:28:12.980 "data_size": 63488 00:28:12.980 } 00:28:12.980 ] 00:28:12.980 }' 00:28:12.980 07:38:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:12.980 07:38:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:13.546 07:38:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:13.803 [2024-07-12 07:38:47.476674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:13.803 [2024-07-12 07:38:47.478117] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:13.803 [2024-07-12 07:38:47.478388] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:28:13.803 [2024-07-12 07:38:47.478503] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:13.803 [2024-07-12 07:38:47.478995] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:13.803 [2024-07-12 07:38:47.479151] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:13.803 [2024-07-12 07:38:47.479352] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:13.803 [2024-07-12 07:38:47.479450] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:13.803 [2024-07-12 07:38:47.479529] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:28:13.803 [2024-07-12 07:38:47.479626] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:13.803 [2024-07-12 07:38:47.484129] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:28:13.803 spare 00:28:13.803 [2024-07-12 07:38:47.486305] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:13.803 07:38:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:28:14.737 07:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:14.737 07:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:14.737 07:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:14.737 07:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:14.737 07:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:14.737 07:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:14.737 07:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:14.995 07:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:14.995 "name": "raid_bdev1", 00:28:14.995 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:28:14.995 "strip_size_kb": 0, 00:28:14.995 "state": "online", 00:28:14.995 "raid_level": "raid1", 00:28:14.995 "superblock": true, 00:28:14.995 "num_base_bdevs": 2, 00:28:14.995 "num_base_bdevs_discovered": 2, 00:28:14.995 "num_base_bdevs_operational": 2, 00:28:14.995 "process": { 00:28:14.995 "type": "rebuild", 00:28:14.995 "target": "spare", 00:28:14.995 "progress": { 00:28:14.995 "blocks": 24576, 00:28:14.995 "percent": 38 00:28:14.995 } 00:28:14.995 }, 00:28:14.995 "base_bdevs_list": [ 00:28:14.995 { 00:28:14.995 "name": "spare", 00:28:14.995 "uuid": "2cd9640d-050e-5483-bca8-a879e6b3e520", 00:28:14.995 "is_configured": true, 00:28:14.995 "data_offset": 2048, 00:28:14.995 "data_size": 63488 00:28:14.995 }, 00:28:14.995 { 00:28:14.995 "name": "BaseBdev2", 00:28:14.995 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:28:14.995 "is_configured": true, 00:28:14.995 "data_offset": 2048, 00:28:14.995 "data_size": 63488 00:28:14.995 } 00:28:14.995 ] 00:28:14.995 }' 00:28:14.995 07:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:14.995 07:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
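`verify_raid_bdev_process`, echoed at @182–@190 above, is the helper behind all of these repeated checks: it fetches the raid bdev's JSON once, then asserts `.process.type` and `.process.target` against the expected values, with jq's `// "none"` alternative supplying a default once a finished rebuild removes the `process` object altogether. A minimal standalone version of the check, assuming the same socket and bdev name as in the log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

    # after the rebuild completes .process disappears; '// "none"' avoids a bare "null"
    [[ $(jq -r '.process.type   // "none"' <<< "$info") == rebuild ]]
    [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]]

The caller wraps this in a `(( SECONDS < timeout ))` loop with `sleep 1` (see @705–@710 earlier), which is why the same block repeats with a rising `progress.blocks` count until the type flips to `none`.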
00:28:14.995 07:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:14.995 07:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:14.995 07:38:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:15.252 [2024-07-12 07:38:49.116806] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:15.509 [2024-07-12 07:38:49.195766] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:15.509 [2024-07-12 07:38:49.196015] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:15.509 [2024-07-12 07:38:49.196065] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:15.509 [2024-07-12 07:38:49.196143] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:15.509 07:38:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:15.509 07:38:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:15.509 07:38:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:15.509 07:38:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:15.509 07:38:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:15.509 07:38:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:15.509 07:38:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:15.509 07:38:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:15.509 07:38:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:15.509 07:38:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:15.509 07:38:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.509 07:38:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:15.765 07:38:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:15.765 "name": "raid_bdev1", 00:28:15.765 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:28:15.765 "strip_size_kb": 0, 00:28:15.765 "state": "online", 00:28:15.765 "raid_level": "raid1", 00:28:15.765 "superblock": true, 00:28:15.765 "num_base_bdevs": 2, 00:28:15.765 "num_base_bdevs_discovered": 1, 00:28:15.765 "num_base_bdevs_operational": 1, 00:28:15.765 "base_bdevs_list": [ 00:28:15.765 { 00:28:15.765 "name": null, 00:28:15.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:15.765 "is_configured": false, 00:28:15.765 "data_offset": 2048, 00:28:15.765 "data_size": 63488 00:28:15.765 }, 00:28:15.765 { 00:28:15.765 "name": "BaseBdev2", 00:28:15.765 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:28:15.765 "is_configured": true, 00:28:15.765 "data_offset": 2048, 00:28:15.765 "data_size": 63488 00:28:15.765 } 00:28:15.765 ] 00:28:15.765 }' 00:28:15.765 07:38:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:28:15.765 07:38:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:16.331 07:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:16.331 07:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:16.331 07:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:16.331 07:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:16.331 07:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:16.331 07:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.331 07:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.589 07:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:16.589 "name": "raid_bdev1", 00:28:16.589 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:28:16.589 "strip_size_kb": 0, 00:28:16.589 "state": "online", 00:28:16.589 "raid_level": "raid1", 00:28:16.589 "superblock": true, 00:28:16.589 "num_base_bdevs": 2, 00:28:16.589 "num_base_bdevs_discovered": 1, 00:28:16.589 "num_base_bdevs_operational": 1, 00:28:16.589 "base_bdevs_list": [ 00:28:16.589 { 00:28:16.589 "name": null, 00:28:16.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.589 "is_configured": false, 00:28:16.589 "data_offset": 2048, 00:28:16.589 "data_size": 63488 00:28:16.589 }, 00:28:16.589 { 00:28:16.589 "name": "BaseBdev2", 00:28:16.589 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:28:16.589 "is_configured": true, 00:28:16.589 "data_offset": 2048, 00:28:16.589 "data_size": 63488 00:28:16.589 } 00:28:16.589 ] 00:28:16.589 }' 00:28:16.589 07:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:16.589 07:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:16.589 07:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:16.589 07:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:16.589 07:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:16.847 07:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:17.105 [2024-07-12 07:38:50.871565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:17.105 [2024-07-12 07:38:50.871934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:17.105 [2024-07-12 07:38:50.872025] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:17.105 [2024-07-12 07:38:50.872144] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:17.105 [2024-07-12 07:38:50.872619] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:17.105 [2024-07-12 07:38:50.872767] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:28:17.105 [2024-07-12 07:38:50.872940] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:17.105 [2024-07-12 07:38:50.873024] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:17.105 [2024-07-12 07:38:50.873091] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:17.105 BaseBdev1 00:28:17.105 07:38:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:28:18.037 07:38:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:18.037 07:38:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:18.037 07:38:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:18.037 07:38:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:18.037 07:38:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:18.037 07:38:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:18.037 07:38:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:18.037 07:38:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:18.037 07:38:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:18.037 07:38:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:18.037 07:38:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:18.037 07:38:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:18.295 07:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:18.295 "name": "raid_bdev1", 00:28:18.295 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:28:18.295 "strip_size_kb": 0, 00:28:18.295 "state": "online", 00:28:18.295 "raid_level": "raid1", 00:28:18.295 "superblock": true, 00:28:18.295 "num_base_bdevs": 2, 00:28:18.295 "num_base_bdevs_discovered": 1, 00:28:18.295 "num_base_bdevs_operational": 1, 00:28:18.295 "base_bdevs_list": [ 00:28:18.295 { 00:28:18.295 "name": null, 00:28:18.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:18.295 "is_configured": false, 00:28:18.295 "data_offset": 2048, 00:28:18.295 "data_size": 63488 00:28:18.295 }, 00:28:18.295 { 00:28:18.295 "name": "BaseBdev2", 00:28:18.295 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:28:18.295 "is_configured": true, 00:28:18.295 "data_offset": 2048, 00:28:18.295 "data_size": 63488 00:28:18.295 } 00:28:18.295 ] 00:28:18.295 }' 00:28:18.295 07:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:18.295 07:38:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:18.860 07:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:18.860 07:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:18.860 07:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:28:18.860 07:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:18.860 07:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:18.860 07:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:18.860 07:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.118 07:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:19.118 "name": "raid_bdev1", 00:28:19.118 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:28:19.118 "strip_size_kb": 0, 00:28:19.118 "state": "online", 00:28:19.118 "raid_level": "raid1", 00:28:19.118 "superblock": true, 00:28:19.118 "num_base_bdevs": 2, 00:28:19.118 "num_base_bdevs_discovered": 1, 00:28:19.118 "num_base_bdevs_operational": 1, 00:28:19.118 "base_bdevs_list": [ 00:28:19.118 { 00:28:19.118 "name": null, 00:28:19.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.118 "is_configured": false, 00:28:19.118 "data_offset": 2048, 00:28:19.118 "data_size": 63488 00:28:19.118 }, 00:28:19.118 { 00:28:19.118 "name": "BaseBdev2", 00:28:19.118 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:28:19.118 "is_configured": true, 00:28:19.118 "data_offset": 2048, 00:28:19.118 "data_size": 63488 00:28:19.118 } 00:28:19.118 ] 00:28:19.118 }' 00:28:19.118 07:38:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:19.375 07:38:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:19.375 07:38:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:19.375 07:38:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:19.375 07:38:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:19.375 07:38:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:28:19.375 07:38:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:19.375 07:38:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:19.375 07:38:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:19.375 07:38:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:19.375 07:38:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:19.375 07:38:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:19.375 07:38:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:19.375 07:38:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:19.375 07:38:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:19.375 07:38:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:19.631 [2024-07-12 07:38:53.260221] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:19.631 [2024-07-12 07:38:53.260633] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:19.631 [2024-07-12 07:38:53.260743] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:19.631 request: 00:28:19.631 { 00:28:19.631 "raid_bdev": "raid_bdev1", 00:28:19.631 "base_bdev": "BaseBdev1", 00:28:19.631 "method": "bdev_raid_add_base_bdev", 00:28:19.631 "req_id": 1 00:28:19.631 } 00:28:19.631 Got JSON-RPC error response 00:28:19.631 response: 00:28:19.631 { 00:28:19.631 "code": -22, 00:28:19.631 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:19.631 } 00:28:19.631 07:38:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:28:19.631 07:38:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:19.631 07:38:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:19.631 07:38:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:19.631 07:38:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:28:20.564 07:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:20.564 07:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:20.564 07:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:20.564 07:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:20.564 07:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:20.564 07:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:20.564 07:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:20.564 07:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:20.564 07:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:20.564 07:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:20.564 07:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.564 07:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:20.822 07:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:20.822 "name": "raid_bdev1", 00:28:20.822 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:28:20.822 "strip_size_kb": 0, 00:28:20.822 "state": "online", 00:28:20.822 "raid_level": "raid1", 00:28:20.822 "superblock": true, 00:28:20.822 "num_base_bdevs": 2, 00:28:20.822 "num_base_bdevs_discovered": 1, 00:28:20.822 "num_base_bdevs_operational": 1, 00:28:20.822 
"base_bdevs_list": [ 00:28:20.822 { 00:28:20.822 "name": null, 00:28:20.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:20.822 "is_configured": false, 00:28:20.822 "data_offset": 2048, 00:28:20.822 "data_size": 63488 00:28:20.822 }, 00:28:20.822 { 00:28:20.822 "name": "BaseBdev2", 00:28:20.822 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:28:20.822 "is_configured": true, 00:28:20.822 "data_offset": 2048, 00:28:20.822 "data_size": 63488 00:28:20.822 } 00:28:20.822 ] 00:28:20.822 }' 00:28:20.822 07:38:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:20.822 07:38:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:21.387 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:21.387 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:21.387 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:21.387 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:21.387 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:21.387 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:21.387 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.645 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:21.645 "name": "raid_bdev1", 00:28:21.645 "uuid": "15e35040-00a0-45e6-92e2-53abf4ae8993", 00:28:21.645 "strip_size_kb": 0, 00:28:21.645 "state": "online", 00:28:21.645 "raid_level": "raid1", 00:28:21.645 "superblock": true, 00:28:21.645 "num_base_bdevs": 2, 00:28:21.645 "num_base_bdevs_discovered": 1, 00:28:21.645 "num_base_bdevs_operational": 1, 00:28:21.645 "base_bdevs_list": [ 00:28:21.645 { 00:28:21.645 "name": null, 00:28:21.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.645 "is_configured": false, 00:28:21.645 "data_offset": 2048, 00:28:21.645 "data_size": 63488 00:28:21.645 }, 00:28:21.645 { 00:28:21.645 "name": "BaseBdev2", 00:28:21.645 "uuid": "f517e586-8990-5105-aceb-424500135c77", 00:28:21.645 "is_configured": true, 00:28:21.645 "data_offset": 2048, 00:28:21.645 "data_size": 63488 00:28:21.645 } 00:28:21.645 ] 00:28:21.645 }' 00:28:21.645 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:21.645 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:21.645 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:21.645 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:21.645 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 155723 00:28:21.645 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@946 -- # '[' -z 155723 ']' 00:28:21.645 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # kill -0 155723 00:28:21.645 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # uname 00:28:21.645 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
00:28:21.645 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 155723 00:28:21.645 killing process with pid 155723 00:28:21.645 Received shutdown signal, test time was about 27.122931 seconds 00:28:21.645 00:28:21.645 Latency(us) 00:28:21.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.645 =================================================================================================================== 00:28:21.645 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:21.645 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:21.645 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:21.645 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # echo 'killing process with pid 155723' 00:28:21.645 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@965 -- # kill 155723 00:28:21.645 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # wait 155723 00:28:21.645 [2024-07-12 07:38:55.473033] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:21.645 [2024-07-12 07:38:55.473175] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:21.646 [2024-07-12 07:38:55.473232] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:21.646 [2024-07-12 07:38:55.473242] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:28:21.646 [2024-07-12 07:38:55.499591] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:21.903 ************************************ 00:28:21.903 END TEST raid_rebuild_test_sb_io 00:28:21.903 ************************************ 00:28:21.903 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:28:21.903 00:28:21.903 real 0m31.358s 00:28:21.903 user 0m49.117s 00:28:21.903 sys 0m4.203s 00:28:21.903 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:21.903 07:38:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:22.162 07:38:55 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:28:22.162 07:38:55 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:28:22.162 07:38:55 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:28:22.162 07:38:55 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:22.162 07:38:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:22.162 ************************************ 00:28:22.162 START TEST raid_rebuild_test 00:28:22.162 ************************************ 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 4 false false true 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 
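A quick map of the run_test invocation above onto the locals being declared here (the positional binding is presumed from the trace; this recaps the log rather than adding new test code):

  # raid_rebuild_test <raid_level> <num_base_bdevs> <superblock> <background_io> <verify>
  # raid_rebuild_test  raid1        4                false        false           true
  raid_level=$1       # raid1 -> mirroring, so strip_size is forced to 0 below
  num_base_bdevs=$2   # 4     -> BaseBdev1..BaseBdev4, enumerated by the loop that follows
  superblock=$3       # false -> no on-disk raid superblock, data_offset stays 0
  background_io=$4    # false -> no concurrent I/O while the rebuild runs
  verify=$5           # true  -> presumably the raid data is checked after the rebuild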
00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=156593 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 156593 /var/tmp/spdk-raid.sock 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@827 -- # '[' -z 156593 ']' 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:22.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
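The trace that follows builds a four-branch stack before assembling the array: each base device is a 32 MiB, 512-byte-block malloc bdev wrapped in a claimable passthru bdev. Condensed from the rpc.py calls below (the loop is an editorial condensation; the script issues these calls one bdev at a time):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3 4; do
      $rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"            # RAM-backed base
      $rpc bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev$i" # claimable wrapper
  done
  $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1

32 MiB at a 512-byte block size works out to the blockcnt 65536 reported when raid_bdev1 comes online below.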
00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:22.162 07:38:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:22.162 [2024-07-12 07:38:55.919087] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:22.162 [2024-07-12 07:38:55.919805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156593 ] 00:28:22.162 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:22.162 Zero copy mechanism will not be used. 00:28:22.420 [2024-07-12 07:38:56.070630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.420 [2024-07-12 07:38:56.117957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.420 [2024-07-12 07:38:56.160220] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:22.986 07:38:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:22.986 07:38:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # return 0 00:28:22.986 07:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:22.986 07:38:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:23.244 BaseBdev1_malloc 00:28:23.244 07:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:23.502 [2024-07-12 07:38:57.255417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:23.502 [2024-07-12 07:38:57.255770] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:23.502 [2024-07-12 07:38:57.255907] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:28:23.502 [2024-07-12 07:38:57.256026] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:23.502 [2024-07-12 07:38:57.258672] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:23.502 [2024-07-12 07:38:57.258847] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:23.502 BaseBdev1 00:28:23.502 07:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:23.502 07:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:23.760 BaseBdev2_malloc 00:28:23.760 07:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:24.018 [2024-07-12 07:38:57.656705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:24.018 [2024-07-12 07:38:57.656995] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:24.018 [2024-07-12 07:38:57.657073] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:28:24.018 [2024-07-12 07:38:57.657289] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:24.018 [2024-07-12 07:38:57.659785] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:24.018 [2024-07-12 07:38:57.659942] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:24.018 BaseBdev2 00:28:24.018 07:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:24.018 07:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:24.018 BaseBdev3_malloc 00:28:24.277 07:38:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:28:24.277 [2024-07-12 07:38:58.080551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:28:24.277 [2024-07-12 07:38:58.080845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:24.277 [2024-07-12 07:38:58.080967] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:24.277 [2024-07-12 07:38:58.081098] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:24.277 [2024-07-12 07:38:58.083504] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:24.277 [2024-07-12 07:38:58.083683] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:24.277 BaseBdev3 00:28:24.277 07:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:24.277 07:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:24.535 BaseBdev4_malloc 00:28:24.535 07:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:28:24.794 [2024-07-12 07:38:58.465633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:28:24.794 [2024-07-12 07:38:58.465985] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:24.794 [2024-07-12 07:38:58.466059] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:24.794 [2024-07-12 07:38:58.466253] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:24.794 [2024-07-12 07:38:58.468658] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:24.794 [2024-07-12 07:38:58.468839] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:24.794 BaseBdev4 00:28:24.794 07:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:24.794 spare_malloc 00:28:25.052 07:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:25.052 spare_delay 00:28:25.052 07:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:25.312 [2024-07-12 07:38:59.058777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:25.312 [2024-07-12 07:38:59.059105] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:25.312 [2024-07-12 07:38:59.059177] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:28:25.312 [2024-07-12 07:38:59.059278] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:25.312 [2024-07-12 07:38:59.061708] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:25.312 [2024-07-12 07:38:59.061900] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:25.312 spare 00:28:25.312 07:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:28:25.572 [2024-07-12 07:38:59.250874] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:25.572 [2024-07-12 07:38:59.253195] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:25.572 [2024-07-12 07:38:59.253405] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:25.572 [2024-07-12 07:38:59.253483] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:25.572 [2024-07-12 07:38:59.253712] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:28:25.572 [2024-07-12 07:38:59.253840] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:25.572 [2024-07-12 07:38:59.254056] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:28:25.572 [2024-07-12 07:38:59.254527] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:28:25.572 [2024-07-12 07:38:59.254625] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:28:25.572 [2024-07-12 07:38:59.254899] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:25.572 07:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:25.572 07:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:25.572 07:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:25.572 07:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:25.572 07:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:25.572 07:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:25.572 07:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:25.572 07:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:25.572 07:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:25.572 07:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:25.572 07:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:28:25.572 07:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:25.572 07:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:25.572 "name": "raid_bdev1", 00:28:25.572 "uuid": "806e0f03-fe4d-4657-8e9c-9d2527991b36", 00:28:25.572 "strip_size_kb": 0, 00:28:25.572 "state": "online", 00:28:25.572 "raid_level": "raid1", 00:28:25.572 "superblock": false, 00:28:25.572 "num_base_bdevs": 4, 00:28:25.572 "num_base_bdevs_discovered": 4, 00:28:25.572 "num_base_bdevs_operational": 4, 00:28:25.572 "base_bdevs_list": [ 00:28:25.572 { 00:28:25.572 "name": "BaseBdev1", 00:28:25.572 "uuid": "846f308c-5c54-5161-9b1e-ef45eba28ec1", 00:28:25.572 "is_configured": true, 00:28:25.572 "data_offset": 0, 00:28:25.572 "data_size": 65536 00:28:25.572 }, 00:28:25.572 { 00:28:25.572 "name": "BaseBdev2", 00:28:25.572 "uuid": "6c83fbad-45f4-517e-b0b1-cddfcb691ccf", 00:28:25.572 "is_configured": true, 00:28:25.572 "data_offset": 0, 00:28:25.572 "data_size": 65536 00:28:25.572 }, 00:28:25.572 { 00:28:25.572 "name": "BaseBdev3", 00:28:25.572 "uuid": "5d9a1e49-c557-5082-b8e4-c733226ab4c3", 00:28:25.572 "is_configured": true, 00:28:25.572 "data_offset": 0, 00:28:25.572 "data_size": 65536 00:28:25.572 }, 00:28:25.572 { 00:28:25.572 "name": "BaseBdev4", 00:28:25.572 "uuid": "2bd46957-e69a-55ff-b18f-4d45f71f8d29", 00:28:25.572 "is_configured": true, 00:28:25.572 "data_offset": 0, 00:28:25.572 "data_size": 65536 00:28:25.572 } 00:28:25.572 ] 00:28:25.572 }' 00:28:25.572 07:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:25.572 07:38:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.509 07:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:26.509 07:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:28:26.509 [2024-07-12 07:39:00.267276] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:26.509 07:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:28:26.509 07:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:26.509 07:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:26.767 07:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:28:26.767 07:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:28:26.767 07:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:28:26.767 07:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:28:26.767 07:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:28:26.767 07:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:26.767 07:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:26.767 07:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:26.767 07:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:26.767 07:39:00 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:26.767 07:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:26.767 07:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:26.767 07:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:26.767 07:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:27.026 [2024-07-12 07:39:00.683163] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:28:27.026 /dev/nbd0 00:28:27.026 07:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:27.026 07:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:27.026 07:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:28:27.026 07:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:28:27.026 07:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:28:27.026 07:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:28:27.026 07:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:28:27.026 07:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:28:27.026 07:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:28:27.026 07:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:28:27.026 07:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:27.026 1+0 records in 00:28:27.026 1+0 records out 00:28:27.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042163 s, 9.7 MB/s 00:28:27.026 07:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:27.026 07:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:28:27.026 07:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:27.026 07:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:28:27.026 07:39:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:28:27.026 07:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:27.026 07:39:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:27.026 07:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:28:27.026 07:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:28:27.026 07:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:28:33.617 65536+0 records in 00:28:33.617 65536+0 records out 00:28:33.617 33554432 bytes (34 MB, 32 MiB) copied, 5.52831 s, 6.1 MB/s 00:28:33.617 07:39:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:33.617 07:39:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:33.617 07:39:06 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:33.617 07:39:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:33.617 07:39:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:28:33.617 07:39:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:33.617 07:39:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:33.617 07:39:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:33.617 [2024-07-12 07:39:06.490903] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:33.617 07:39:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:33.617 07:39:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:33.617 07:39:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:33.617 07:39:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:33.617 07:39:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:33.617 07:39:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:33.617 07:39:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:33.617 07:39:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:33.617 [2024-07-12 07:39:06.742542] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:33.617 07:39:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:33.617 07:39:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:33.618 07:39:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:33.618 07:39:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:33.618 07:39:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:33.618 07:39:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:33.618 07:39:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:33.618 07:39:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:33.618 07:39:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:33.618 07:39:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:33.618 07:39:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:33.618 07:39:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:33.618 07:39:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:33.618 "name": "raid_bdev1", 00:28:33.618 "uuid": "806e0f03-fe4d-4657-8e9c-9d2527991b36", 00:28:33.618 "strip_size_kb": 0, 00:28:33.618 "state": "online", 00:28:33.618 "raid_level": "raid1", 00:28:33.618 "superblock": false, 00:28:33.618 "num_base_bdevs": 4, 00:28:33.618 "num_base_bdevs_discovered": 3, 00:28:33.618 "num_base_bdevs_operational": 3, 
00:28:33.618 "base_bdevs_list": [ 00:28:33.618 { 00:28:33.618 "name": null, 00:28:33.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:33.618 "is_configured": false, 00:28:33.618 "data_offset": 0, 00:28:33.618 "data_size": 65536 00:28:33.618 }, 00:28:33.618 { 00:28:33.618 "name": "BaseBdev2", 00:28:33.618 "uuid": "6c83fbad-45f4-517e-b0b1-cddfcb691ccf", 00:28:33.618 "is_configured": true, 00:28:33.618 "data_offset": 0, 00:28:33.618 "data_size": 65536 00:28:33.618 }, 00:28:33.618 { 00:28:33.618 "name": "BaseBdev3", 00:28:33.618 "uuid": "5d9a1e49-c557-5082-b8e4-c733226ab4c3", 00:28:33.618 "is_configured": true, 00:28:33.618 "data_offset": 0, 00:28:33.618 "data_size": 65536 00:28:33.618 }, 00:28:33.618 { 00:28:33.618 "name": "BaseBdev4", 00:28:33.618 "uuid": "2bd46957-e69a-55ff-b18f-4d45f71f8d29", 00:28:33.618 "is_configured": true, 00:28:33.618 "data_offset": 0, 00:28:33.618 "data_size": 65536 00:28:33.618 } 00:28:33.618 ] 00:28:33.618 }' 00:28:33.618 07:39:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:33.618 07:39:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.876 07:39:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:33.876 [2024-07-12 07:39:07.726713] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:33.876 [2024-07-12 07:39:07.730304] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d063c0 00:28:33.876 [2024-07-12 07:39:07.732681] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:33.876 07:39:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:28:35.306 07:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:35.306 07:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:35.306 07:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:35.306 07:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:35.306 07:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:35.306 07:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.306 07:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.306 07:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:35.306 "name": "raid_bdev1", 00:28:35.306 "uuid": "806e0f03-fe4d-4657-8e9c-9d2527991b36", 00:28:35.306 "strip_size_kb": 0, 00:28:35.306 "state": "online", 00:28:35.306 "raid_level": "raid1", 00:28:35.306 "superblock": false, 00:28:35.306 "num_base_bdevs": 4, 00:28:35.306 "num_base_bdevs_discovered": 4, 00:28:35.306 "num_base_bdevs_operational": 4, 00:28:35.306 "process": { 00:28:35.306 "type": "rebuild", 00:28:35.306 "target": "spare", 00:28:35.306 "progress": { 00:28:35.306 "blocks": 24576, 00:28:35.306 "percent": 37 00:28:35.306 } 00:28:35.306 }, 00:28:35.306 "base_bdevs_list": [ 00:28:35.306 { 00:28:35.306 "name": "spare", 00:28:35.306 "uuid": "b03977da-c052-5da1-af71-cb67883d8f28", 00:28:35.306 "is_configured": true, 00:28:35.306 "data_offset": 0, 00:28:35.306 "data_size": 
65536 00:28:35.306 }, 00:28:35.306 { 00:28:35.306 "name": "BaseBdev2", 00:28:35.306 "uuid": "6c83fbad-45f4-517e-b0b1-cddfcb691ccf", 00:28:35.306 "is_configured": true, 00:28:35.306 "data_offset": 0, 00:28:35.306 "data_size": 65536 00:28:35.306 }, 00:28:35.306 { 00:28:35.306 "name": "BaseBdev3", 00:28:35.306 "uuid": "5d9a1e49-c557-5082-b8e4-c733226ab4c3", 00:28:35.306 "is_configured": true, 00:28:35.306 "data_offset": 0, 00:28:35.306 "data_size": 65536 00:28:35.306 }, 00:28:35.306 { 00:28:35.306 "name": "BaseBdev4", 00:28:35.306 "uuid": "2bd46957-e69a-55ff-b18f-4d45f71f8d29", 00:28:35.306 "is_configured": true, 00:28:35.306 "data_offset": 0, 00:28:35.306 "data_size": 65536 00:28:35.306 } 00:28:35.306 ] 00:28:35.306 }' 00:28:35.306 07:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:35.306 07:39:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:35.306 07:39:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:35.306 07:39:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:35.306 07:39:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:35.565 [2024-07-12 07:39:09.330375] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:35.565 [2024-07-12 07:39:09.342358] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:35.565 [2024-07-12 07:39:09.342552] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:35.565 [2024-07-12 07:39:09.342600] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:35.565 [2024-07-12 07:39:09.342682] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:35.565 07:39:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:35.565 07:39:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:35.565 07:39:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:35.565 07:39:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:35.565 07:39:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:35.565 07:39:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:35.565 07:39:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:35.565 07:39:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:35.565 07:39:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:35.565 07:39:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:35.565 07:39:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.565 07:39:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.824 07:39:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:35.824 "name": "raid_bdev1", 00:28:35.824 "uuid": "806e0f03-fe4d-4657-8e9c-9d2527991b36", 
00:28:35.824 "strip_size_kb": 0, 00:28:35.824 "state": "online", 00:28:35.824 "raid_level": "raid1", 00:28:35.824 "superblock": false, 00:28:35.824 "num_base_bdevs": 4, 00:28:35.824 "num_base_bdevs_discovered": 3, 00:28:35.824 "num_base_bdevs_operational": 3, 00:28:35.824 "base_bdevs_list": [ 00:28:35.824 { 00:28:35.824 "name": null, 00:28:35.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:35.824 "is_configured": false, 00:28:35.824 "data_offset": 0, 00:28:35.824 "data_size": 65536 00:28:35.824 }, 00:28:35.824 { 00:28:35.824 "name": "BaseBdev2", 00:28:35.824 "uuid": "6c83fbad-45f4-517e-b0b1-cddfcb691ccf", 00:28:35.824 "is_configured": true, 00:28:35.824 "data_offset": 0, 00:28:35.824 "data_size": 65536 00:28:35.824 }, 00:28:35.824 { 00:28:35.824 "name": "BaseBdev3", 00:28:35.824 "uuid": "5d9a1e49-c557-5082-b8e4-c733226ab4c3", 00:28:35.824 "is_configured": true, 00:28:35.824 "data_offset": 0, 00:28:35.824 "data_size": 65536 00:28:35.824 }, 00:28:35.824 { 00:28:35.824 "name": "BaseBdev4", 00:28:35.824 "uuid": "2bd46957-e69a-55ff-b18f-4d45f71f8d29", 00:28:35.824 "is_configured": true, 00:28:35.824 "data_offset": 0, 00:28:35.824 "data_size": 65536 00:28:35.824 } 00:28:35.824 ] 00:28:35.824 }' 00:28:35.824 07:39:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:35.824 07:39:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:36.393 07:39:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:36.393 07:39:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:36.393 07:39:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:36.393 07:39:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:36.393 07:39:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:36.393 07:39:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:36.393 07:39:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:36.653 07:39:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:36.653 "name": "raid_bdev1", 00:28:36.653 "uuid": "806e0f03-fe4d-4657-8e9c-9d2527991b36", 00:28:36.653 "strip_size_kb": 0, 00:28:36.653 "state": "online", 00:28:36.653 "raid_level": "raid1", 00:28:36.653 "superblock": false, 00:28:36.653 "num_base_bdevs": 4, 00:28:36.653 "num_base_bdevs_discovered": 3, 00:28:36.653 "num_base_bdevs_operational": 3, 00:28:36.653 "base_bdevs_list": [ 00:28:36.653 { 00:28:36.653 "name": null, 00:28:36.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:36.653 "is_configured": false, 00:28:36.653 "data_offset": 0, 00:28:36.653 "data_size": 65536 00:28:36.653 }, 00:28:36.653 { 00:28:36.653 "name": "BaseBdev2", 00:28:36.653 "uuid": "6c83fbad-45f4-517e-b0b1-cddfcb691ccf", 00:28:36.653 "is_configured": true, 00:28:36.653 "data_offset": 0, 00:28:36.653 "data_size": 65536 00:28:36.653 }, 00:28:36.653 { 00:28:36.653 "name": "BaseBdev3", 00:28:36.653 "uuid": "5d9a1e49-c557-5082-b8e4-c733226ab4c3", 00:28:36.653 "is_configured": true, 00:28:36.653 "data_offset": 0, 00:28:36.653 "data_size": 65536 00:28:36.653 }, 00:28:36.653 { 00:28:36.653 "name": "BaseBdev4", 00:28:36.653 "uuid": "2bd46957-e69a-55ff-b18f-4d45f71f8d29", 00:28:36.653 "is_configured": true, 
00:28:36.653 "data_offset": 0, 00:28:36.653 "data_size": 65536 00:28:36.653 } 00:28:36.653 ] 00:28:36.653 }' 00:28:36.653 07:39:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:36.653 07:39:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:36.653 07:39:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:36.653 07:39:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:36.653 07:39:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:36.913 [2024-07-12 07:39:10.701604] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:36.913 [2024-07-12 07:39:10.704957] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06560 00:28:36.913 [2024-07-12 07:39:10.707164] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:36.913 07:39:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:37.851 07:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:37.851 07:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:37.851 07:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:37.851 07:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:37.851 07:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:38.118 07:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.118 07:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.118 07:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:38.118 "name": "raid_bdev1", 00:28:38.118 "uuid": "806e0f03-fe4d-4657-8e9c-9d2527991b36", 00:28:38.118 "strip_size_kb": 0, 00:28:38.118 "state": "online", 00:28:38.118 "raid_level": "raid1", 00:28:38.118 "superblock": false, 00:28:38.118 "num_base_bdevs": 4, 00:28:38.118 "num_base_bdevs_discovered": 4, 00:28:38.118 "num_base_bdevs_operational": 4, 00:28:38.118 "process": { 00:28:38.118 "type": "rebuild", 00:28:38.118 "target": "spare", 00:28:38.118 "progress": { 00:28:38.118 "blocks": 24576, 00:28:38.118 "percent": 37 00:28:38.118 } 00:28:38.118 }, 00:28:38.118 "base_bdevs_list": [ 00:28:38.118 { 00:28:38.118 "name": "spare", 00:28:38.118 "uuid": "b03977da-c052-5da1-af71-cb67883d8f28", 00:28:38.118 "is_configured": true, 00:28:38.118 "data_offset": 0, 00:28:38.118 "data_size": 65536 00:28:38.118 }, 00:28:38.118 { 00:28:38.118 "name": "BaseBdev2", 00:28:38.118 "uuid": "6c83fbad-45f4-517e-b0b1-cddfcb691ccf", 00:28:38.118 "is_configured": true, 00:28:38.118 "data_offset": 0, 00:28:38.118 "data_size": 65536 00:28:38.118 }, 00:28:38.118 { 00:28:38.118 "name": "BaseBdev3", 00:28:38.118 "uuid": "5d9a1e49-c557-5082-b8e4-c733226ab4c3", 00:28:38.118 "is_configured": true, 00:28:38.118 "data_offset": 0, 00:28:38.118 "data_size": 65536 00:28:38.118 }, 00:28:38.118 { 00:28:38.118 "name": "BaseBdev4", 00:28:38.118 "uuid": "2bd46957-e69a-55ff-b18f-4d45f71f8d29", 00:28:38.118 
"is_configured": true, 00:28:38.118 "data_offset": 0, 00:28:38.118 "data_size": 65536 00:28:38.118 } 00:28:38.118 ] 00:28:38.118 }' 00:28:38.118 07:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:38.386 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:38.386 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:38.386 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:38.386 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:28:38.386 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:28:38.386 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:38.386 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:28:38.386 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:28:38.644 [2024-07-12 07:39:12.340257] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:38.644 [2024-07-12 07:39:12.415853] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d06560 00:28:38.644 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:28:38.644 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:28:38.644 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:38.644 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:38.644 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:38.644 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:38.644 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:38.644 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.644 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.903 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:38.903 "name": "raid_bdev1", 00:28:38.903 "uuid": "806e0f03-fe4d-4657-8e9c-9d2527991b36", 00:28:38.903 "strip_size_kb": 0, 00:28:38.903 "state": "online", 00:28:38.903 "raid_level": "raid1", 00:28:38.903 "superblock": false, 00:28:38.903 "num_base_bdevs": 4, 00:28:38.903 "num_base_bdevs_discovered": 3, 00:28:38.903 "num_base_bdevs_operational": 3, 00:28:38.903 "process": { 00:28:38.903 "type": "rebuild", 00:28:38.903 "target": "spare", 00:28:38.903 "progress": { 00:28:38.903 "blocks": 38912, 00:28:38.903 "percent": 59 00:28:38.903 } 00:28:38.903 }, 00:28:38.903 "base_bdevs_list": [ 00:28:38.903 { 00:28:38.903 "name": "spare", 00:28:38.903 "uuid": "b03977da-c052-5da1-af71-cb67883d8f28", 00:28:38.903 "is_configured": true, 00:28:38.903 "data_offset": 0, 00:28:38.903 "data_size": 65536 00:28:38.903 }, 00:28:38.903 { 00:28:38.903 "name": null, 00:28:38.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:38.903 "is_configured": false, 00:28:38.903 
"data_offset": 0, 00:28:38.903 "data_size": 65536 00:28:38.903 }, 00:28:38.903 { 00:28:38.903 "name": "BaseBdev3", 00:28:38.903 "uuid": "5d9a1e49-c557-5082-b8e4-c733226ab4c3", 00:28:38.903 "is_configured": true, 00:28:38.903 "data_offset": 0, 00:28:38.903 "data_size": 65536 00:28:38.903 }, 00:28:38.903 { 00:28:38.903 "name": "BaseBdev4", 00:28:38.903 "uuid": "2bd46957-e69a-55ff-b18f-4d45f71f8d29", 00:28:38.903 "is_configured": true, 00:28:38.903 "data_offset": 0, 00:28:38.903 "data_size": 65536 00:28:38.903 } 00:28:38.903 ] 00:28:38.903 }' 00:28:38.903 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:38.903 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:38.903 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:38.903 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:38.903 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=863 00:28:38.903 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:38.903 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:38.903 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:38.903 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:38.903 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:38.903 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:38.903 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.903 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:39.162 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:39.162 "name": "raid_bdev1", 00:28:39.162 "uuid": "806e0f03-fe4d-4657-8e9c-9d2527991b36", 00:28:39.162 "strip_size_kb": 0, 00:28:39.162 "state": "online", 00:28:39.162 "raid_level": "raid1", 00:28:39.162 "superblock": false, 00:28:39.162 "num_base_bdevs": 4, 00:28:39.162 "num_base_bdevs_discovered": 3, 00:28:39.162 "num_base_bdevs_operational": 3, 00:28:39.162 "process": { 00:28:39.162 "type": "rebuild", 00:28:39.162 "target": "spare", 00:28:39.162 "progress": { 00:28:39.162 "blocks": 45056, 00:28:39.162 "percent": 68 00:28:39.162 } 00:28:39.162 }, 00:28:39.162 "base_bdevs_list": [ 00:28:39.162 { 00:28:39.162 "name": "spare", 00:28:39.162 "uuid": "b03977da-c052-5da1-af71-cb67883d8f28", 00:28:39.162 "is_configured": true, 00:28:39.162 "data_offset": 0, 00:28:39.162 "data_size": 65536 00:28:39.162 }, 00:28:39.162 { 00:28:39.162 "name": null, 00:28:39.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:39.162 "is_configured": false, 00:28:39.162 "data_offset": 0, 00:28:39.162 "data_size": 65536 00:28:39.162 }, 00:28:39.162 { 00:28:39.162 "name": "BaseBdev3", 00:28:39.162 "uuid": "5d9a1e49-c557-5082-b8e4-c733226ab4c3", 00:28:39.162 "is_configured": true, 00:28:39.162 "data_offset": 0, 00:28:39.162 "data_size": 65536 00:28:39.162 }, 00:28:39.162 { 00:28:39.162 "name": "BaseBdev4", 00:28:39.162 "uuid": "2bd46957-e69a-55ff-b18f-4d45f71f8d29", 00:28:39.162 "is_configured": true, 
00:28:39.162 "data_offset": 0, 00:28:39.162 "data_size": 65536 00:28:39.162 } 00:28:39.162 ] 00:28:39.162 }' 00:28:39.162 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:39.162 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:39.162 07:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:39.162 07:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:39.162 07:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:40.098 [2024-07-12 07:39:13.923967] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:40.098 [2024-07-12 07:39:13.924207] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:40.098 [2024-07-12 07:39:13.924361] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:40.357 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:40.357 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:40.357 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:40.357 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:40.357 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:40.357 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:40.357 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.357 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:40.615 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:40.615 "name": "raid_bdev1", 00:28:40.615 "uuid": "806e0f03-fe4d-4657-8e9c-9d2527991b36", 00:28:40.615 "strip_size_kb": 0, 00:28:40.615 "state": "online", 00:28:40.615 "raid_level": "raid1", 00:28:40.615 "superblock": false, 00:28:40.615 "num_base_bdevs": 4, 00:28:40.615 "num_base_bdevs_discovered": 3, 00:28:40.615 "num_base_bdevs_operational": 3, 00:28:40.615 "base_bdevs_list": [ 00:28:40.615 { 00:28:40.615 "name": "spare", 00:28:40.615 "uuid": "b03977da-c052-5da1-af71-cb67883d8f28", 00:28:40.615 "is_configured": true, 00:28:40.615 "data_offset": 0, 00:28:40.615 "data_size": 65536 00:28:40.615 }, 00:28:40.615 { 00:28:40.615 "name": null, 00:28:40.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:40.615 "is_configured": false, 00:28:40.615 "data_offset": 0, 00:28:40.615 "data_size": 65536 00:28:40.615 }, 00:28:40.615 { 00:28:40.615 "name": "BaseBdev3", 00:28:40.615 "uuid": "5d9a1e49-c557-5082-b8e4-c733226ab4c3", 00:28:40.615 "is_configured": true, 00:28:40.616 "data_offset": 0, 00:28:40.616 "data_size": 65536 00:28:40.616 }, 00:28:40.616 { 00:28:40.616 "name": "BaseBdev4", 00:28:40.616 "uuid": "2bd46957-e69a-55ff-b18f-4d45f71f8d29", 00:28:40.616 "is_configured": true, 00:28:40.616 "data_offset": 0, 00:28:40.616 "data_size": 65536 00:28:40.616 } 00:28:40.616 ] 00:28:40.616 }' 00:28:40.616 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:40.616 07:39:14 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:40.616 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:40.616 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:40.616 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:28:40.616 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:40.616 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:40.616 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:40.616 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:40.616 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:40.616 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.616 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:40.874 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:40.874 "name": "raid_bdev1", 00:28:40.874 "uuid": "806e0f03-fe4d-4657-8e9c-9d2527991b36", 00:28:40.874 "strip_size_kb": 0, 00:28:40.874 "state": "online", 00:28:40.874 "raid_level": "raid1", 00:28:40.874 "superblock": false, 00:28:40.874 "num_base_bdevs": 4, 00:28:40.874 "num_base_bdevs_discovered": 3, 00:28:40.874 "num_base_bdevs_operational": 3, 00:28:40.874 "base_bdevs_list": [ 00:28:40.874 { 00:28:40.874 "name": "spare", 00:28:40.874 "uuid": "b03977da-c052-5da1-af71-cb67883d8f28", 00:28:40.874 "is_configured": true, 00:28:40.874 "data_offset": 0, 00:28:40.874 "data_size": 65536 00:28:40.874 }, 00:28:40.874 { 00:28:40.874 "name": null, 00:28:40.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:40.874 "is_configured": false, 00:28:40.874 "data_offset": 0, 00:28:40.874 "data_size": 65536 00:28:40.874 }, 00:28:40.874 { 00:28:40.874 "name": "BaseBdev3", 00:28:40.874 "uuid": "5d9a1e49-c557-5082-b8e4-c733226ab4c3", 00:28:40.874 "is_configured": true, 00:28:40.874 "data_offset": 0, 00:28:40.874 "data_size": 65536 00:28:40.874 }, 00:28:40.874 { 00:28:40.874 "name": "BaseBdev4", 00:28:40.874 "uuid": "2bd46957-e69a-55ff-b18f-4d45f71f8d29", 00:28:40.874 "is_configured": true, 00:28:40.874 "data_offset": 0, 00:28:40.874 "data_size": 65536 00:28:40.874 } 00:28:40.874 ] 00:28:40.874 }' 00:28:40.874 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:40.874 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:40.874 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:41.133 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:41.133 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:41.133 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:41.133 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:41.133 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:41.133 07:39:14 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:41.133 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:41.133 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:41.133 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:41.133 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:41.133 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:41.133 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.133 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:41.133 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:41.133 "name": "raid_bdev1", 00:28:41.133 "uuid": "806e0f03-fe4d-4657-8e9c-9d2527991b36", 00:28:41.133 "strip_size_kb": 0, 00:28:41.133 "state": "online", 00:28:41.133 "raid_level": "raid1", 00:28:41.133 "superblock": false, 00:28:41.133 "num_base_bdevs": 4, 00:28:41.133 "num_base_bdevs_discovered": 3, 00:28:41.133 "num_base_bdevs_operational": 3, 00:28:41.133 "base_bdevs_list": [ 00:28:41.133 { 00:28:41.133 "name": "spare", 00:28:41.133 "uuid": "b03977da-c052-5da1-af71-cb67883d8f28", 00:28:41.133 "is_configured": true, 00:28:41.133 "data_offset": 0, 00:28:41.133 "data_size": 65536 00:28:41.133 }, 00:28:41.133 { 00:28:41.133 "name": null, 00:28:41.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:41.133 "is_configured": false, 00:28:41.133 "data_offset": 0, 00:28:41.133 "data_size": 65536 00:28:41.133 }, 00:28:41.133 { 00:28:41.133 "name": "BaseBdev3", 00:28:41.133 "uuid": "5d9a1e49-c557-5082-b8e4-c733226ab4c3", 00:28:41.133 "is_configured": true, 00:28:41.133 "data_offset": 0, 00:28:41.133 "data_size": 65536 00:28:41.133 }, 00:28:41.133 { 00:28:41.133 "name": "BaseBdev4", 00:28:41.133 "uuid": "2bd46957-e69a-55ff-b18f-4d45f71f8d29", 00:28:41.133 "is_configured": true, 00:28:41.133 "data_offset": 0, 00:28:41.133 "data_size": 65536 00:28:41.133 } 00:28:41.133 ] 00:28:41.133 }' 00:28:41.133 07:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:41.133 07:39:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.701 07:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:41.960 [2024-07-12 07:39:15.791796] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:41.960 [2024-07-12 07:39:15.791946] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:41.960 [2024-07-12 07:39:15.792169] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:41.960 [2024-07-12 07:39:15.792349] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:41.960 [2024-07-12 07:39:15.792434] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:28:41.960 07:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.960 07:39:15 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@719 -- # jq length 00:28:42.218 07:39:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:42.219 07:39:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:28:42.219 07:39:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:28:42.219 07:39:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:42.219 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:42.219 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:42.219 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:42.219 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:42.219 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:42.219 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:42.219 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:42.219 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:42.219 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:42.478 /dev/nbd0 00:28:42.478 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:42.478 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:42.478 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:28:42.478 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:28:42.478 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:28:42.478 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:28:42.478 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:28:42.478 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:28:42.478 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:28:42.478 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:28:42.478 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:42.478 1+0 records in 00:28:42.478 1+0 records out 00:28:42.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360186 s, 11.4 MB/s 00:28:42.478 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:42.478 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:28:42.478 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:42.478 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:28:42.478 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:28:42.478 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:42.478 07:39:16 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:42.478 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:28:42.738 /dev/nbd1 00:28:42.738 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:42.738 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:42.738 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:28:42.738 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:28:42.738 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:28:42.738 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:28:42.738 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:28:42.738 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:28:42.738 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:28:42.738 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:28:42.738 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:42.738 1+0 records in 00:28:42.738 1+0 records out 00:28:42.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526044 s, 7.8 MB/s 00:28:42.738 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:42.738 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:28:42.738 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:42.738 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:28:42.738 07:39:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:28:42.738 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:42.738 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:42.738 07:39:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:42.998 07:39:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:28:42.998 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:42.998 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:42.998 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:42.998 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:28:42.998 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:42.998 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:42.998 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:42.998 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:42.998 07:39:16 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:42.998 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:42.998 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:42.998 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:42.998 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:42.998 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:42.998 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:42.998 07:39:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:43.594 07:39:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:43.594 07:39:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:43.594 07:39:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:43.594 07:39:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:43.594 07:39:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:43.594 07:39:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:43.594 07:39:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:43.594 07:39:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:43.594 07:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:28:43.594 07:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 156593 00:28:43.594 07:39:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@946 -- # '[' -z 156593 ']' 00:28:43.594 07:39:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # kill -0 156593 00:28:43.594 07:39:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@951 -- # uname 00:28:43.594 07:39:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:43.594 07:39:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 156593 00:28:43.594 07:39:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:43.594 07:39:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:43.594 07:39:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 156593' 00:28:43.594 killing process with pid 156593 00:28:43.594 07:39:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@965 -- # kill 156593 00:28:43.594 Received shutdown signal, test time was about 60.000000 seconds 00:28:43.594 00:28:43.594 Latency(us) 00:28:43.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.594 =================================================================================================================== 00:28:43.594 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:43.594 [2024-07-12 07:39:17.187432] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:43.594 07:39:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # wait 156593 00:28:43.594 [2024-07-12 07:39:17.237560] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 
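The raid_rebuild_test run that just finished verifies a rebuild in two stages, both visible in the trace above: it polls bdev_raid_get_bdevs through jq until the rebuild process object disappears from raid_bdev1, then deletes the raid bdev, exports the surviving mirror and the rebuilt spare over NBD, and byte-compares them. A condensed sketch of that flow, reconstructed from the xtrace (helper, bdev, and socket names match the trace; the rpc wrapper function and the 60-second budget are illustrative assumptions):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    verify_raid_bdev_process() {   # bdev_raid.sh@182-190 in the trace
        local name=$1 ptype=$2 target=$3 info
        info=$(rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
        # '.process.type // "none"' degrades to "none" once the rebuild finishes
        # and the "process" object vanishes from the RPC output
        [[ $(jq -r '.process.type // "none"' <<<"$info") == "$ptype" ]] &&
            [[ $(jq -r '.process.target // "none"' <<<"$info") == "$target" ]]
    }

    timeout=$((SECONDS + 60))       # trace shows timeout=863: SECONDS counts from script start
    while ((SECONDS < timeout)); do
        verify_raid_bdev_process raid_bdev1 rebuild spare || break
        sleep 1
    done
    verify_raid_bdev_process raid_bdev1 none none   # rebuild process must be fully gone

    rpc bdev_raid_delete raid_bdev1        # release the base bdevs before exporting them
    rpc nbd_start_disk BaseBdev1 /dev/nbd0          # surviving mirror
    rpc nbd_start_disk spare /dev/nbd1              # freshly rebuilt target
    cmp -i 0 /dev/nbd0 /dev/nbd1                    # -i 0: data_offset is 0 without a superblock
    rpc nbd_stop_disk /dev/nbd0
    rpc nbd_stop_disk /dev/nbd1

For raid1 the cmp is the whole correctness argument: every base bdev carries a full copy of the data, so a completed rebuild must leave the spare byte-identical to any surviving mirror across the data region.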
00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:28:43.854 00:28:43.854 real 0m21.652s 00:28:43.854 user 0m29.655s 00:28:43.854 sys 0m4.293s 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.854 ************************************ 00:28:43.854 END TEST raid_rebuild_test 00:28:43.854 ************************************ 00:28:43.854 07:39:17 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:28:43.854 07:39:17 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:28:43.854 07:39:17 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:43.854 07:39:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:43.854 ************************************ 00:28:43.854 START TEST raid_rebuild_test_sb 00:28:43.854 ************************************ 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 4 true false true 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- 
# local strip_size 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:28:43.854 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=157129 00:28:43.855 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:43.855 07:39:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 157129 /var/tmp/spdk-raid.sock 00:28:43.855 07:39:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@827 -- # '[' -z 157129 ']' 00:28:43.855 07:39:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:43.855 07:39:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:43.855 07:39:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:43.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:43.855 07:39:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:43.855 07:39:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:43.855 [2024-07-12 07:39:17.643503] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:43.855 [2024-07-12 07:39:17.643879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157129 ] 00:28:43.855 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:43.855 Zero copy mechanism will not be used. 
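The test starting here, raid_rebuild_test_sb, is the same raid_rebuild_test function run with superblock=true (run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true). Its setup, traced above, reduces to: derive the base-bdev list from num_base_bdevs, append -s to the create arguments when a superblock is requested, and launch bdevperf as a long-lived RPC server. A minimal sketch under those assumptions (all bdevperf flags are copied verbatim from the trace; the '&' backgrounding and the += array form abbreviate the script's actual control flow):

    raid_level=raid1 num_base_bdevs=4 superblock=true background_io=false verify=true

    base_bdevs=()
    for ((i = 1; i <= num_base_bdevs; i++)); do
        base_bdevs+=("BaseBdev$i")                  # BaseBdev1 .. BaseBdev4
    done

    strip_size=0                                    # raid1 does not stripe
    create_arg=''
    [[ $superblock == true ]] && create_arg+=' -s'  # later handed to bdev_raid_create

    # bdevperf doubles as I/O generator and RPC target; -z keeps it idle until
    # RPC calls have assembled the bdevs, -T restricts the run to raid_bdev1.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # waitforlisten: autotest_common.sh helper seen in the trace; it polls until
    # the process accepts RPCs on the given socket
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock

The -o 3M request size also explains the notice just logged: 3145728 bytes exceeds bdevperf's 65536-byte zero-copy threshold, so buffers are copied rather than handed off zero-copy.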
00:28:44.114 [2024-07-12 07:39:17.781751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.114 [2024-07-12 07:39:17.825772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.114 [2024-07-12 07:39:17.868657] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:44.684 07:39:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:44.684 07:39:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # return 0 00:28:44.684 07:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:44.684 07:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:44.944 BaseBdev1_malloc 00:28:44.944 07:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:45.203 [2024-07-12 07:39:18.935393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:45.203 [2024-07-12 07:39:18.935673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:45.203 [2024-07-12 07:39:18.935753] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:28:45.203 [2024-07-12 07:39:18.936310] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:45.203 [2024-07-12 07:39:18.944311] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:45.203 [2024-07-12 07:39:18.944754] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:45.203 BaseBdev1 00:28:45.203 07:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:45.203 07:39:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:45.462 BaseBdev2_malloc 00:28:45.462 07:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:45.722 [2024-07-12 07:39:19.376759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:45.722 [2024-07-12 07:39:19.377015] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:45.722 [2024-07-12 07:39:19.377098] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:28:45.722 [2024-07-12 07:39:19.377235] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:45.722 [2024-07-12 07:39:19.380063] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:45.722 [2024-07-12 07:39:19.380223] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:45.722 BaseBdev2 00:28:45.722 07:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:45.722 07:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:45.981 BaseBdev3_malloc 00:28:45.981 07:39:19 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:28:45.981 [2024-07-12 07:39:19.802565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:28:45.981 [2024-07-12 07:39:19.802943] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:45.981 [2024-07-12 07:39:19.803030] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:45.981 [2024-07-12 07:39:19.803171] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:45.981 [2024-07-12 07:39:19.806027] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:45.981 [2024-07-12 07:39:19.806225] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:45.981 BaseBdev3 00:28:45.981 07:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:45.981 07:39:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:46.241 BaseBdev4_malloc 00:28:46.241 07:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:28:46.500 [2024-07-12 07:39:20.255230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:28:46.500 [2024-07-12 07:39:20.255501] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:46.500 [2024-07-12 07:39:20.255578] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:46.500 [2024-07-12 07:39:20.255715] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:46.500 [2024-07-12 07:39:20.258552] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:46.500 [2024-07-12 07:39:20.258752] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:46.500 BaseBdev4 00:28:46.500 07:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:46.759 spare_malloc 00:28:46.759 07:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:47.018 spare_delay 00:28:47.018 07:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:47.018 [2024-07-12 07:39:20.879324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:47.018 [2024-07-12 07:39:20.879577] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:47.018 [2024-07-12 07:39:20.879660] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:28:47.018 [2024-07-12 07:39:20.879789] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:47.018 [2024-07-12 07:39:20.882720] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:47.018 [2024-07-12 
07:39:20.882922] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:47.018 spare 00:28:47.018 07:39:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:28:47.588 [2024-07-12 07:39:21.199514] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:47.588 [2024-07-12 07:39:21.201988] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:47.588 [2024-07-12 07:39:21.202182] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:47.588 [2024-07-12 07:39:21.202260] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:47.588 [2024-07-12 07:39:21.202597] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:28:47.588 [2024-07-12 07:39:21.202693] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:47.588 [2024-07-12 07:39:21.202884] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:28:47.588 [2024-07-12 07:39:21.203438] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:28:47.588 [2024-07-12 07:39:21.203550] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:28:47.588 [2024-07-12 07:39:21.203826] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:47.588 07:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:47.588 07:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:47.588 07:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:47.588 07:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:47.588 07:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:47.588 07:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:47.588 07:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:47.588 07:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:47.588 07:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:47.588 07:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:47.588 07:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:47.588 07:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.588 07:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:47.588 "name": "raid_bdev1", 00:28:47.588 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:28:47.588 "strip_size_kb": 0, 00:28:47.588 "state": "online", 00:28:47.588 "raid_level": "raid1", 00:28:47.588 "superblock": true, 00:28:47.588 "num_base_bdevs": 4, 00:28:47.588 "num_base_bdevs_discovered": 4, 00:28:47.588 "num_base_bdevs_operational": 4, 00:28:47.588 "base_bdevs_list": [ 00:28:47.588 { 
00:28:47.588 "name": "BaseBdev1", 00:28:47.588 "uuid": "f9451bda-f4d0-5690-a3ed-08036a0bd202", 00:28:47.588 "is_configured": true, 00:28:47.588 "data_offset": 2048, 00:28:47.588 "data_size": 63488 00:28:47.588 }, 00:28:47.588 { 00:28:47.588 "name": "BaseBdev2", 00:28:47.588 "uuid": "93bb6f9b-ffea-5439-b0a9-1a005cb23eec", 00:28:47.588 "is_configured": true, 00:28:47.588 "data_offset": 2048, 00:28:47.588 "data_size": 63488 00:28:47.588 }, 00:28:47.588 { 00:28:47.588 "name": "BaseBdev3", 00:28:47.588 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:28:47.588 "is_configured": true, 00:28:47.588 "data_offset": 2048, 00:28:47.588 "data_size": 63488 00:28:47.588 }, 00:28:47.588 { 00:28:47.588 "name": "BaseBdev4", 00:28:47.588 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:28:47.588 "is_configured": true, 00:28:47.588 "data_offset": 2048, 00:28:47.588 "data_size": 63488 00:28:47.588 } 00:28:47.588 ] 00:28:47.588 }' 00:28:47.588 07:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:47.588 07:39:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:48.158 07:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:48.158 07:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:28:48.418 [2024-07-12 07:39:22.144270] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:48.418 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:28:48.418 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:48.418 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:48.677 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:28:48.677 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:28:48.677 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:28:48.677 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:28:48.677 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:28:48.677 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:48.677 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:48.677 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:48.677 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:48.677 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:48.677 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:48.677 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:48.677 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:48.677 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:48.936 [2024-07-12 07:39:22.608154] 
bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:28:48.936 /dev/nbd0 00:28:48.936 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:48.936 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:48.936 07:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:28:48.936 07:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:28:48.936 07:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:28:48.936 07:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:28:48.936 07:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:28:48.936 07:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:28:48.936 07:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:28:48.936 07:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:28:48.936 07:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:48.936 1+0 records in 00:28:48.936 1+0 records out 00:28:48.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413834 s, 9.9 MB/s 00:28:48.936 07:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:48.936 07:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:28:48.936 07:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:48.936 07:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:28:48.936 07:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:28:48.936 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:48.936 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:48.936 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:28:48.936 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:28:48.936 07:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:28:54.209 63488+0 records in 00:28:54.209 63488+0 records out 00:28:54.209 32505856 bytes (33 MB, 31 MiB) copied, 5.08446 s, 6.4 MB/s 00:28:54.209 07:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:54.209 07:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:54.209 07:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:54.209 07:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:54.209 07:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:28:54.209 07:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:54.209 07:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:54.209 07:39:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:54.209 [2024-07-12 07:39:28.008030] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:54.209 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:54.209 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:54.209 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:54.209 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:54.209 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:54.209 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:54.209 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:54.209 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:54.469 [2024-07-12 07:39:28.227830] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:54.469 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:54.469 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:54.469 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:54.469 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:54.469 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:54.469 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:54.469 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:54.469 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:54.469 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:54.469 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:54.469 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:54.469 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:54.728 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:54.728 "name": "raid_bdev1", 00:28:54.728 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:28:54.728 "strip_size_kb": 0, 00:28:54.728 "state": "online", 00:28:54.728 "raid_level": "raid1", 00:28:54.728 "superblock": true, 00:28:54.728 "num_base_bdevs": 4, 00:28:54.728 "num_base_bdevs_discovered": 3, 00:28:54.728 "num_base_bdevs_operational": 3, 00:28:54.728 "base_bdevs_list": [ 00:28:54.728 { 00:28:54.728 "name": null, 00:28:54.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:54.728 "is_configured": false, 00:28:54.728 "data_offset": 2048, 00:28:54.728 "data_size": 63488 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "name": "BaseBdev2", 00:28:54.728 "uuid": "93bb6f9b-ffea-5439-b0a9-1a005cb23eec", 00:28:54.728 "is_configured": true, 00:28:54.728 "data_offset": 2048, 
00:28:54.728 "data_size": 63488 00:28:54.728 }, 00:28:54.728 { 00:28:54.728 "name": "BaseBdev3", 00:28:54.728 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:28:54.729 "is_configured": true, 00:28:54.729 "data_offset": 2048, 00:28:54.729 "data_size": 63488 00:28:54.729 }, 00:28:54.729 { 00:28:54.729 "name": "BaseBdev4", 00:28:54.729 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:28:54.729 "is_configured": true, 00:28:54.729 "data_offset": 2048, 00:28:54.729 "data_size": 63488 00:28:54.729 } 00:28:54.729 ] 00:28:54.729 }' 00:28:54.729 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:54.729 07:39:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:55.297 07:39:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:55.297 [2024-07-12 07:39:29.059942] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:55.297 [2024-07-12 07:39:29.065996] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e420 00:28:55.297 [2024-07-12 07:39:29.068580] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:55.297 07:39:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:28:56.237 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:56.237 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:56.237 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:56.237 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:56.237 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:56.237 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:56.237 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.497 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:56.497 "name": "raid_bdev1", 00:28:56.497 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:28:56.497 "strip_size_kb": 0, 00:28:56.497 "state": "online", 00:28:56.497 "raid_level": "raid1", 00:28:56.497 "superblock": true, 00:28:56.497 "num_base_bdevs": 4, 00:28:56.497 "num_base_bdevs_discovered": 4, 00:28:56.497 "num_base_bdevs_operational": 4, 00:28:56.497 "process": { 00:28:56.497 "type": "rebuild", 00:28:56.497 "target": "spare", 00:28:56.497 "progress": { 00:28:56.497 "blocks": 24576, 00:28:56.497 "percent": 38 00:28:56.497 } 00:28:56.497 }, 00:28:56.497 "base_bdevs_list": [ 00:28:56.497 { 00:28:56.497 "name": "spare", 00:28:56.497 "uuid": "88da4226-98eb-5777-9976-b651d2909e23", 00:28:56.497 "is_configured": true, 00:28:56.497 "data_offset": 2048, 00:28:56.497 "data_size": 63488 00:28:56.497 }, 00:28:56.497 { 00:28:56.497 "name": "BaseBdev2", 00:28:56.497 "uuid": "93bb6f9b-ffea-5439-b0a9-1a005cb23eec", 00:28:56.497 "is_configured": true, 00:28:56.497 "data_offset": 2048, 00:28:56.497 "data_size": 63488 00:28:56.497 }, 00:28:56.497 { 00:28:56.497 "name": "BaseBdev3", 00:28:56.497 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:28:56.497 
"is_configured": true, 00:28:56.497 "data_offset": 2048, 00:28:56.497 "data_size": 63488 00:28:56.497 }, 00:28:56.497 { 00:28:56.497 "name": "BaseBdev4", 00:28:56.497 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:28:56.497 "is_configured": true, 00:28:56.497 "data_offset": 2048, 00:28:56.497 "data_size": 63488 00:28:56.497 } 00:28:56.497 ] 00:28:56.497 }' 00:28:56.497 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:56.497 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:56.497 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:56.756 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:56.756 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:57.015 [2024-07-12 07:39:30.643139] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:57.015 [2024-07-12 07:39:30.680450] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:57.015 [2024-07-12 07:39:30.680550] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:57.015 [2024-07-12 07:39:30.680569] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:57.015 [2024-07-12 07:39:30.680577] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:57.015 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:57.015 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:57.015 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:57.015 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:57.015 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:57.015 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:57.015 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:57.015 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:57.015 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:57.015 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:57.015 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:57.015 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:57.275 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:57.275 "name": "raid_bdev1", 00:28:57.275 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:28:57.275 "strip_size_kb": 0, 00:28:57.275 "state": "online", 00:28:57.275 "raid_level": "raid1", 00:28:57.275 "superblock": true, 00:28:57.275 "num_base_bdevs": 4, 00:28:57.275 "num_base_bdevs_discovered": 3, 00:28:57.275 "num_base_bdevs_operational": 3, 00:28:57.275 "base_bdevs_list": [ 00:28:57.275 { 
00:28:57.275 "name": null, 00:28:57.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:57.275 "is_configured": false, 00:28:57.275 "data_offset": 2048, 00:28:57.275 "data_size": 63488 00:28:57.275 }, 00:28:57.275 { 00:28:57.275 "name": "BaseBdev2", 00:28:57.275 "uuid": "93bb6f9b-ffea-5439-b0a9-1a005cb23eec", 00:28:57.275 "is_configured": true, 00:28:57.275 "data_offset": 2048, 00:28:57.275 "data_size": 63488 00:28:57.275 }, 00:28:57.275 { 00:28:57.275 "name": "BaseBdev3", 00:28:57.275 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:28:57.275 "is_configured": true, 00:28:57.275 "data_offset": 2048, 00:28:57.275 "data_size": 63488 00:28:57.275 }, 00:28:57.275 { 00:28:57.275 "name": "BaseBdev4", 00:28:57.275 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:28:57.275 "is_configured": true, 00:28:57.275 "data_offset": 2048, 00:28:57.275 "data_size": 63488 00:28:57.275 } 00:28:57.275 ] 00:28:57.275 }' 00:28:57.275 07:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:57.275 07:39:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:57.844 07:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:57.844 07:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:57.844 07:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:57.844 07:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:57.844 07:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:57.844 07:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:57.844 07:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:57.844 07:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:57.844 "name": "raid_bdev1", 00:28:57.844 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:28:57.844 "strip_size_kb": 0, 00:28:57.844 "state": "online", 00:28:57.844 "raid_level": "raid1", 00:28:57.844 "superblock": true, 00:28:57.844 "num_base_bdevs": 4, 00:28:57.844 "num_base_bdevs_discovered": 3, 00:28:57.844 "num_base_bdevs_operational": 3, 00:28:57.844 "base_bdevs_list": [ 00:28:57.844 { 00:28:57.844 "name": null, 00:28:57.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:57.844 "is_configured": false, 00:28:57.844 "data_offset": 2048, 00:28:57.844 "data_size": 63488 00:28:57.844 }, 00:28:57.844 { 00:28:57.844 "name": "BaseBdev2", 00:28:57.844 "uuid": "93bb6f9b-ffea-5439-b0a9-1a005cb23eec", 00:28:57.844 "is_configured": true, 00:28:57.844 "data_offset": 2048, 00:28:57.844 "data_size": 63488 00:28:57.844 }, 00:28:57.844 { 00:28:57.844 "name": "BaseBdev3", 00:28:57.844 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:28:57.844 "is_configured": true, 00:28:57.844 "data_offset": 2048, 00:28:57.844 "data_size": 63488 00:28:57.844 }, 00:28:57.844 { 00:28:57.844 "name": "BaseBdev4", 00:28:57.844 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:28:57.844 "is_configured": true, 00:28:57.844 "data_offset": 2048, 00:28:57.844 "data_size": 63488 00:28:57.844 } 00:28:57.844 ] 00:28:57.844 }' 00:28:57.844 07:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:58.104 07:39:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:58.104 07:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:58.104 07:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:58.104 07:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:58.104 [2024-07-12 07:39:31.943926] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:58.104 [2024-07-12 07:39:31.949982] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e5c0 00:28:58.104 [2024-07-12 07:39:31.952701] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:58.104 07:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:59.482 07:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:59.482 07:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:59.482 07:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:59.482 07:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:59.482 07:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:59.482 07:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:59.482 07:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.482 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:59.482 "name": "raid_bdev1", 00:28:59.482 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:28:59.482 "strip_size_kb": 0, 00:28:59.482 "state": "online", 00:28:59.482 "raid_level": "raid1", 00:28:59.482 "superblock": true, 00:28:59.482 "num_base_bdevs": 4, 00:28:59.482 "num_base_bdevs_discovered": 4, 00:28:59.482 "num_base_bdevs_operational": 4, 00:28:59.482 "process": { 00:28:59.482 "type": "rebuild", 00:28:59.482 "target": "spare", 00:28:59.482 "progress": { 00:28:59.482 "blocks": 24576, 00:28:59.482 "percent": 38 00:28:59.482 } 00:28:59.482 }, 00:28:59.482 "base_bdevs_list": [ 00:28:59.482 { 00:28:59.482 "name": "spare", 00:28:59.482 "uuid": "88da4226-98eb-5777-9976-b651d2909e23", 00:28:59.482 "is_configured": true, 00:28:59.482 "data_offset": 2048, 00:28:59.482 "data_size": 63488 00:28:59.482 }, 00:28:59.482 { 00:28:59.482 "name": "BaseBdev2", 00:28:59.482 "uuid": "93bb6f9b-ffea-5439-b0a9-1a005cb23eec", 00:28:59.482 "is_configured": true, 00:28:59.482 "data_offset": 2048, 00:28:59.482 "data_size": 63488 00:28:59.482 }, 00:28:59.482 { 00:28:59.482 "name": "BaseBdev3", 00:28:59.482 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:28:59.482 "is_configured": true, 00:28:59.482 "data_offset": 2048, 00:28:59.482 "data_size": 63488 00:28:59.482 }, 00:28:59.482 { 00:28:59.482 "name": "BaseBdev4", 00:28:59.482 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:28:59.482 "is_configured": true, 00:28:59.482 "data_offset": 2048, 00:28:59.482 "data_size": 63488 00:28:59.482 } 00:28:59.482 ] 00:28:59.482 }' 00:28:59.482 07:39:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:59.483 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:59.483 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:59.483 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:59.483 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:28:59.483 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:28:59.483 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:28:59.483 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:28:59.483 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:59.483 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:28:59.483 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:28:59.741 [2024-07-12 07:39:33.463177] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:59.741 [2024-07-12 07:39:33.563397] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000c3e5c0 00:28:59.741 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:28:59.741 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:28:59.741 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:59.741 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:59.741 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:59.741 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:59.741 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:59.741 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.741 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:00.000 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:00.000 "name": "raid_bdev1", 00:29:00.000 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:29:00.000 "strip_size_kb": 0, 00:29:00.000 "state": "online", 00:29:00.000 "raid_level": "raid1", 00:29:00.000 "superblock": true, 00:29:00.000 "num_base_bdevs": 4, 00:29:00.000 "num_base_bdevs_discovered": 3, 00:29:00.000 "num_base_bdevs_operational": 3, 00:29:00.000 "process": { 00:29:00.000 "type": "rebuild", 00:29:00.000 "target": "spare", 00:29:00.000 "progress": { 00:29:00.000 "blocks": 34816, 00:29:00.000 "percent": 54 00:29:00.000 } 00:29:00.000 }, 00:29:00.000 "base_bdevs_list": [ 00:29:00.000 { 00:29:00.000 "name": "spare", 00:29:00.000 "uuid": "88da4226-98eb-5777-9976-b651d2909e23", 00:29:00.000 "is_configured": true, 00:29:00.000 "data_offset": 2048, 00:29:00.000 "data_size": 63488 00:29:00.000 }, 00:29:00.000 { 00:29:00.000 "name": null, 00:29:00.000 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:29:00.000 "is_configured": false, 00:29:00.000 "data_offset": 2048, 00:29:00.000 "data_size": 63488 00:29:00.000 }, 00:29:00.000 { 00:29:00.000 "name": "BaseBdev3", 00:29:00.000 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:29:00.000 "is_configured": true, 00:29:00.000 "data_offset": 2048, 00:29:00.000 "data_size": 63488 00:29:00.000 }, 00:29:00.000 { 00:29:00.000 "name": "BaseBdev4", 00:29:00.000 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:29:00.000 "is_configured": true, 00:29:00.000 "data_offset": 2048, 00:29:00.000 "data_size": 63488 00:29:00.000 } 00:29:00.000 ] 00:29:00.000 }' 00:29:00.000 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:00.260 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:00.260 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:00.260 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:00.260 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=884 00:29:00.260 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:00.260 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:00.260 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:00.260 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:00.260 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:00.260 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:00.260 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:00.260 07:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:00.260 07:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:00.260 "name": "raid_bdev1", 00:29:00.260 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:29:00.260 "strip_size_kb": 0, 00:29:00.260 "state": "online", 00:29:00.260 "raid_level": "raid1", 00:29:00.260 "superblock": true, 00:29:00.260 "num_base_bdevs": 4, 00:29:00.260 "num_base_bdevs_discovered": 3, 00:29:00.260 "num_base_bdevs_operational": 3, 00:29:00.260 "process": { 00:29:00.260 "type": "rebuild", 00:29:00.260 "target": "spare", 00:29:00.260 "progress": { 00:29:00.260 "blocks": 40960, 00:29:00.260 "percent": 64 00:29:00.260 } 00:29:00.260 }, 00:29:00.260 "base_bdevs_list": [ 00:29:00.260 { 00:29:00.260 "name": "spare", 00:29:00.260 "uuid": "88da4226-98eb-5777-9976-b651d2909e23", 00:29:00.260 "is_configured": true, 00:29:00.260 "data_offset": 2048, 00:29:00.260 "data_size": 63488 00:29:00.260 }, 00:29:00.260 { 00:29:00.260 "name": null, 00:29:00.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:00.260 "is_configured": false, 00:29:00.260 "data_offset": 2048, 00:29:00.260 "data_size": 63488 00:29:00.260 }, 00:29:00.260 { 00:29:00.260 "name": "BaseBdev3", 00:29:00.260 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:29:00.260 "is_configured": true, 00:29:00.260 "data_offset": 2048, 00:29:00.260 "data_size": 63488 00:29:00.260 }, 
00:29:00.260 { 00:29:00.260 "name": "BaseBdev4", 00:29:00.260 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:29:00.260 "is_configured": true, 00:29:00.260 "data_offset": 2048, 00:29:00.260 "data_size": 63488 00:29:00.260 } 00:29:00.260 ] 00:29:00.260 }' 00:29:00.260 07:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:00.519 07:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:00.519 07:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:00.519 07:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:00.519 07:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:01.457 [2024-07-12 07:39:35.174442] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:01.457 [2024-07-12 07:39:35.174536] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:01.457 [2024-07-12 07:39:35.174684] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:01.457 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:01.457 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:01.457 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:01.457 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:01.457 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:01.457 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:01.457 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:01.457 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:01.716 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:01.716 "name": "raid_bdev1", 00:29:01.716 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:29:01.716 "strip_size_kb": 0, 00:29:01.716 "state": "online", 00:29:01.716 "raid_level": "raid1", 00:29:01.716 "superblock": true, 00:29:01.716 "num_base_bdevs": 4, 00:29:01.716 "num_base_bdevs_discovered": 3, 00:29:01.716 "num_base_bdevs_operational": 3, 00:29:01.716 "base_bdevs_list": [ 00:29:01.716 { 00:29:01.716 "name": "spare", 00:29:01.716 "uuid": "88da4226-98eb-5777-9976-b651d2909e23", 00:29:01.716 "is_configured": true, 00:29:01.716 "data_offset": 2048, 00:29:01.716 "data_size": 63488 00:29:01.716 }, 00:29:01.716 { 00:29:01.716 "name": null, 00:29:01.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:01.716 "is_configured": false, 00:29:01.716 "data_offset": 2048, 00:29:01.716 "data_size": 63488 00:29:01.716 }, 00:29:01.716 { 00:29:01.716 "name": "BaseBdev3", 00:29:01.716 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:29:01.717 "is_configured": true, 00:29:01.717 "data_offset": 2048, 00:29:01.717 "data_size": 63488 00:29:01.717 }, 00:29:01.717 { 00:29:01.717 "name": "BaseBdev4", 00:29:01.717 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:29:01.717 "is_configured": true, 00:29:01.717 "data_offset": 2048, 00:29:01.717 "data_size": 63488 00:29:01.717 } 
00:29:01.717 ] 00:29:01.717 }' 00:29:01.717 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:01.717 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:01.717 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:01.717 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:01.717 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:29:01.717 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:01.717 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:01.717 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:01.717 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:01.717 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:01.717 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:01.717 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:01.976 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:01.976 "name": "raid_bdev1", 00:29:01.976 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:29:01.976 "strip_size_kb": 0, 00:29:01.976 "state": "online", 00:29:01.976 "raid_level": "raid1", 00:29:01.976 "superblock": true, 00:29:01.976 "num_base_bdevs": 4, 00:29:01.976 "num_base_bdevs_discovered": 3, 00:29:01.976 "num_base_bdevs_operational": 3, 00:29:01.976 "base_bdevs_list": [ 00:29:01.976 { 00:29:01.976 "name": "spare", 00:29:01.976 "uuid": "88da4226-98eb-5777-9976-b651d2909e23", 00:29:01.976 "is_configured": true, 00:29:01.976 "data_offset": 2048, 00:29:01.976 "data_size": 63488 00:29:01.976 }, 00:29:01.976 { 00:29:01.976 "name": null, 00:29:01.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:01.976 "is_configured": false, 00:29:01.976 "data_offset": 2048, 00:29:01.976 "data_size": 63488 00:29:01.976 }, 00:29:01.976 { 00:29:01.976 "name": "BaseBdev3", 00:29:01.976 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:29:01.976 "is_configured": true, 00:29:01.976 "data_offset": 2048, 00:29:01.976 "data_size": 63488 00:29:01.976 }, 00:29:01.976 { 00:29:01.976 "name": "BaseBdev4", 00:29:01.976 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:29:01.976 "is_configured": true, 00:29:01.976 "data_offset": 2048, 00:29:01.976 "data_size": 63488 00:29:01.976 } 00:29:01.976 ] 00:29:01.976 }' 00:29:01.976 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:01.976 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:01.976 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:02.237 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:02.237 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:02.237 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
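Every check in the trace above follows the same idiom: dump all raid bdevs over the test's RPC socket, isolate raid_bdev1 with jq, then compare the rebuild process type and target (defaulting to "none") against what the step expects. A minimal re-creation of that idiom, assuming the same rpc.py and socket paths as this run; the expected_* names are illustrative, not taken from the script:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    expected_type=none
    expected_target=none
    # Fetch all raid bdevs, keep only raid_bdev1's JSON object
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    # Missing process fields collapse to "none" via jq's // operator
    [ "$(jq -r '.process.type // "none"' <<< "$info")" = "$expected_type" ]
    [ "$(jq -r '.process.target // "none"' <<< "$info")" = "$expected_target" ]

The earlier "line 665: [: =: unary operator expected" diagnostic is also worth noting: it is the classic bash failure when an unquoted empty variable reaches test, as in '[' $flag = false ']' with $flag unset; quoting it ('[' "$flag" = false ']') keeps the comparison well-formed.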
00:29:02.237 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:02.237 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:02.237 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:02.237 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:02.237 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:02.237 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:02.237 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:02.237 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:02.237 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.237 07:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:02.237 07:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:02.237 "name": "raid_bdev1", 00:29:02.237 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:29:02.237 "strip_size_kb": 0, 00:29:02.237 "state": "online", 00:29:02.237 "raid_level": "raid1", 00:29:02.237 "superblock": true, 00:29:02.237 "num_base_bdevs": 4, 00:29:02.237 "num_base_bdevs_discovered": 3, 00:29:02.237 "num_base_bdevs_operational": 3, 00:29:02.237 "base_bdevs_list": [ 00:29:02.237 { 00:29:02.237 "name": "spare", 00:29:02.237 "uuid": "88da4226-98eb-5777-9976-b651d2909e23", 00:29:02.237 "is_configured": true, 00:29:02.237 "data_offset": 2048, 00:29:02.237 "data_size": 63488 00:29:02.237 }, 00:29:02.237 { 00:29:02.237 "name": null, 00:29:02.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.237 "is_configured": false, 00:29:02.237 "data_offset": 2048, 00:29:02.237 "data_size": 63488 00:29:02.237 }, 00:29:02.237 { 00:29:02.237 "name": "BaseBdev3", 00:29:02.237 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:29:02.237 "is_configured": true, 00:29:02.237 "data_offset": 2048, 00:29:02.237 "data_size": 63488 00:29:02.237 }, 00:29:02.237 { 00:29:02.237 "name": "BaseBdev4", 00:29:02.237 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:29:02.237 "is_configured": true, 00:29:02.237 "data_offset": 2048, 00:29:02.237 "data_size": 63488 00:29:02.237 } 00:29:02.237 ] 00:29:02.237 }' 00:29:02.237 07:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:02.237 07:39:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:02.825 07:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:03.095 [2024-07-12 07:39:36.824833] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:03.095 [2024-07-12 07:39:36.824883] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:03.095 [2024-07-12 07:39:36.825008] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:03.095 [2024-07-12 07:39:36.825122] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:03.095 [2024-07-12 07:39:36.825134] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:29:03.095 07:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:03.095 07:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:29:03.354 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:29:03.354 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:29:03.354 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:29:03.354 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:03.354 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:03.354 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:03.354 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:03.354 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:03.354 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:03.354 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:29:03.354 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:03.354 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:03.354 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:03.612 /dev/nbd0 00:29:03.612 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:03.612 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:03.612 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:29:03.612 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:29:03.612 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:29:03.612 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:29:03.612 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:29:03.612 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:29:03.612 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:29:03.612 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:29:03.612 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:03.612 1+0 records in 00:29:03.612 1+0 records out 00:29:03.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301546 s, 13.6 MB/s 00:29:03.612 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:03.612 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:29:03.612 07:39:37 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:03.612 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:29:03.612 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:29:03.612 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:03.612 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:03.612 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:29:03.871 /dev/nbd1 00:29:03.871 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:03.871 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:03.871 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:29:03.871 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:29:03.871 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:29:03.871 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:29:03.871 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:29:03.871 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:29:03.871 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:29:03.871 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:29:03.871 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:03.871 1+0 records in 00:29:03.871 1+0 records out 00:29:03.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355385 s, 11.5 MB/s 00:29:03.871 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:03.871 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:29:03.871 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:03.871 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:29:03.871 07:39:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:29:03.872 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:03.872 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:03.872 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:03.872 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:29:03.872 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:03.872 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:03.872 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:03.872 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 
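The integrity check here exports BaseBdev1 and the rebuilt spare as NBD block devices (after the usual dd-based waitfornbd probe) and compares them with cmp -i 1048576, i.e. skipping the first 1 MiB of both devices. That skip matches the data_offset of 2048 blocks at the 512-byte block size reported above (2048 x 512 = 1048576), so only the replicated data region past the superblock/metadata area has to be identical. Condensed into a sketch, reusing the rpc/sock variables from the earlier snippet:

    "$rpc" -s "$sock" nbd_start_disk BaseBdev1 /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk spare /dev/nbd1
    # Exit status 0 means the data regions past the 1 MiB offset match
    cmp -i 1048576 /dev/nbd0 /dev/nbd1
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1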
00:29:03.872 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:03.872 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:04.131 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:04.131 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:04.131 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:04.131 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:04.131 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:04.131 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:04.131 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:04.131 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:04.131 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:04.131 07:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:04.389 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:04.389 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:04.389 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:04.389 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:04.389 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:04.389 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:04.389 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:04.389 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:04.389 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:29:04.389 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:04.648 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:04.906 [2024-07-12 07:39:38.590387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:04.906 [2024-07-12 07:39:38.590471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:04.906 [2024-07-12 07:39:38.590504] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:29:04.906 [2024-07-12 07:39:38.590530] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:04.906 [2024-07-12 07:39:38.592776] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:04.906 [2024-07-12 07:39:38.592836] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:04.906 [2024-07-12 07:39:38.592915] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:04.906 [2024-07-12 07:39:38.592975] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:04.906 [2024-07-12 07:39:38.593134] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:04.906 [2024-07-12 07:39:38.593225] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:04.906 spare 00:29:04.906 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:04.906 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:04.906 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:04.906 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:04.906 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:04.906 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:04.906 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:04.906 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:04.906 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:04.906 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:04.906 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:04.906 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:04.906 [2024-07-12 07:39:38.693322] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:29:04.906 [2024-07-12 07:39:38.693342] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:04.906 [2024-07-12 07:39:38.693490] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caee40 00:29:04.906 [2024-07-12 07:39:38.693863] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:29:04.906 [2024-07-12 07:39:38.693883] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:29:04.906 [2024-07-12 07:39:38.693976] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:05.165 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:05.165 "name": "raid_bdev1", 00:29:05.165 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:29:05.165 "strip_size_kb": 0, 00:29:05.165 "state": "online", 00:29:05.165 "raid_level": "raid1", 00:29:05.165 "superblock": true, 00:29:05.165 "num_base_bdevs": 4, 00:29:05.165 "num_base_bdevs_discovered": 3, 00:29:05.165 "num_base_bdevs_operational": 3, 00:29:05.165 "base_bdevs_list": [ 00:29:05.165 { 00:29:05.165 "name": "spare", 00:29:05.165 "uuid": "88da4226-98eb-5777-9976-b651d2909e23", 00:29:05.165 "is_configured": true, 00:29:05.165 "data_offset": 2048, 00:29:05.165 "data_size": 63488 00:29:05.165 }, 00:29:05.165 { 00:29:05.165 "name": null, 00:29:05.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:05.165 "is_configured": false, 00:29:05.165 "data_offset": 2048, 00:29:05.165 "data_size": 63488 00:29:05.165 }, 00:29:05.165 { 00:29:05.165 "name": "BaseBdev3", 00:29:05.165 "uuid": 
"1411a87b-08a1-50e5-b44e-2edd35556550", 00:29:05.165 "is_configured": true, 00:29:05.165 "data_offset": 2048, 00:29:05.165 "data_size": 63488 00:29:05.165 }, 00:29:05.165 { 00:29:05.165 "name": "BaseBdev4", 00:29:05.165 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:29:05.165 "is_configured": true, 00:29:05.165 "data_offset": 2048, 00:29:05.165 "data_size": 63488 00:29:05.165 } 00:29:05.165 ] 00:29:05.165 }' 00:29:05.165 07:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:05.165 07:39:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:05.732 07:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:05.732 07:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:05.732 07:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:05.732 07:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:05.732 07:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:05.732 07:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:05.732 07:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:05.990 07:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:05.990 "name": "raid_bdev1", 00:29:05.990 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:29:05.990 "strip_size_kb": 0, 00:29:05.990 "state": "online", 00:29:05.991 "raid_level": "raid1", 00:29:05.991 "superblock": true, 00:29:05.991 "num_base_bdevs": 4, 00:29:05.991 "num_base_bdevs_discovered": 3, 00:29:05.991 "num_base_bdevs_operational": 3, 00:29:05.991 "base_bdevs_list": [ 00:29:05.991 { 00:29:05.991 "name": "spare", 00:29:05.991 "uuid": "88da4226-98eb-5777-9976-b651d2909e23", 00:29:05.991 "is_configured": true, 00:29:05.991 "data_offset": 2048, 00:29:05.991 "data_size": 63488 00:29:05.991 }, 00:29:05.991 { 00:29:05.991 "name": null, 00:29:05.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:05.991 "is_configured": false, 00:29:05.991 "data_offset": 2048, 00:29:05.991 "data_size": 63488 00:29:05.991 }, 00:29:05.991 { 00:29:05.991 "name": "BaseBdev3", 00:29:05.991 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:29:05.991 "is_configured": true, 00:29:05.991 "data_offset": 2048, 00:29:05.991 "data_size": 63488 00:29:05.991 }, 00:29:05.991 { 00:29:05.991 "name": "BaseBdev4", 00:29:05.991 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:29:05.991 "is_configured": true, 00:29:05.991 "data_offset": 2048, 00:29:05.991 "data_size": 63488 00:29:05.991 } 00:29:05.991 ] 00:29:05.991 }' 00:29:05.991 07:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:05.991 07:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:05.991 07:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:05.991 07:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:05.991 07:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:05.991 07:39:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:06.249 07:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:29:06.249 07:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:06.508 [2024-07-12 07:39:40.174737] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:06.508 07:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:06.508 07:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:06.508 07:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:06.508 07:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:06.508 07:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:06.508 07:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:06.508 07:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:06.508 07:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:06.508 07:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:06.508 07:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:06.508 07:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:06.508 07:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:06.508 07:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:06.508 "name": "raid_bdev1", 00:29:06.508 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:29:06.508 "strip_size_kb": 0, 00:29:06.508 "state": "online", 00:29:06.508 "raid_level": "raid1", 00:29:06.508 "superblock": true, 00:29:06.508 "num_base_bdevs": 4, 00:29:06.508 "num_base_bdevs_discovered": 2, 00:29:06.508 "num_base_bdevs_operational": 2, 00:29:06.508 "base_bdevs_list": [ 00:29:06.508 { 00:29:06.508 "name": null, 00:29:06.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:06.508 "is_configured": false, 00:29:06.508 "data_offset": 2048, 00:29:06.508 "data_size": 63488 00:29:06.508 }, 00:29:06.508 { 00:29:06.508 "name": null, 00:29:06.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:06.508 "is_configured": false, 00:29:06.508 "data_offset": 2048, 00:29:06.508 "data_size": 63488 00:29:06.508 }, 00:29:06.508 { 00:29:06.508 "name": "BaseBdev3", 00:29:06.508 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:29:06.508 "is_configured": true, 00:29:06.508 "data_offset": 2048, 00:29:06.508 "data_size": 63488 00:29:06.508 }, 00:29:06.508 { 00:29:06.508 "name": "BaseBdev4", 00:29:06.508 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:29:06.508 "is_configured": true, 00:29:06.508 "data_offset": 2048, 00:29:06.508 "data_size": 63488 00:29:06.508 } 00:29:06.508 ] 00:29:06.508 }' 00:29:06.508 07:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:06.508 07:39:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:07.443 07:39:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:07.443 [2024-07-12 07:39:41.122932] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:07.443 [2024-07-12 07:39:41.123133] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:07.443 [2024-07-12 07:39:41.123148] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:07.443 [2024-07-12 07:39:41.123227] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:07.443 [2024-07-12 07:39:41.126448] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caefe0 00:29:07.443 [2024-07-12 07:39:41.128482] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:07.443 07:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:29:08.380 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:08.380 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:08.380 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:08.380 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:08.380 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:08.380 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:08.380 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:08.640 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:08.640 "name": "raid_bdev1", 00:29:08.640 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:29:08.640 "strip_size_kb": 0, 00:29:08.640 "state": "online", 00:29:08.640 "raid_level": "raid1", 00:29:08.640 "superblock": true, 00:29:08.640 "num_base_bdevs": 4, 00:29:08.640 "num_base_bdevs_discovered": 3, 00:29:08.640 "num_base_bdevs_operational": 3, 00:29:08.640 "process": { 00:29:08.640 "type": "rebuild", 00:29:08.640 "target": "spare", 00:29:08.640 "progress": { 00:29:08.640 "blocks": 22528, 00:29:08.640 "percent": 35 00:29:08.640 } 00:29:08.640 }, 00:29:08.640 "base_bdevs_list": [ 00:29:08.640 { 00:29:08.640 "name": "spare", 00:29:08.640 "uuid": "88da4226-98eb-5777-9976-b651d2909e23", 00:29:08.640 "is_configured": true, 00:29:08.640 "data_offset": 2048, 00:29:08.640 "data_size": 63488 00:29:08.640 }, 00:29:08.640 { 00:29:08.640 "name": null, 00:29:08.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:08.640 "is_configured": false, 00:29:08.640 "data_offset": 2048, 00:29:08.640 "data_size": 63488 00:29:08.640 }, 00:29:08.640 { 00:29:08.640 "name": "BaseBdev3", 00:29:08.640 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:29:08.640 "is_configured": true, 00:29:08.640 "data_offset": 2048, 00:29:08.640 "data_size": 63488 00:29:08.640 }, 00:29:08.640 { 00:29:08.640 "name": "BaseBdev4", 00:29:08.640 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:29:08.640 "is_configured": true, 00:29:08.640 "data_offset": 2048, 00:29:08.640 "data_size": 63488 00:29:08.640 } 
00:29:08.640 ] 00:29:08.640 }' 00:29:08.640 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:08.640 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:08.640 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:08.640 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:08.640 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:08.900 [2024-07-12 07:39:42.661375] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:08.900 [2024-07-12 07:39:42.736441] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:08.900 [2024-07-12 07:39:42.736512] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:08.900 [2024-07-12 07:39:42.736538] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:08.900 [2024-07-12 07:39:42.736547] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:08.900 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:08.900 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:08.900 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:08.900 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:08.900 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:08.900 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:08.900 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:08.900 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:08.900 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:08.900 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:08.900 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:08.900 07:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:09.160 07:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:09.160 "name": "raid_bdev1", 00:29:09.160 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:29:09.160 "strip_size_kb": 0, 00:29:09.160 "state": "online", 00:29:09.160 "raid_level": "raid1", 00:29:09.160 "superblock": true, 00:29:09.160 "num_base_bdevs": 4, 00:29:09.160 "num_base_bdevs_discovered": 2, 00:29:09.160 "num_base_bdevs_operational": 2, 00:29:09.160 "base_bdevs_list": [ 00:29:09.160 { 00:29:09.160 "name": null, 00:29:09.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:09.160 "is_configured": false, 00:29:09.160 "data_offset": 2048, 00:29:09.160 "data_size": 63488 00:29:09.160 }, 00:29:09.160 { 00:29:09.160 "name": null, 00:29:09.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:09.160 "is_configured": 
false, 00:29:09.160 "data_offset": 2048, 00:29:09.160 "data_size": 63488 00:29:09.160 }, 00:29:09.160 { 00:29:09.160 "name": "BaseBdev3", 00:29:09.160 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:29:09.160 "is_configured": true, 00:29:09.160 "data_offset": 2048, 00:29:09.160 "data_size": 63488 00:29:09.160 }, 00:29:09.160 { 00:29:09.160 "name": "BaseBdev4", 00:29:09.160 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:29:09.160 "is_configured": true, 00:29:09.160 "data_offset": 2048, 00:29:09.160 "data_size": 63488 00:29:09.160 } 00:29:09.160 ] 00:29:09.160 }' 00:29:09.160 07:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:09.160 07:39:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:09.727 07:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:09.986 [2024-07-12 07:39:43.807700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:09.986 [2024-07-12 07:39:43.807773] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:09.986 [2024-07-12 07:39:43.807823] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:29:09.986 [2024-07-12 07:39:43.807844] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:09.986 [2024-07-12 07:39:43.808287] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:09.986 [2024-07-12 07:39:43.808325] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:09.986 [2024-07-12 07:39:43.808414] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:09.986 [2024-07-12 07:39:43.808427] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:09.986 [2024-07-12 07:39:43.808435] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
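The "Re-adding bdev spare" notice above is the superblock path working as intended: deleting and re-creating the passthru bdev forces SPDK to re-examine spare, examine finds a raid superblock whose seq_number (5) is older than the live array's (6), and the bdev is therefore re-added as a rebuild target instead of being rejected. The whole round trip is driven by two RPCs, sketched here under the same delay-backed stack the test uses:

    # Tear down and re-create the passthru wrapper; examine runs on create
    "$rpc" -s "$sock" bdev_passthru_delete spare
    "$rpc" -s "$sock" bdev_passthru_create -b spare_delay -p spare
    # Expected follow-up in the log: "Re-adding bdev spare to raid bdev raid_bdev1."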
00:29:09.986 [2024-07-12 07:39:43.808475] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:09.986 [2024-07-12 07:39:43.811598] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caf320 00:29:09.986 [2024-07-12 07:39:43.813578] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:09.986 spare 00:29:09.986 07:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:29:11.364 07:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:11.364 07:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:11.364 07:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:11.364 07:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:11.364 07:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:11.364 07:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:11.364 07:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:11.364 07:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:11.364 "name": "raid_bdev1", 00:29:11.364 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:29:11.364 "strip_size_kb": 0, 00:29:11.364 "state": "online", 00:29:11.364 "raid_level": "raid1", 00:29:11.364 "superblock": true, 00:29:11.364 "num_base_bdevs": 4, 00:29:11.364 "num_base_bdevs_discovered": 3, 00:29:11.364 "num_base_bdevs_operational": 3, 00:29:11.364 "process": { 00:29:11.364 "type": "rebuild", 00:29:11.364 "target": "spare", 00:29:11.364 "progress": { 00:29:11.364 "blocks": 24576, 00:29:11.364 "percent": 38 00:29:11.364 } 00:29:11.364 }, 00:29:11.364 "base_bdevs_list": [ 00:29:11.364 { 00:29:11.364 "name": "spare", 00:29:11.364 "uuid": "88da4226-98eb-5777-9976-b651d2909e23", 00:29:11.364 "is_configured": true, 00:29:11.364 "data_offset": 2048, 00:29:11.364 "data_size": 63488 00:29:11.364 }, 00:29:11.364 { 00:29:11.364 "name": null, 00:29:11.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:11.364 "is_configured": false, 00:29:11.364 "data_offset": 2048, 00:29:11.364 "data_size": 63488 00:29:11.364 }, 00:29:11.364 { 00:29:11.364 "name": "BaseBdev3", 00:29:11.364 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:29:11.364 "is_configured": true, 00:29:11.364 "data_offset": 2048, 00:29:11.364 "data_size": 63488 00:29:11.364 }, 00:29:11.364 { 00:29:11.364 "name": "BaseBdev4", 00:29:11.364 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:29:11.364 "is_configured": true, 00:29:11.364 "data_offset": 2048, 00:29:11.364 "data_size": 63488 00:29:11.364 } 00:29:11.364 ] 00:29:11.364 }' 00:29:11.364 07:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:11.364 07:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:11.364 07:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:11.364 07:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:11.364 07:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:11.623 [2024-07-12 07:39:45.422518] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:11.881 [2024-07-12 07:39:45.522024] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:11.881 [2024-07-12 07:39:45.522088] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:11.881 [2024-07-12 07:39:45.522105] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:11.881 [2024-07-12 07:39:45.522112] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:11.881 07:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:11.882 07:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:11.882 07:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:11.882 07:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:11.882 07:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:11.882 07:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:11.882 07:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:11.882 07:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:11.882 07:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:11.882 07:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:11.882 07:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:11.882 07:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:11.882 07:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:11.882 "name": "raid_bdev1", 00:29:11.882 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:29:11.882 "strip_size_kb": 0, 00:29:11.882 "state": "online", 00:29:11.882 "raid_level": "raid1", 00:29:11.882 "superblock": true, 00:29:11.882 "num_base_bdevs": 4, 00:29:11.882 "num_base_bdevs_discovered": 2, 00:29:11.882 "num_base_bdevs_operational": 2, 00:29:11.882 "base_bdevs_list": [ 00:29:11.882 { 00:29:11.882 "name": null, 00:29:11.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:11.882 "is_configured": false, 00:29:11.882 "data_offset": 2048, 00:29:11.882 "data_size": 63488 00:29:11.882 }, 00:29:11.882 { 00:29:11.882 "name": null, 00:29:11.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:11.882 "is_configured": false, 00:29:11.882 "data_offset": 2048, 00:29:11.882 "data_size": 63488 00:29:11.882 }, 00:29:11.882 { 00:29:11.882 "name": "BaseBdev3", 00:29:11.882 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:29:11.882 "is_configured": true, 00:29:11.882 "data_offset": 2048, 00:29:11.882 "data_size": 63488 00:29:11.882 }, 00:29:11.882 { 00:29:11.882 "name": "BaseBdev4", 00:29:11.882 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:29:11.882 "is_configured": true, 00:29:11.882 "data_offset": 2048, 00:29:11.882 "data_size": 63488 00:29:11.882 } 00:29:11.882 ] 00:29:11.882 }' 
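With the spare torn down mid-rebuild, the rebuild finishes with "No such device" and the test falls back to a pure state assertion: the array must stay online at raid1 with only two of its four base bdevs discovered and operational. A minimal sketch of that assertion, again reusing the rpc/sock variables from the first snippet:

    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(jq -r .state <<< "$info")" = online ]
    [ "$(jq -r .raid_level <<< "$info")" = raid1 ]
    [ "$(jq -r .num_base_bdevs_discovered <<< "$info")" = 2 ]
    [ "$(jq -r .num_base_bdevs_operational <<< "$info")" = 2 ]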
00:29:11.882 07:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:11.882 07:39:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:12.818 07:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:12.818 07:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:12.818 07:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:12.818 07:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:12.818 07:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:12.818 07:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:12.818 07:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:12.818 07:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:12.818 "name": "raid_bdev1", 00:29:12.818 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:29:12.818 "strip_size_kb": 0, 00:29:12.818 "state": "online", 00:29:12.818 "raid_level": "raid1", 00:29:12.818 "superblock": true, 00:29:12.818 "num_base_bdevs": 4, 00:29:12.818 "num_base_bdevs_discovered": 2, 00:29:12.818 "num_base_bdevs_operational": 2, 00:29:12.818 "base_bdevs_list": [ 00:29:12.818 { 00:29:12.818 "name": null, 00:29:12.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:12.818 "is_configured": false, 00:29:12.818 "data_offset": 2048, 00:29:12.818 "data_size": 63488 00:29:12.818 }, 00:29:12.818 { 00:29:12.818 "name": null, 00:29:12.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:12.818 "is_configured": false, 00:29:12.818 "data_offset": 2048, 00:29:12.818 "data_size": 63488 00:29:12.818 }, 00:29:12.818 { 00:29:12.818 "name": "BaseBdev3", 00:29:12.818 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:29:12.818 "is_configured": true, 00:29:12.818 "data_offset": 2048, 00:29:12.818 "data_size": 63488 00:29:12.818 }, 00:29:12.818 { 00:29:12.818 "name": "BaseBdev4", 00:29:12.818 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:29:12.818 "is_configured": true, 00:29:12.818 "data_offset": 2048, 00:29:12.818 "data_size": 63488 00:29:12.818 } 00:29:12.818 ] 00:29:12.818 }' 00:29:12.818 07:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:12.818 07:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:12.818 07:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:12.818 07:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:12.818 07:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:29:13.077 07:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:13.336 [2024-07-12 07:39:46.979539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:13.336 [2024-07-12 07:39:46.979626] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
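BaseBdev1 gets the opposite treatment from spare in the examine trace that continues below: its superblock carries seq_number 1 against the array's 6, and the array's superblock no longer lists BaseBdev1's uuid (its slot in every base_bdevs_list dump above is already null or spare), so examine ends with "raid superblock does not contain this bdev's uuid" and BaseBdev1 stays out of the array. One way to observe that from the RPC side, as a sketch rather than the script's own assertion:

    # Names of the base bdev slots as the array sees them at this point
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[].name'
    # Expected output: null, null, BaseBdev3, BaseBdev4 (one per line)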
00:29:13.336 [2024-07-12 07:39:46.979681] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:29:13.336 [2024-07-12 07:39:46.979702] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:13.336 [2024-07-12 07:39:46.980113] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:13.336 [2024-07-12 07:39:46.980142] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:13.336 [2024-07-12 07:39:46.980218] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:13.336 [2024-07-12 07:39:46.980231] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:29:13.336 [2024-07-12 07:39:46.980239] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:13.336 BaseBdev1 00:29:13.336 07:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:29:14.272 07:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:14.272 07:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:14.272 07:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:14.272 07:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:14.272 07:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:14.272 07:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:14.272 07:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:14.272 07:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:14.272 07:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:14.272 07:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:14.272 07:39:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:14.272 07:39:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:14.531 07:39:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:14.531 "name": "raid_bdev1", 00:29:14.531 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:29:14.531 "strip_size_kb": 0, 00:29:14.531 "state": "online", 00:29:14.531 "raid_level": "raid1", 00:29:14.531 "superblock": true, 00:29:14.531 "num_base_bdevs": 4, 00:29:14.531 "num_base_bdevs_discovered": 2, 00:29:14.531 "num_base_bdevs_operational": 2, 00:29:14.531 "base_bdevs_list": [ 00:29:14.531 { 00:29:14.531 "name": null, 00:29:14.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:14.531 "is_configured": false, 00:29:14.531 "data_offset": 2048, 00:29:14.531 "data_size": 63488 00:29:14.531 }, 00:29:14.531 { 00:29:14.531 "name": null, 00:29:14.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:14.531 "is_configured": false, 00:29:14.531 "data_offset": 2048, 00:29:14.531 "data_size": 63488 00:29:14.531 }, 00:29:14.531 { 00:29:14.531 "name": "BaseBdev3", 00:29:14.531 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:29:14.531 "is_configured": 
true, 00:29:14.531 "data_offset": 2048, 00:29:14.531 "data_size": 63488 00:29:14.531 }, 00:29:14.531 { 00:29:14.531 "name": "BaseBdev4", 00:29:14.531 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:29:14.531 "is_configured": true, 00:29:14.531 "data_offset": 2048, 00:29:14.531 "data_size": 63488 00:29:14.531 } 00:29:14.531 ] 00:29:14.531 }' 00:29:14.531 07:39:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:14.531 07:39:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:15.099 07:39:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:15.099 07:39:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:15.099 07:39:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:15.099 07:39:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:15.099 07:39:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:15.099 07:39:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:15.099 07:39:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:15.358 07:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:15.358 "name": "raid_bdev1", 00:29:15.358 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:29:15.358 "strip_size_kb": 0, 00:29:15.358 "state": "online", 00:29:15.358 "raid_level": "raid1", 00:29:15.358 "superblock": true, 00:29:15.358 "num_base_bdevs": 4, 00:29:15.358 "num_base_bdevs_discovered": 2, 00:29:15.358 "num_base_bdevs_operational": 2, 00:29:15.358 "base_bdevs_list": [ 00:29:15.358 { 00:29:15.358 "name": null, 00:29:15.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:15.358 "is_configured": false, 00:29:15.358 "data_offset": 2048, 00:29:15.358 "data_size": 63488 00:29:15.358 }, 00:29:15.358 { 00:29:15.358 "name": null, 00:29:15.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:15.358 "is_configured": false, 00:29:15.358 "data_offset": 2048, 00:29:15.358 "data_size": 63488 00:29:15.358 }, 00:29:15.358 { 00:29:15.358 "name": "BaseBdev3", 00:29:15.358 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:29:15.358 "is_configured": true, 00:29:15.358 "data_offset": 2048, 00:29:15.358 "data_size": 63488 00:29:15.358 }, 00:29:15.358 { 00:29:15.358 "name": "BaseBdev4", 00:29:15.358 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:29:15.358 "is_configured": true, 00:29:15.358 "data_offset": 2048, 00:29:15.358 "data_size": 63488 00:29:15.358 } 00:29:15.358 ] 00:29:15.358 }' 00:29:15.358 07:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:15.358 07:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:15.358 07:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:15.358 07:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:15.358 07:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:15.358 07:39:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 
-- # local es=0 00:29:15.358 07:39:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:15.358 07:39:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:15.358 07:39:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:15.358 07:39:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:15.358 07:39:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:15.358 07:39:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:15.358 07:39:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:15.358 07:39:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:15.358 07:39:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:29:15.358 07:39:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:15.617 [2024-07-12 07:39:49.347806] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:15.617 [2024-07-12 07:39:49.347953] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:29:15.617 [2024-07-12 07:39:49.347965] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:15.617 request: 00:29:15.617 { 00:29:15.617 "raid_bdev": "raid_bdev1", 00:29:15.617 "base_bdev": "BaseBdev1", 00:29:15.617 "method": "bdev_raid_add_base_bdev", 00:29:15.617 "req_id": 1 00:29:15.617 } 00:29:15.617 Got JSON-RPC error response 00:29:15.617 response: 00:29:15.617 { 00:29:15.617 "code": -22, 00:29:15.617 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:15.617 } 00:29:15.617 07:39:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:29:15.617 07:39:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:15.617 07:39:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:15.617 07:39:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:15.617 07:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:29:16.553 07:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:16.553 07:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:16.553 07:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:16.553 07:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:16.553 07:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:16.553 07:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:29:16.553 07:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:16.553 07:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:16.553 07:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:16.553 07:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:16.553 07:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:16.553 07:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:16.812 07:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:16.812 "name": "raid_bdev1", 00:29:16.812 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:29:16.812 "strip_size_kb": 0, 00:29:16.812 "state": "online", 00:29:16.812 "raid_level": "raid1", 00:29:16.812 "superblock": true, 00:29:16.812 "num_base_bdevs": 4, 00:29:16.812 "num_base_bdevs_discovered": 2, 00:29:16.812 "num_base_bdevs_operational": 2, 00:29:16.812 "base_bdevs_list": [ 00:29:16.812 { 00:29:16.812 "name": null, 00:29:16.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:16.812 "is_configured": false, 00:29:16.812 "data_offset": 2048, 00:29:16.812 "data_size": 63488 00:29:16.812 }, 00:29:16.812 { 00:29:16.812 "name": null, 00:29:16.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:16.812 "is_configured": false, 00:29:16.812 "data_offset": 2048, 00:29:16.812 "data_size": 63488 00:29:16.812 }, 00:29:16.812 { 00:29:16.812 "name": "BaseBdev3", 00:29:16.812 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:29:16.812 "is_configured": true, 00:29:16.812 "data_offset": 2048, 00:29:16.812 "data_size": 63488 00:29:16.812 }, 00:29:16.812 { 00:29:16.812 "name": "BaseBdev4", 00:29:16.812 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:29:16.812 "is_configured": true, 00:29:16.812 "data_offset": 2048, 00:29:16.812 "data_size": 63488 00:29:16.812 } 00:29:16.812 ] 00:29:16.812 }' 00:29:16.812 07:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:16.812 07:39:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:17.380 07:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:17.380 07:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:17.380 07:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:17.380 07:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:17.380 07:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:17.380 07:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:17.380 07:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:17.639 07:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:17.639 "name": "raid_bdev1", 00:29:17.639 "uuid": "388da74f-9d6f-408b-a105-9851f3a30db9", 00:29:17.639 "strip_size_kb": 0, 00:29:17.639 "state": "online", 00:29:17.639 "raid_level": "raid1", 00:29:17.639 "superblock": 
true, 00:29:17.639 "num_base_bdevs": 4, 00:29:17.639 "num_base_bdevs_discovered": 2, 00:29:17.639 "num_base_bdevs_operational": 2, 00:29:17.639 "base_bdevs_list": [ 00:29:17.639 { 00:29:17.639 "name": null, 00:29:17.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.639 "is_configured": false, 00:29:17.639 "data_offset": 2048, 00:29:17.639 "data_size": 63488 00:29:17.639 }, 00:29:17.639 { 00:29:17.639 "name": null, 00:29:17.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.639 "is_configured": false, 00:29:17.639 "data_offset": 2048, 00:29:17.639 "data_size": 63488 00:29:17.639 }, 00:29:17.639 { 00:29:17.639 "name": "BaseBdev3", 00:29:17.639 "uuid": "1411a87b-08a1-50e5-b44e-2edd35556550", 00:29:17.639 "is_configured": true, 00:29:17.639 "data_offset": 2048, 00:29:17.639 "data_size": 63488 00:29:17.639 }, 00:29:17.639 { 00:29:17.639 "name": "BaseBdev4", 00:29:17.639 "uuid": "77b61067-13b9-5c20-be22-af93b7a34994", 00:29:17.639 "is_configured": true, 00:29:17.639 "data_offset": 2048, 00:29:17.639 "data_size": 63488 00:29:17.639 } 00:29:17.639 ] 00:29:17.639 }' 00:29:17.639 07:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:17.639 07:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:17.639 07:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:17.898 07:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:17.898 07:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 157129 00:29:17.898 07:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@946 -- # '[' -z 157129 ']' 00:29:17.898 07:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # kill -0 157129 00:29:17.898 07:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # uname 00:29:17.898 07:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:17.898 07:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 157129 00:29:17.898 07:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:17.898 07:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:17.898 killing process with pid 157129 00:29:17.898 07:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 157129' 00:29:17.898 07:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@965 -- # kill 157129
00:29:17.898 Received shutdown signal, test time was about 60.000000 seconds
00:29:17.898 00
00:29:17.898 Latency(us)
00:29:17.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:17.898 ===================================================================================================================
00:29:17.898 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:29:17.898 07:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # wait 157129 00:29:17.898 [2024-07-12 07:39:51.594272] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:17.898 [2024-07-12 07:39:51.594378] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:17.898 [2024-07-12 07:39:51.594440] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0,
going to free all in destruct 00:29:17.898 [2024-07-12 07:39:51.594450] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:29:17.898 [2024-07-12 07:39:51.642862] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:18.158 07:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:29:18.158 00:29:18.158 real 0m34.331s 00:29:18.158 user 0m50.490s 00:29:18.158 sys 0m5.728s 00:29:18.158 07:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:18.158 07:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:18.158 ************************************ 00:29:18.158 END TEST raid_rebuild_test_sb 00:29:18.158 ************************************ 00:29:18.158 07:39:51 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:29:18.158 07:39:51 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:29:18.158 07:39:51 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:18.158 07:39:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:18.158 ************************************ 00:29:18.158 START TEST raid_rebuild_test_io 00:29:18.158 ************************************ 00:29:18.158 07:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 4 false true true 00:29:18.158 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:18.158 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:29:18.158 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:29:18.158 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:29:18.158 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:18.158 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:18.158 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:18.158 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:29:18.158 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:18.158 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 
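The burst of `(( i = 1 ))` / `(( i <= num_base_bdevs ))` / `echo BaseBdevN` entries just traced, together with the `local base_bdevs` entry that follows, is the xtrace of expanding the base bdev name list for raid_rebuild_test. A minimal bash sketch consistent with that trace (the count and BaseBdevN naming are taken from this run; the exact statement in bdev_raid.sh may differ):

    num_base_bdevs=4
    # The command substitution runs the loop first, so xtrace prints the
    # echoes before the resulting array assignment, exactly as seen above.
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))
    echo "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4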
00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=158050 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 158050 /var/tmp/spdk-raid.sock 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@827 -- # '[' -z 158050 ']' 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:18.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:18.159 07:39:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:18.418 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:18.418 Zero copy mechanism will not be used. 00:29:18.418 [2024-07-12 07:39:52.075987] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
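At this point the test has launched its own bdevperf instance (raid_pid=158050) against a private RPC socket and blocks in `waitforlisten` until the application answers; the DPDK EAL parameter dump that follows is that process starting up. A rough sketch of the launch-and-wait pattern using the flags recorded above (the polling loop is illustrative; the real waitforlisten in autotest_common.sh also checks that the pid is still alive and enforces a timeout):

    rootdir=/home/vagrant/spdk_repo/spdk
    rpc_sock=/var/tmp/spdk-raid.sock
    "$rootdir"/build/examples/bdevperf -r "$rpc_sock" -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Poll the UNIX-domain RPC socket until the app is ready; rpc.py exits
    # non-zero while the socket is absent or the server is not yet serving.
    while ! "$rootdir"/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; do
        sleep 0.1
    done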
00:29:18.418 [2024-07-12 07:39:52.076242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158050 ] 00:29:18.418 [2024-07-12 07:39:52.233738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.418 [2024-07-12 07:39:52.281564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.678 [2024-07-12 07:39:52.325661] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:19.261 07:39:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:19.261 07:39:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # return 0 00:29:19.261 07:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:19.261 07:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:19.525 BaseBdev1_malloc 00:29:19.525 07:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:19.806 [2024-07-12 07:39:53.487479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:19.806 [2024-07-12 07:39:53.487588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:19.806 [2024-07-12 07:39:53.487636] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:29:19.806 [2024-07-12 07:39:53.487679] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:19.806 [2024-07-12 07:39:53.490047] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:19.806 [2024-07-12 07:39:53.490113] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:19.806 BaseBdev1 00:29:19.806 07:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:19.806 07:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:19.806 BaseBdev2_malloc 00:29:19.806 07:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:20.065 [2024-07-12 07:39:53.904244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:20.065 [2024-07-12 07:39:53.904303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:20.065 [2024-07-12 07:39:53.904336] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:29:20.065 [2024-07-12 07:39:53.904375] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:20.065 [2024-07-12 07:39:53.906580] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:20.065 [2024-07-12 07:39:53.906628] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:20.065 BaseBdev2 00:29:20.065 07:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in 
"${base_bdevs[@]}" 00:29:20.065 07:39:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:20.323 BaseBdev3_malloc 00:29:20.323 07:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:20.581 [2024-07-12 07:39:54.280402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:20.581 [2024-07-12 07:39:54.280461] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:20.581 [2024-07-12 07:39:54.280499] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:20.581 [2024-07-12 07:39:54.280537] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:20.581 [2024-07-12 07:39:54.282708] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:20.581 [2024-07-12 07:39:54.282772] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:20.581 BaseBdev3 00:29:20.581 07:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:20.581 07:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:20.839 BaseBdev4_malloc 00:29:20.839 07:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:20.840 [2024-07-12 07:39:54.653112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:20.840 [2024-07-12 07:39:54.653183] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:20.840 [2024-07-12 07:39:54.653213] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:20.840 [2024-07-12 07:39:54.653254] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:20.840 [2024-07-12 07:39:54.655526] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:20.840 [2024-07-12 07:39:54.655602] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:20.840 BaseBdev4 00:29:20.840 07:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:21.097 spare_malloc 00:29:21.097 07:39:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:21.355 spare_delay 00:29:21.355 07:39:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:21.614 [2024-07-12 07:39:55.253987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:21.614 [2024-07-12 07:39:55.254053] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:21.614 [2024-07-12 07:39:55.254087] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000009080 00:29:21.614 [2024-07-12 07:39:55.254125] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:21.614 [2024-07-12 07:39:55.256329] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:21.614 [2024-07-12 07:39:55.256397] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:21.614 spare 00:29:21.614 07:39:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:21.614 [2024-07-12 07:39:55.430092] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:21.614 [2024-07-12 07:39:55.432018] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:21.614 [2024-07-12 07:39:55.432085] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:21.614 [2024-07-12 07:39:55.432127] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:21.614 [2024-07-12 07:39:55.432219] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:29:21.614 [2024-07-12 07:39:55.432229] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:21.614 [2024-07-12 07:39:55.432355] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:29:21.614 [2024-07-12 07:39:55.432685] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:29:21.614 [2024-07-12 07:39:55.432705] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:29:21.614 [2024-07-12 07:39:55.432881] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:21.614 07:39:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:21.614 07:39:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:21.614 07:39:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:21.614 07:39:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:21.614 07:39:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:21.614 07:39:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:21.614 07:39:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:21.614 07:39:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:21.614 07:39:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:21.614 07:39:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:21.614 07:39:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:21.614 07:39:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:21.872 07:39:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:21.873 "name": "raid_bdev1", 00:29:21.873 "uuid": "19550c6a-93e8-4e7c-8334-456706c3400c", 00:29:21.873 
"strip_size_kb": 0, 00:29:21.873 "state": "online", 00:29:21.873 "raid_level": "raid1", 00:29:21.873 "superblock": false, 00:29:21.873 "num_base_bdevs": 4, 00:29:21.873 "num_base_bdevs_discovered": 4, 00:29:21.873 "num_base_bdevs_operational": 4, 00:29:21.873 "base_bdevs_list": [ 00:29:21.873 { 00:29:21.873 "name": "BaseBdev1", 00:29:21.873 "uuid": "e8efcca3-42ed-55eb-8b20-b8ff130578d5", 00:29:21.873 "is_configured": true, 00:29:21.873 "data_offset": 0, 00:29:21.873 "data_size": 65536 00:29:21.873 }, 00:29:21.873 { 00:29:21.873 "name": "BaseBdev2", 00:29:21.873 "uuid": "7eba23f3-2db9-517a-bd26-d98d3f401d97", 00:29:21.873 "is_configured": true, 00:29:21.873 "data_offset": 0, 00:29:21.873 "data_size": 65536 00:29:21.873 }, 00:29:21.873 { 00:29:21.873 "name": "BaseBdev3", 00:29:21.873 "uuid": "48ed658b-2cc9-575d-b2c2-e4053b645df5", 00:29:21.873 "is_configured": true, 00:29:21.873 "data_offset": 0, 00:29:21.873 "data_size": 65536 00:29:21.873 }, 00:29:21.873 { 00:29:21.873 "name": "BaseBdev4", 00:29:21.873 "uuid": "f5d9faad-e2e1-5adc-a46c-166f56cfafc8", 00:29:21.873 "is_configured": true, 00:29:21.873 "data_offset": 0, 00:29:21.873 "data_size": 65536 00:29:21.873 } 00:29:21.873 ] 00:29:21.873 }' 00:29:21.873 07:39:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:21.873 07:39:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.441 07:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:22.441 07:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:22.699 [2024-07-12 07:39:56.430415] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:22.699 07:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:29:22.699 07:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:22.699 07:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:22.958 07:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:29:22.958 07:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:29:22.958 07:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:22.958 07:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:29:22.958 [2024-07-12 07:39:56.700140] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:29:22.958 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:22.958 Zero copy mechanism will not be used. 00:29:22.958 Running I/O for 60 seconds... 
00:29:22.958 [2024-07-12 07:39:56.793973] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:22.958 [2024-07-12 07:39:56.804557] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:29:22.958 07:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:22.958 07:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:22.958 07:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:22.958 07:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:22.958 07:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:22.958 07:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:22.958 07:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:22.958 07:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:22.958 07:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:22.958 07:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:23.217 07:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:23.217 07:39:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:23.217 07:39:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:23.217 "name": "raid_bdev1", 00:29:23.217 "uuid": "19550c6a-93e8-4e7c-8334-456706c3400c", 00:29:23.217 "strip_size_kb": 0, 00:29:23.217 "state": "online", 00:29:23.217 "raid_level": "raid1", 00:29:23.217 "superblock": false, 00:29:23.217 "num_base_bdevs": 4, 00:29:23.217 "num_base_bdevs_discovered": 3, 00:29:23.217 "num_base_bdevs_operational": 3, 00:29:23.217 "base_bdevs_list": [ 00:29:23.217 { 00:29:23.217 "name": null, 00:29:23.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:23.217 "is_configured": false, 00:29:23.217 "data_offset": 0, 00:29:23.217 "data_size": 65536 00:29:23.217 }, 00:29:23.217 { 00:29:23.217 "name": "BaseBdev2", 00:29:23.217 "uuid": "7eba23f3-2db9-517a-bd26-d98d3f401d97", 00:29:23.217 "is_configured": true, 00:29:23.217 "data_offset": 0, 00:29:23.217 "data_size": 65536 00:29:23.217 }, 00:29:23.217 { 00:29:23.217 "name": "BaseBdev3", 00:29:23.217 "uuid": "48ed658b-2cc9-575d-b2c2-e4053b645df5", 00:29:23.217 "is_configured": true, 00:29:23.217 "data_offset": 0, 00:29:23.217 "data_size": 65536 00:29:23.217 }, 00:29:23.217 { 00:29:23.217 "name": "BaseBdev4", 00:29:23.217 "uuid": "f5d9faad-e2e1-5adc-a46c-166f56cfafc8", 00:29:23.217 "is_configured": true, 00:29:23.217 "data_offset": 0, 00:29:23.217 "data_size": 65536 00:29:23.217 } 00:29:23.217 ] 00:29:23.217 }' 00:29:23.217 07:39:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:23.217 07:39:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:23.785 07:39:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:24.044 [2024-07-12 07:39:57.694860] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:29:24.044 [2024-07-12 07:39:57.742740] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:29:24.044 [2024-07-12 07:39:57.744912] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:24.044 07:39:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:24.044 [2024-07-12 07:39:57.866395] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:24.044 [2024-07-12 07:39:57.867539] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:24.303 [2024-07-12 07:39:58.074722] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:24.303 [2024-07-12 07:39:58.075291] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:24.561 [2024-07-12 07:39:58.411114] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:24.819 [2024-07-12 07:39:58.632497] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:25.077 07:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:25.077 07:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:25.077 07:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:25.077 07:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:25.077 07:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:25.077 07:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:25.077 07:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:25.336 07:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:25.336 "name": "raid_bdev1", 00:29:25.336 "uuid": "19550c6a-93e8-4e7c-8334-456706c3400c", 00:29:25.336 "strip_size_kb": 0, 00:29:25.336 "state": "online", 00:29:25.336 "raid_level": "raid1", 00:29:25.336 "superblock": false, 00:29:25.336 "num_base_bdevs": 4, 00:29:25.336 "num_base_bdevs_discovered": 4, 00:29:25.336 "num_base_bdevs_operational": 4, 00:29:25.336 "process": { 00:29:25.336 "type": "rebuild", 00:29:25.336 "target": "spare", 00:29:25.336 "progress": { 00:29:25.336 "blocks": 12288, 00:29:25.336 "percent": 18 00:29:25.336 } 00:29:25.336 }, 00:29:25.336 "base_bdevs_list": [ 00:29:25.336 { 00:29:25.336 "name": "spare", 00:29:25.336 "uuid": "dfea28ef-4960-5dcd-9250-a8d889571c8e", 00:29:25.336 "is_configured": true, 00:29:25.336 "data_offset": 0, 00:29:25.336 "data_size": 65536 00:29:25.336 }, 00:29:25.336 { 00:29:25.336 "name": "BaseBdev2", 00:29:25.336 "uuid": "7eba23f3-2db9-517a-bd26-d98d3f401d97", 00:29:25.336 "is_configured": true, 00:29:25.336 "data_offset": 0, 00:29:25.336 "data_size": 65536 00:29:25.336 }, 00:29:25.336 { 00:29:25.336 "name": "BaseBdev3", 00:29:25.336 "uuid": "48ed658b-2cc9-575d-b2c2-e4053b645df5", 00:29:25.336 "is_configured": true, 00:29:25.336 "data_offset": 0, 00:29:25.336 "data_size": 65536 
00:29:25.336 }, 00:29:25.336 { 00:29:25.336 "name": "BaseBdev4", 00:29:25.336 "uuid": "f5d9faad-e2e1-5adc-a46c-166f56cfafc8", 00:29:25.336 "is_configured": true, 00:29:25.336 "data_offset": 0, 00:29:25.336 "data_size": 65536 00:29:25.336 } 00:29:25.336 ] 00:29:25.336 }' 00:29:25.336 07:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:25.336 07:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:25.336 07:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:25.336 07:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:25.336 07:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:25.595 [2024-07-12 07:39:59.288372] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:25.595 [2024-07-12 07:39:59.292991] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:25.595 [2024-07-12 07:39:59.293364] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:25.595 [2024-07-12 07:39:59.294262] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:25.595 [2024-07-12 07:39:59.309082] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:25.595 [2024-07-12 07:39:59.309129] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:25.595 [2024-07-12 07:39:59.309141] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:25.595 [2024-07-12 07:39:59.332336] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:29:25.595 07:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:25.595 07:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:25.595 07:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:25.595 07:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:25.595 07:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:25.595 07:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:25.595 07:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:25.595 07:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:25.595 07:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:25.595 07:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:25.595 07:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:25.595 07:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:25.854 07:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:25.854 "name": "raid_bdev1", 
00:29:25.854 "uuid": "19550c6a-93e8-4e7c-8334-456706c3400c", 00:29:25.854 "strip_size_kb": 0, 00:29:25.854 "state": "online", 00:29:25.854 "raid_level": "raid1", 00:29:25.854 "superblock": false, 00:29:25.854 "num_base_bdevs": 4, 00:29:25.854 "num_base_bdevs_discovered": 3, 00:29:25.854 "num_base_bdevs_operational": 3, 00:29:25.854 "base_bdevs_list": [ 00:29:25.854 { 00:29:25.854 "name": null, 00:29:25.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:25.854 "is_configured": false, 00:29:25.854 "data_offset": 0, 00:29:25.854 "data_size": 65536 00:29:25.854 }, 00:29:25.854 { 00:29:25.854 "name": "BaseBdev2", 00:29:25.854 "uuid": "7eba23f3-2db9-517a-bd26-d98d3f401d97", 00:29:25.854 "is_configured": true, 00:29:25.854 "data_offset": 0, 00:29:25.854 "data_size": 65536 00:29:25.854 }, 00:29:25.854 { 00:29:25.854 "name": "BaseBdev3", 00:29:25.854 "uuid": "48ed658b-2cc9-575d-b2c2-e4053b645df5", 00:29:25.854 "is_configured": true, 00:29:25.854 "data_offset": 0, 00:29:25.854 "data_size": 65536 00:29:25.854 }, 00:29:25.854 { 00:29:25.854 "name": "BaseBdev4", 00:29:25.854 "uuid": "f5d9faad-e2e1-5adc-a46c-166f56cfafc8", 00:29:25.854 "is_configured": true, 00:29:25.854 "data_offset": 0, 00:29:25.854 "data_size": 65536 00:29:25.854 } 00:29:25.854 ] 00:29:25.854 }' 00:29:25.854 07:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:25.854 07:39:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:26.422 07:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:26.422 07:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:26.422 07:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:26.422 07:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:26.422 07:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:26.422 07:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:26.422 07:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:26.681 07:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:26.681 "name": "raid_bdev1", 00:29:26.681 "uuid": "19550c6a-93e8-4e7c-8334-456706c3400c", 00:29:26.681 "strip_size_kb": 0, 00:29:26.681 "state": "online", 00:29:26.681 "raid_level": "raid1", 00:29:26.681 "superblock": false, 00:29:26.681 "num_base_bdevs": 4, 00:29:26.681 "num_base_bdevs_discovered": 3, 00:29:26.681 "num_base_bdevs_operational": 3, 00:29:26.681 "base_bdevs_list": [ 00:29:26.681 { 00:29:26.681 "name": null, 00:29:26.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:26.681 "is_configured": false, 00:29:26.681 "data_offset": 0, 00:29:26.681 "data_size": 65536 00:29:26.681 }, 00:29:26.681 { 00:29:26.681 "name": "BaseBdev2", 00:29:26.681 "uuid": "7eba23f3-2db9-517a-bd26-d98d3f401d97", 00:29:26.681 "is_configured": true, 00:29:26.681 "data_offset": 0, 00:29:26.681 "data_size": 65536 00:29:26.681 }, 00:29:26.681 { 00:29:26.681 "name": "BaseBdev3", 00:29:26.681 "uuid": "48ed658b-2cc9-575d-b2c2-e4053b645df5", 00:29:26.681 "is_configured": true, 00:29:26.681 "data_offset": 0, 00:29:26.681 "data_size": 65536 00:29:26.681 }, 00:29:26.681 { 00:29:26.681 "name": "BaseBdev4", 
00:29:26.681 "uuid": "f5d9faad-e2e1-5adc-a46c-166f56cfafc8", 00:29:26.681 "is_configured": true, 00:29:26.681 "data_offset": 0, 00:29:26.681 "data_size": 65536 00:29:26.681 } 00:29:26.681 ] 00:29:26.681 }' 00:29:26.681 07:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:26.681 07:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:26.681 07:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:26.681 07:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:26.681 07:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:26.940 [2024-07-12 07:40:00.609787] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:26.940 07:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:26.940 [2024-07-12 07:40:00.699159] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ae0 00:29:26.940 [2024-07-12 07:40:00.701664] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:27.199 [2024-07-12 07:40:00.827482] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:27.199 [2024-07-12 07:40:00.828077] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:27.199 [2024-07-12 07:40:01.039610] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:27.199 [2024-07-12 07:40:01.040421] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:27.766 [2024-07-12 07:40:01.392101] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:27.766 [2024-07-12 07:40:01.618938] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:27.766 [2024-07-12 07:40:01.619305] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:28.024 07:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:28.024 07:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:28.024 07:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:28.024 07:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:28.025 07:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:28.025 07:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:28.025 07:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.284 07:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:28.284 "name": "raid_bdev1", 00:29:28.284 "uuid": "19550c6a-93e8-4e7c-8334-456706c3400c", 00:29:28.284 "strip_size_kb": 0, 00:29:28.284 
"state": "online", 00:29:28.284 "raid_level": "raid1", 00:29:28.284 "superblock": false, 00:29:28.284 "num_base_bdevs": 4, 00:29:28.284 "num_base_bdevs_discovered": 4, 00:29:28.284 "num_base_bdevs_operational": 4, 00:29:28.284 "process": { 00:29:28.284 "type": "rebuild", 00:29:28.284 "target": "spare", 00:29:28.284 "progress": { 00:29:28.284 "blocks": 12288, 00:29:28.284 "percent": 18 00:29:28.284 } 00:29:28.284 }, 00:29:28.284 "base_bdevs_list": [ 00:29:28.284 { 00:29:28.284 "name": "spare", 00:29:28.284 "uuid": "dfea28ef-4960-5dcd-9250-a8d889571c8e", 00:29:28.284 "is_configured": true, 00:29:28.284 "data_offset": 0, 00:29:28.284 "data_size": 65536 00:29:28.284 }, 00:29:28.284 { 00:29:28.284 "name": "BaseBdev2", 00:29:28.284 "uuid": "7eba23f3-2db9-517a-bd26-d98d3f401d97", 00:29:28.284 "is_configured": true, 00:29:28.284 "data_offset": 0, 00:29:28.284 "data_size": 65536 00:29:28.284 }, 00:29:28.284 { 00:29:28.284 "name": "BaseBdev3", 00:29:28.284 "uuid": "48ed658b-2cc9-575d-b2c2-e4053b645df5", 00:29:28.284 "is_configured": true, 00:29:28.284 "data_offset": 0, 00:29:28.284 "data_size": 65536 00:29:28.284 }, 00:29:28.284 { 00:29:28.284 "name": "BaseBdev4", 00:29:28.284 "uuid": "f5d9faad-e2e1-5adc-a46c-166f56cfafc8", 00:29:28.284 "is_configured": true, 00:29:28.284 "data_offset": 0, 00:29:28.284 "data_size": 65536 00:29:28.284 } 00:29:28.284 ] 00:29:28.284 }' 00:29:28.284 07:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:28.284 [2024-07-12 07:40:01.971158] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:28.284 07:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:28.284 07:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:28.284 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:28.284 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:29:28.284 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:29:28.284 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:28.284 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:29:28.284 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:29:28.543 [2024-07-12 07:40:02.191227] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:28.543 [2024-07-12 07:40:02.286709] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:28.543 [2024-07-12 07:40:02.411471] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:29:28.543 [2024-07-12 07:40:02.411516] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002ae0 00:29:28.801 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:29:28.801 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:29:28.801 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:28.801 07:40:02 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:28.801 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:28.801 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:28.801 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:28.801 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.801 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:28.801 [2024-07-12 07:40:02.645832] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:28.801 [2024-07-12 07:40:02.646468] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:29.060 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:29.060 "name": "raid_bdev1", 00:29:29.060 "uuid": "19550c6a-93e8-4e7c-8334-456706c3400c", 00:29:29.060 "strip_size_kb": 0, 00:29:29.060 "state": "online", 00:29:29.060 "raid_level": "raid1", 00:29:29.060 "superblock": false, 00:29:29.060 "num_base_bdevs": 4, 00:29:29.060 "num_base_bdevs_discovered": 3, 00:29:29.060 "num_base_bdevs_operational": 3, 00:29:29.060 "process": { 00:29:29.060 "type": "rebuild", 00:29:29.060 "target": "spare", 00:29:29.060 "progress": { 00:29:29.060 "blocks": 22528, 00:29:29.060 "percent": 34 00:29:29.060 } 00:29:29.060 }, 00:29:29.060 "base_bdevs_list": [ 00:29:29.060 { 00:29:29.060 "name": "spare", 00:29:29.060 "uuid": "dfea28ef-4960-5dcd-9250-a8d889571c8e", 00:29:29.060 "is_configured": true, 00:29:29.060 "data_offset": 0, 00:29:29.060 "data_size": 65536 00:29:29.060 }, 00:29:29.060 { 00:29:29.060 "name": null, 00:29:29.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:29.060 "is_configured": false, 00:29:29.060 "data_offset": 0, 00:29:29.060 "data_size": 65536 00:29:29.060 }, 00:29:29.060 { 00:29:29.060 "name": "BaseBdev3", 00:29:29.060 "uuid": "48ed658b-2cc9-575d-b2c2-e4053b645df5", 00:29:29.060 "is_configured": true, 00:29:29.060 "data_offset": 0, 00:29:29.060 "data_size": 65536 00:29:29.060 }, 00:29:29.060 { 00:29:29.060 "name": "BaseBdev4", 00:29:29.060 "uuid": "f5d9faad-e2e1-5adc-a46c-166f56cfafc8", 00:29:29.060 "is_configured": true, 00:29:29.060 "data_offset": 0, 00:29:29.060 "data_size": 65536 00:29:29.060 } 00:29:29.060 ] 00:29:29.060 }' 00:29:29.060 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:29.060 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:29.060 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:29.060 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:29.060 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=913 00:29:29.060 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:29.060 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:29.060 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:29:29.060 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:29.060 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:29.060 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:29.060 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:29.060 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:29.317 [2024-07-12 07:40:02.981228] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:29:29.317 [2024-07-12 07:40:02.981794] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:29:29.317 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:29.317 "name": "raid_bdev1", 00:29:29.317 "uuid": "19550c6a-93e8-4e7c-8334-456706c3400c", 00:29:29.317 "strip_size_kb": 0, 00:29:29.317 "state": "online", 00:29:29.317 "raid_level": "raid1", 00:29:29.317 "superblock": false, 00:29:29.317 "num_base_bdevs": 4, 00:29:29.317 "num_base_bdevs_discovered": 3, 00:29:29.317 "num_base_bdevs_operational": 3, 00:29:29.317 "process": { 00:29:29.317 "type": "rebuild", 00:29:29.317 "target": "spare", 00:29:29.317 "progress": { 00:29:29.317 "blocks": 24576, 00:29:29.317 "percent": 37 00:29:29.317 } 00:29:29.317 }, 00:29:29.317 "base_bdevs_list": [ 00:29:29.317 { 00:29:29.317 "name": "spare", 00:29:29.317 "uuid": "dfea28ef-4960-5dcd-9250-a8d889571c8e", 00:29:29.317 "is_configured": true, 00:29:29.317 "data_offset": 0, 00:29:29.317 "data_size": 65536 00:29:29.317 }, 00:29:29.317 { 00:29:29.317 "name": null, 00:29:29.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:29.317 "is_configured": false, 00:29:29.317 "data_offset": 0, 00:29:29.317 "data_size": 65536 00:29:29.317 }, 00:29:29.317 { 00:29:29.317 "name": "BaseBdev3", 00:29:29.317 "uuid": "48ed658b-2cc9-575d-b2c2-e4053b645df5", 00:29:29.317 "is_configured": true, 00:29:29.317 "data_offset": 0, 00:29:29.317 "data_size": 65536 00:29:29.317 }, 00:29:29.317 { 00:29:29.317 "name": "BaseBdev4", 00:29:29.317 "uuid": "f5d9faad-e2e1-5adc-a46c-166f56cfafc8", 00:29:29.317 "is_configured": true, 00:29:29.317 "data_offset": 0, 00:29:29.317 "data_size": 65536 00:29:29.317 } 00:29:29.317 ] 00:29:29.317 }' 00:29:29.317 07:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:29.317 07:40:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:29.317 07:40:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:29.317 07:40:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:29.317 07:40:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:29.317 [2024-07-12 07:40:03.193355] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:29:29.317 [2024-07-12 07:40:03.193623] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:29:29.575 [2024-07-12 07:40:03.453553] bdev_raid.c: 839:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:29:30.142 [2024-07-12 07:40:03.901388] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:29:30.401 07:40:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:30.401 07:40:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:30.401 07:40:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:30.401 07:40:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:30.401 07:40:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:30.401 07:40:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:30.401 07:40:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:30.401 07:40:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:30.401 [2024-07-12 07:40:04.115640] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:29:30.401 [2024-07-12 07:40:04.116170] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:29:30.401 07:40:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:30.401 "name": "raid_bdev1", 00:29:30.401 "uuid": "19550c6a-93e8-4e7c-8334-456706c3400c", 00:29:30.401 "strip_size_kb": 0, 00:29:30.401 "state": "online", 00:29:30.401 "raid_level": "raid1", 00:29:30.401 "superblock": false, 00:29:30.401 "num_base_bdevs": 4, 00:29:30.401 "num_base_bdevs_discovered": 3, 00:29:30.401 "num_base_bdevs_operational": 3, 00:29:30.401 "process": { 00:29:30.401 "type": "rebuild", 00:29:30.401 "target": "spare", 00:29:30.401 "progress": { 00:29:30.401 "blocks": 40960, 00:29:30.401 "percent": 62 00:29:30.401 } 00:29:30.401 }, 00:29:30.401 "base_bdevs_list": [ 00:29:30.401 { 00:29:30.401 "name": "spare", 00:29:30.401 "uuid": "dfea28ef-4960-5dcd-9250-a8d889571c8e", 00:29:30.401 "is_configured": true, 00:29:30.401 "data_offset": 0, 00:29:30.401 "data_size": 65536 00:29:30.401 }, 00:29:30.401 { 00:29:30.401 "name": null, 00:29:30.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:30.401 "is_configured": false, 00:29:30.401 "data_offset": 0, 00:29:30.401 "data_size": 65536 00:29:30.401 }, 00:29:30.401 { 00:29:30.401 "name": "BaseBdev3", 00:29:30.401 "uuid": "48ed658b-2cc9-575d-b2c2-e4053b645df5", 00:29:30.401 "is_configured": true, 00:29:30.401 "data_offset": 0, 00:29:30.401 "data_size": 65536 00:29:30.401 }, 00:29:30.401 { 00:29:30.401 "name": "BaseBdev4", 00:29:30.401 "uuid": "f5d9faad-e2e1-5adc-a46c-166f56cfafc8", 00:29:30.401 "is_configured": true, 00:29:30.401 "data_offset": 0, 00:29:30.401 "data_size": 65536 00:29:30.401 } 00:29:30.401 ] 00:29:30.401 }' 00:29:30.401 07:40:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:30.660 07:40:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:30.660 07:40:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:30.660 07:40:04 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:30.660 07:40:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:30.660 [2024-07-12 07:40:04.434555] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:29:30.919 [2024-07-12 07:40:04.643855] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:29:31.500 [2024-07-12 07:40:05.085875] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:29:31.500 07:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:31.500 07:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:31.500 07:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:31.500 07:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:31.500 07:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:31.500 07:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:31.500 07:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:31.500 07:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:31.782 07:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:31.782 "name": "raid_bdev1", 00:29:31.782 "uuid": "19550c6a-93e8-4e7c-8334-456706c3400c", 00:29:31.782 "strip_size_kb": 0, 00:29:31.782 "state": "online", 00:29:31.782 "raid_level": "raid1", 00:29:31.782 "superblock": false, 00:29:31.782 "num_base_bdevs": 4, 00:29:31.782 "num_base_bdevs_discovered": 3, 00:29:31.782 "num_base_bdevs_operational": 3, 00:29:31.782 "process": { 00:29:31.782 "type": "rebuild", 00:29:31.782 "target": "spare", 00:29:31.782 "progress": { 00:29:31.782 "blocks": 59392, 00:29:31.782 "percent": 90 00:29:31.782 } 00:29:31.782 }, 00:29:31.782 "base_bdevs_list": [ 00:29:31.782 { 00:29:31.782 "name": "spare", 00:29:31.782 "uuid": "dfea28ef-4960-5dcd-9250-a8d889571c8e", 00:29:31.782 "is_configured": true, 00:29:31.782 "data_offset": 0, 00:29:31.782 "data_size": 65536 00:29:31.782 }, 00:29:31.782 { 00:29:31.782 "name": null, 00:29:31.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:31.782 "is_configured": false, 00:29:31.782 "data_offset": 0, 00:29:31.782 "data_size": 65536 00:29:31.782 }, 00:29:31.782 { 00:29:31.782 "name": "BaseBdev3", 00:29:31.782 "uuid": "48ed658b-2cc9-575d-b2c2-e4053b645df5", 00:29:31.782 "is_configured": true, 00:29:31.782 "data_offset": 0, 00:29:31.782 "data_size": 65536 00:29:31.782 }, 00:29:31.782 { 00:29:31.782 "name": "BaseBdev4", 00:29:31.782 "uuid": "f5d9faad-e2e1-5adc-a46c-166f56cfafc8", 00:29:31.782 "is_configured": true, 00:29:31.782 "data_offset": 0, 00:29:31.782 "data_size": 65536 00:29:31.782 } 00:29:31.782 ] 00:29:31.782 }' 00:29:31.782 07:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:32.051 07:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:32.051 07:40:05 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:32.051 07:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:32.051 07:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:32.051 [2024-07-12 07:40:05.847263] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:32.326 [2024-07-12 07:40:05.953039] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:32.326 [2024-07-12 07:40:05.957771] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:32.892 07:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:32.892 07:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:32.892 07:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:32.892 07:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:32.892 07:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:32.892 07:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:32.892 07:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:32.892 07:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:33.151 07:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:33.151 "name": "raid_bdev1", 00:29:33.151 "uuid": "19550c6a-93e8-4e7c-8334-456706c3400c", 00:29:33.151 "strip_size_kb": 0, 00:29:33.151 "state": "online", 00:29:33.152 "raid_level": "raid1", 00:29:33.152 "superblock": false, 00:29:33.152 "num_base_bdevs": 4, 00:29:33.152 "num_base_bdevs_discovered": 3, 00:29:33.152 "num_base_bdevs_operational": 3, 00:29:33.152 "base_bdevs_list": [ 00:29:33.152 { 00:29:33.152 "name": "spare", 00:29:33.152 "uuid": "dfea28ef-4960-5dcd-9250-a8d889571c8e", 00:29:33.152 "is_configured": true, 00:29:33.152 "data_offset": 0, 00:29:33.152 "data_size": 65536 00:29:33.152 }, 00:29:33.152 { 00:29:33.152 "name": null, 00:29:33.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:33.152 "is_configured": false, 00:29:33.152 "data_offset": 0, 00:29:33.152 "data_size": 65536 00:29:33.152 }, 00:29:33.152 { 00:29:33.152 "name": "BaseBdev3", 00:29:33.152 "uuid": "48ed658b-2cc9-575d-b2c2-e4053b645df5", 00:29:33.152 "is_configured": true, 00:29:33.152 "data_offset": 0, 00:29:33.152 "data_size": 65536 00:29:33.152 }, 00:29:33.152 { 00:29:33.152 "name": "BaseBdev4", 00:29:33.152 "uuid": "f5d9faad-e2e1-5adc-a46c-166f56cfafc8", 00:29:33.152 "is_configured": true, 00:29:33.152 "data_offset": 0, 00:29:33.152 "data_size": 65536 00:29:33.152 } 00:29:33.152 ] 00:29:33.152 }' 00:29:33.152 07:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:33.152 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:33.152 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:33.410 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:33.411 07:40:07 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@708 -- # break 00:29:33.411 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:33.411 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:33.411 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:33.411 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:33.411 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:33.411 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:33.411 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:33.411 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:33.411 "name": "raid_bdev1", 00:29:33.411 "uuid": "19550c6a-93e8-4e7c-8334-456706c3400c", 00:29:33.411 "strip_size_kb": 0, 00:29:33.411 "state": "online", 00:29:33.411 "raid_level": "raid1", 00:29:33.411 "superblock": false, 00:29:33.411 "num_base_bdevs": 4, 00:29:33.411 "num_base_bdevs_discovered": 3, 00:29:33.411 "num_base_bdevs_operational": 3, 00:29:33.411 "base_bdevs_list": [ 00:29:33.411 { 00:29:33.411 "name": "spare", 00:29:33.411 "uuid": "dfea28ef-4960-5dcd-9250-a8d889571c8e", 00:29:33.411 "is_configured": true, 00:29:33.411 "data_offset": 0, 00:29:33.411 "data_size": 65536 00:29:33.411 }, 00:29:33.411 { 00:29:33.411 "name": null, 00:29:33.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:33.411 "is_configured": false, 00:29:33.411 "data_offset": 0, 00:29:33.411 "data_size": 65536 00:29:33.411 }, 00:29:33.411 { 00:29:33.411 "name": "BaseBdev3", 00:29:33.411 "uuid": "48ed658b-2cc9-575d-b2c2-e4053b645df5", 00:29:33.411 "is_configured": true, 00:29:33.411 "data_offset": 0, 00:29:33.411 "data_size": 65536 00:29:33.411 }, 00:29:33.411 { 00:29:33.411 "name": "BaseBdev4", 00:29:33.411 "uuid": "f5d9faad-e2e1-5adc-a46c-166f56cfafc8", 00:29:33.411 "is_configured": true, 00:29:33.411 "data_offset": 0, 00:29:33.411 "data_size": 65536 00:29:33.411 } 00:29:33.411 ] 00:29:33.411 }' 00:29:33.411 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:33.411 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:33.411 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:33.670 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:33.670 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:33.670 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:33.670 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:33.670 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:33.670 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:33.670 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:33.670 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:33.670 
07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:33.670 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:33.670 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:33.670 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:33.670 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:33.929 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:33.929 "name": "raid_bdev1", 00:29:33.929 "uuid": "19550c6a-93e8-4e7c-8334-456706c3400c", 00:29:33.929 "strip_size_kb": 0, 00:29:33.929 "state": "online", 00:29:33.929 "raid_level": "raid1", 00:29:33.929 "superblock": false, 00:29:33.929 "num_base_bdevs": 4, 00:29:33.929 "num_base_bdevs_discovered": 3, 00:29:33.929 "num_base_bdevs_operational": 3, 00:29:33.929 "base_bdevs_list": [ 00:29:33.929 { 00:29:33.929 "name": "spare", 00:29:33.929 "uuid": "dfea28ef-4960-5dcd-9250-a8d889571c8e", 00:29:33.929 "is_configured": true, 00:29:33.929 "data_offset": 0, 00:29:33.929 "data_size": 65536 00:29:33.929 }, 00:29:33.929 { 00:29:33.929 "name": null, 00:29:33.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:33.929 "is_configured": false, 00:29:33.929 "data_offset": 0, 00:29:33.929 "data_size": 65536 00:29:33.929 }, 00:29:33.929 { 00:29:33.929 "name": "BaseBdev3", 00:29:33.929 "uuid": "48ed658b-2cc9-575d-b2c2-e4053b645df5", 00:29:33.929 "is_configured": true, 00:29:33.929 "data_offset": 0, 00:29:33.929 "data_size": 65536 00:29:33.929 }, 00:29:33.929 { 00:29:33.929 "name": "BaseBdev4", 00:29:33.929 "uuid": "f5d9faad-e2e1-5adc-a46c-166f56cfafc8", 00:29:33.929 "is_configured": true, 00:29:33.929 "data_offset": 0, 00:29:33.929 "data_size": 65536 00:29:33.929 } 00:29:33.929 ] 00:29:33.929 }' 00:29:33.929 07:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:33.929 07:40:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:34.496 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:34.755 [2024-07-12 07:40:08.409040] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:34.755 [2024-07-12 07:40:08.409084] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:34.755 00:29:34.755 Latency(us) 00:29:34.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.755 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:29:34.755 raid_bdev1 : 11.76 99.93 299.78 0.00 0.00 13769.93 308.18 111848.11 00:29:34.755 =================================================================================================================== 00:29:34.755 Total : 99.93 299.78 0.00 0.00 13769.93 308.18 111848.11 00:29:34.755 [2024-07-12 07:40:08.465590] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:34.755 [2024-07-12 07:40:08.465642] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:34.755 0 00:29:34.755 [2024-07-12 07:40:08.465779] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:29:34.755 [2024-07-12 07:40:08.465791] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:29:34.755 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:29:34.755 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:35.013 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:29:35.013 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:29:35.013 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:35.013 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:29:35.013 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:35.013 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:29:35.013 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:35.013 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:35.013 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:35.013 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:35.013 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:35.013 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:35.013 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:29:35.272 /dev/nbd0 00:29:35.272 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:35.272 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:35.272 07:40:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:29:35.272 07:40:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:29:35.272 07:40:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:29:35.272 07:40:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:29:35.272 07:40:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:29:35.272 07:40:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:29:35.272 07:40:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:29:35.272 07:40:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:29:35.272 07:40:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:35.272 1+0 records in 00:29:35.272 1+0 records out 00:29:35.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353459 s, 11.6 MB/s 00:29:35.272 07:40:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:35.272 07:40:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:29:35.272 07:40:08 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:35.272 07:40:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:29:35.272 07:40:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:29:35.272 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:35.272 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:35.272 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:35.272 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:29:35.273 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # continue 00:29:35.273 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:35.273 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:29:35.273 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:29:35.273 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:35.273 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:29:35.273 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:35.273 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:35.273 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:35.273 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:35.273 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:35.273 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:35.273 07:40:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:29:35.531 /dev/nbd1 00:29:35.531 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:35.531 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:35.531 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:29:35.531 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:29:35.531 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:29:35.531 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:29:35.531 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:29:35.531 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:29:35.531 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:29:35.531 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:29:35.531 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:35.531 1+0 records in 00:29:35.531 1+0 records out 00:29:35.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546759 s, 7.5 MB/s 
00:29:35.531 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:35.531 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:29:35.531 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:35.531 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:29:35.531 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:29:35.532 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:35.532 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:35.532 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:35.532 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:29:35.532 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:35.532 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:35.532 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:35.532 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:35.532 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:35.532 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:35.790 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:35.790 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:35.790 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:35.790 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:35.790 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:35.790 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:35.790 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:35.790 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:35.790 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:35.790 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:29:35.790 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:29:35.790 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:35.790 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:29:35.790 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:35.790 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:35.790 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:35.790 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:35.790 07:40:09 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:35.790 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:35.790 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:29:36.049 /dev/nbd1 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:36.049 1+0 records in 00:29:36.049 1+0 records out 00:29:36.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308686 s, 13.3 MB/s 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:36.049 07:40:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:36.308 07:40:10 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:36.308 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:36.308 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:36.308 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:36.308 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:36.308 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:36.308 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:36.308 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:36.308 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:36.308 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:36.308 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:36.308 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:36.308 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:36.308 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:36.308 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:36.567 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:36.567 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:36.567 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:36.567 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:36.567 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:36.567 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:36.567 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:36.567 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:36.567 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:29:36.567 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 158050 00:29:36.567 07:40:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@946 -- # '[' -z 158050 ']' 00:29:36.567 07:40:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # kill -0 158050 00:29:36.567 07:40:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # uname 00:29:36.567 07:40:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:36.567 07:40:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 158050 00:29:36.568 07:40:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:36.568 07:40:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:36.568 killing process with pid 158050 00:29:36.568 07:40:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # echo 'killing process with pid 158050' 00:29:36.568 
07:40:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@965 -- # kill 158050 00:29:36.568 Received shutdown signal, test time was about 13.629810 seconds 00:29:36.568 00:29:36.568 Latency(us) 00:29:36.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.568 =================================================================================================================== 00:29:36.568 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:36.568 07:40:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # wait 158050 00:29:36.568 [2024-07-12 07:40:10.332199] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:36.568 [2024-07-12 07:40:10.416336] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:29:37.136 00:29:37.136 real 0m18.856s 00:29:37.136 user 0m28.485s 00:29:37.136 sys 0m3.023s 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:37.136 ************************************ 00:29:37.136 END TEST raid_rebuild_test_io 00:29:37.136 ************************************ 00:29:37.136 07:40:10 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:29:37.136 07:40:10 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:29:37.136 07:40:10 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:37.136 07:40:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:37.136 ************************************ 00:29:37.136 START TEST raid_rebuild_test_sb_io 00:29:37.136 ************************************ 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 4 true true true 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ 
)) 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=158573 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 158573 /var/tmp/spdk-raid.sock 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@827 -- # '[' -z 158573 ']' 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:37.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:37.136 07:40:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:37.136 [2024-07-12 07:40:11.000076] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:37.136 [2024-07-12 07:40:11.000326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158573 ] 00:29:37.136 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:37.136 Zero copy mechanism will not be used. 
00:29:37.395 [2024-07-12 07:40:11.159102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.395 [2024-07-12 07:40:11.245608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.654 [2024-07-12 07:40:11.328210] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:38.222 07:40:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:38.222 07:40:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # return 0 00:29:38.222 07:40:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:38.222 07:40:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:38.481 BaseBdev1_malloc 00:29:38.481 07:40:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:38.740 [2024-07-12 07:40:12.377003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:38.740 [2024-07-12 07:40:12.377150] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:38.740 [2024-07-12 07:40:12.377204] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:29:38.740 [2024-07-12 07:40:12.377275] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:38.740 [2024-07-12 07:40:12.380421] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:38.740 [2024-07-12 07:40:12.380482] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:38.740 BaseBdev1 00:29:38.740 07:40:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:38.740 07:40:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:38.999 BaseBdev2_malloc 00:29:38.999 07:40:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:38.999 [2024-07-12 07:40:12.821315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:38.999 [2024-07-12 07:40:12.821427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:38.999 [2024-07-12 07:40:12.821474] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:29:38.999 [2024-07-12 07:40:12.821520] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:38.999 [2024-07-12 07:40:12.824248] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:38.999 [2024-07-12 07:40:12.824297] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:38.999 BaseBdev2 00:29:38.999 07:40:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:38.999 07:40:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:39.276 BaseBdev3_malloc 00:29:39.276 07:40:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:39.536 [2024-07-12 07:40:13.218356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:39.536 [2024-07-12 07:40:13.218467] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:39.536 [2024-07-12 07:40:13.218529] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:39.536 [2024-07-12 07:40:13.218575] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:39.536 [2024-07-12 07:40:13.221269] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:39.536 [2024-07-12 07:40:13.221362] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:39.536 BaseBdev3 00:29:39.536 07:40:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:39.536 07:40:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:39.536 BaseBdev4_malloc 00:29:39.794 07:40:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:39.794 [2024-07-12 07:40:13.602449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:39.794 [2024-07-12 07:40:13.602589] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:39.794 [2024-07-12 07:40:13.602642] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:39.794 [2024-07-12 07:40:13.602692] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:39.794 [2024-07-12 07:40:13.605443] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:39.794 [2024-07-12 07:40:13.605525] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:39.794 BaseBdev4 00:29:39.794 07:40:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:40.053 spare_malloc 00:29:40.053 07:40:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:40.311 spare_delay 00:29:40.311 07:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:40.569 [2024-07-12 07:40:14.270773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:40.569 [2024-07-12 07:40:14.270893] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:40.569 [2024-07-12 07:40:14.270934] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:40.569 [2024-07-12 07:40:14.270993] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:40.569 [2024-07-12 07:40:14.273851] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:29:40.569 [2024-07-12 07:40:14.273927] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:40.569 spare 00:29:40.569 07:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:40.828 [2024-07-12 07:40:14.506921] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:40.828 [2024-07-12 07:40:14.509426] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:40.828 [2024-07-12 07:40:14.509500] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:40.828 [2024-07-12 07:40:14.509545] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:40.828 [2024-07-12 07:40:14.509801] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:29:40.828 [2024-07-12 07:40:14.509819] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:40.828 [2024-07-12 07:40:14.509998] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:29:40.828 [2024-07-12 07:40:14.510453] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:29:40.828 [2024-07-12 07:40:14.510472] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:29:40.828 [2024-07-12 07:40:14.510628] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:40.828 07:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:40.828 07:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:40.828 07:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:40.828 07:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:40.828 07:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:40.828 07:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:40.828 07:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:40.828 07:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:40.828 07:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:40.828 07:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:40.828 07:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.828 07:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:41.087 07:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:41.087 "name": "raid_bdev1", 00:29:41.087 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:29:41.087 "strip_size_kb": 0, 00:29:41.087 "state": "online", 00:29:41.087 "raid_level": "raid1", 00:29:41.087 "superblock": true, 00:29:41.087 "num_base_bdevs": 4, 00:29:41.087 "num_base_bdevs_discovered": 4, 00:29:41.087 
"num_base_bdevs_operational": 4, 00:29:41.087 "base_bdevs_list": [ 00:29:41.087 { 00:29:41.087 "name": "BaseBdev1", 00:29:41.087 "uuid": "c0365974-7a48-53c0-a10c-f1ea3052e027", 00:29:41.087 "is_configured": true, 00:29:41.087 "data_offset": 2048, 00:29:41.087 "data_size": 63488 00:29:41.087 }, 00:29:41.087 { 00:29:41.087 "name": "BaseBdev2", 00:29:41.087 "uuid": "b01d95d0-2c1f-5f84-8d78-31df1554c32c", 00:29:41.087 "is_configured": true, 00:29:41.087 "data_offset": 2048, 00:29:41.087 "data_size": 63488 00:29:41.087 }, 00:29:41.087 { 00:29:41.087 "name": "BaseBdev3", 00:29:41.087 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:29:41.087 "is_configured": true, 00:29:41.087 "data_offset": 2048, 00:29:41.087 "data_size": 63488 00:29:41.087 }, 00:29:41.087 { 00:29:41.087 "name": "BaseBdev4", 00:29:41.087 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:29:41.087 "is_configured": true, 00:29:41.087 "data_offset": 2048, 00:29:41.087 "data_size": 63488 00:29:41.087 } 00:29:41.087 ] 00:29:41.087 }' 00:29:41.087 07:40:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:41.087 07:40:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:41.346 07:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:41.346 07:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:41.605 [2024-07-12 07:40:15.343277] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:41.605 07:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:29:41.605 07:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:41.605 07:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:41.865 07:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:29:41.865 07:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:29:41.865 07:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:41.865 07:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:29:41.865 [2024-07-12 07:40:15.722831] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:29:41.865 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:41.865 Zero copy mechanism will not be used. 00:29:41.865 Running I/O for 60 seconds... 
00:29:42.124 [2024-07-12 07:40:15.907677] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:42.124 [2024-07-12 07:40:15.918526] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:29:42.124 07:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:42.124 07:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:42.124 07:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:42.124 07:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:42.124 07:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:42.124 07:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:42.124 07:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:42.124 07:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:42.124 07:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:42.124 07:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:42.124 07:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:42.124 07:40:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:42.383 07:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:42.383 "name": "raid_bdev1", 00:29:42.383 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:29:42.383 "strip_size_kb": 0, 00:29:42.383 "state": "online", 00:29:42.383 "raid_level": "raid1", 00:29:42.383 "superblock": true, 00:29:42.383 "num_base_bdevs": 4, 00:29:42.383 "num_base_bdevs_discovered": 3, 00:29:42.383 "num_base_bdevs_operational": 3, 00:29:42.383 "base_bdevs_list": [ 00:29:42.383 { 00:29:42.383 "name": null, 00:29:42.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:42.383 "is_configured": false, 00:29:42.383 "data_offset": 2048, 00:29:42.383 "data_size": 63488 00:29:42.383 }, 00:29:42.383 { 00:29:42.383 "name": "BaseBdev2", 00:29:42.383 "uuid": "b01d95d0-2c1f-5f84-8d78-31df1554c32c", 00:29:42.383 "is_configured": true, 00:29:42.383 "data_offset": 2048, 00:29:42.383 "data_size": 63488 00:29:42.383 }, 00:29:42.383 { 00:29:42.383 "name": "BaseBdev3", 00:29:42.383 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:29:42.383 "is_configured": true, 00:29:42.383 "data_offset": 2048, 00:29:42.383 "data_size": 63488 00:29:42.383 }, 00:29:42.383 { 00:29:42.383 "name": "BaseBdev4", 00:29:42.383 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:29:42.384 "is_configured": true, 00:29:42.384 "data_offset": 2048, 00:29:42.384 "data_size": 63488 00:29:42.384 } 00:29:42.384 ] 00:29:42.384 }' 00:29:42.384 07:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:42.384 07:40:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:42.951 07:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:43.210 [2024-07-12 
07:40:16.914333] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:43.210 07:40:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:43.210 [2024-07-12 07:40:16.970803] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:29:43.210 [2024-07-12 07:40:16.973403] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:43.211 [2024-07-12 07:40:17.092448] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:43.469 [2024-07-12 07:40:17.094168] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:43.469 [2024-07-12 07:40:17.306282] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:43.469 [2024-07-12 07:40:17.307176] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:44.038 [2024-07-12 07:40:17.642915] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:44.038 [2024-07-12 07:40:17.766638] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:44.297 07:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:44.297 07:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:44.297 07:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:44.297 07:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:44.297 07:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:44.297 07:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:44.297 07:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:44.297 [2024-07-12 07:40:18.106718] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:44.297 [2024-07-12 07:40:18.108406] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:44.556 07:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:44.556 "name": "raid_bdev1", 00:29:44.556 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:29:44.556 "strip_size_kb": 0, 00:29:44.556 "state": "online", 00:29:44.556 "raid_level": "raid1", 00:29:44.556 "superblock": true, 00:29:44.556 "num_base_bdevs": 4, 00:29:44.556 "num_base_bdevs_discovered": 4, 00:29:44.556 "num_base_bdevs_operational": 4, 00:29:44.556 "process": { 00:29:44.556 "type": "rebuild", 00:29:44.556 "target": "spare", 00:29:44.556 "progress": { 00:29:44.556 "blocks": 14336, 00:29:44.556 "percent": 22 00:29:44.556 } 00:29:44.556 }, 00:29:44.556 "base_bdevs_list": [ 00:29:44.556 { 00:29:44.556 "name": "spare", 00:29:44.556 "uuid": "5a0505b2-1c1c-580c-9931-9cc505b758f0", 00:29:44.556 "is_configured": true, 00:29:44.556 "data_offset": 2048, 00:29:44.556 "data_size": 63488 00:29:44.556 }, 
00:29:44.556 { 00:29:44.556 "name": "BaseBdev2", 00:29:44.556 "uuid": "b01d95d0-2c1f-5f84-8d78-31df1554c32c", 00:29:44.556 "is_configured": true, 00:29:44.556 "data_offset": 2048, 00:29:44.556 "data_size": 63488 00:29:44.556 }, 00:29:44.556 { 00:29:44.557 "name": "BaseBdev3", 00:29:44.557 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:29:44.557 "is_configured": true, 00:29:44.557 "data_offset": 2048, 00:29:44.557 "data_size": 63488 00:29:44.557 }, 00:29:44.557 { 00:29:44.557 "name": "BaseBdev4", 00:29:44.557 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:29:44.557 "is_configured": true, 00:29:44.557 "data_offset": 2048, 00:29:44.557 "data_size": 63488 00:29:44.557 } 00:29:44.557 ] 00:29:44.557 }' 00:29:44.557 07:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:44.557 07:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:44.557 07:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:44.557 07:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:44.557 07:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:44.557 [2024-07-12 07:40:18.321619] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:44.816 [2024-07-12 07:40:18.546261] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:44.816 [2024-07-12 07:40:18.546611] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:44.816 [2024-07-12 07:40:18.570170] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:44.816 [2024-07-12 07:40:18.581045] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:44.816 [2024-07-12 07:40:18.581090] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:44.816 [2024-07-12 07:40:18.581102] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:44.816 [2024-07-12 07:40:18.614873] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:29:44.816 07:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:44.816 07:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:44.816 07:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:44.816 07:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:44.816 07:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:44.816 07:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:44.816 07:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:44.816 07:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:44.816 07:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:44.816 07:40:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:44.816 07:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:44.816 07:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:45.075 07:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:45.075 "name": "raid_bdev1", 00:29:45.075 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:29:45.075 "strip_size_kb": 0, 00:29:45.075 "state": "online", 00:29:45.075 "raid_level": "raid1", 00:29:45.075 "superblock": true, 00:29:45.075 "num_base_bdevs": 4, 00:29:45.075 "num_base_bdevs_discovered": 3, 00:29:45.075 "num_base_bdevs_operational": 3, 00:29:45.075 "base_bdevs_list": [ 00:29:45.075 { 00:29:45.075 "name": null, 00:29:45.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:45.075 "is_configured": false, 00:29:45.075 "data_offset": 2048, 00:29:45.075 "data_size": 63488 00:29:45.075 }, 00:29:45.075 { 00:29:45.075 "name": "BaseBdev2", 00:29:45.075 "uuid": "b01d95d0-2c1f-5f84-8d78-31df1554c32c", 00:29:45.075 "is_configured": true, 00:29:45.075 "data_offset": 2048, 00:29:45.075 "data_size": 63488 00:29:45.075 }, 00:29:45.075 { 00:29:45.075 "name": "BaseBdev3", 00:29:45.075 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:29:45.075 "is_configured": true, 00:29:45.075 "data_offset": 2048, 00:29:45.075 "data_size": 63488 00:29:45.075 }, 00:29:45.075 { 00:29:45.075 "name": "BaseBdev4", 00:29:45.075 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:29:45.075 "is_configured": true, 00:29:45.075 "data_offset": 2048, 00:29:45.075 "data_size": 63488 00:29:45.075 } 00:29:45.075 ] 00:29:45.075 }' 00:29:45.075 07:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:45.075 07:40:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:46.011 07:40:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:46.011 07:40:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:46.011 07:40:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:46.011 07:40:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:46.011 07:40:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:46.011 07:40:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:46.011 07:40:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.011 07:40:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:46.011 "name": "raid_bdev1", 00:29:46.011 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:29:46.011 "strip_size_kb": 0, 00:29:46.011 "state": "online", 00:29:46.011 "raid_level": "raid1", 00:29:46.011 "superblock": true, 00:29:46.011 "num_base_bdevs": 4, 00:29:46.011 "num_base_bdevs_discovered": 3, 00:29:46.011 "num_base_bdevs_operational": 3, 00:29:46.011 "base_bdevs_list": [ 00:29:46.011 { 00:29:46.011 "name": null, 00:29:46.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.011 "is_configured": false, 00:29:46.011 
"data_offset": 2048, 00:29:46.011 "data_size": 63488 00:29:46.011 }, 00:29:46.011 { 00:29:46.011 "name": "BaseBdev2", 00:29:46.011 "uuid": "b01d95d0-2c1f-5f84-8d78-31df1554c32c", 00:29:46.011 "is_configured": true, 00:29:46.011 "data_offset": 2048, 00:29:46.011 "data_size": 63488 00:29:46.011 }, 00:29:46.011 { 00:29:46.011 "name": "BaseBdev3", 00:29:46.011 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:29:46.011 "is_configured": true, 00:29:46.011 "data_offset": 2048, 00:29:46.011 "data_size": 63488 00:29:46.011 }, 00:29:46.011 { 00:29:46.011 "name": "BaseBdev4", 00:29:46.011 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:29:46.011 "is_configured": true, 00:29:46.011 "data_offset": 2048, 00:29:46.011 "data_size": 63488 00:29:46.011 } 00:29:46.011 ] 00:29:46.011 }' 00:29:46.011 07:40:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:46.011 07:40:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:46.011 07:40:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:46.270 07:40:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:46.270 07:40:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:46.529 [2024-07-12 07:40:20.155843] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:46.529 07:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:46.529 [2024-07-12 07:40:20.212499] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ae0 00:29:46.529 [2024-07-12 07:40:20.214952] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:46.529 [2024-07-12 07:40:20.329189] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:46.529 [2024-07-12 07:40:20.330672] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:46.788 [2024-07-12 07:40:20.554970] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:46.788 [2024-07-12 07:40:20.555307] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:47.047 [2024-07-12 07:40:20.885791] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:47.047 [2024-07-12 07:40:20.887366] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:47.307 [2024-07-12 07:40:21.104890] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:47.307 [2024-07-12 07:40:21.105676] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:47.572 07:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:47.572 07:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:47.572 07:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local 
process_type=rebuild 00:29:47.572 07:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:47.572 07:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:47.572 07:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:47.572 07:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.572 [2024-07-12 07:40:21.444940] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:47.572 07:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:47.572 "name": "raid_bdev1", 00:29:47.572 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:29:47.572 "strip_size_kb": 0, 00:29:47.572 "state": "online", 00:29:47.572 "raid_level": "raid1", 00:29:47.572 "superblock": true, 00:29:47.572 "num_base_bdevs": 4, 00:29:47.572 "num_base_bdevs_discovered": 4, 00:29:47.572 "num_base_bdevs_operational": 4, 00:29:47.572 "process": { 00:29:47.572 "type": "rebuild", 00:29:47.572 "target": "spare", 00:29:47.572 "progress": { 00:29:47.572 "blocks": 12288, 00:29:47.572 "percent": 19 00:29:47.572 } 00:29:47.572 }, 00:29:47.572 "base_bdevs_list": [ 00:29:47.572 { 00:29:47.572 "name": "spare", 00:29:47.572 "uuid": "5a0505b2-1c1c-580c-9931-9cc505b758f0", 00:29:47.572 "is_configured": true, 00:29:47.572 "data_offset": 2048, 00:29:47.572 "data_size": 63488 00:29:47.572 }, 00:29:47.572 { 00:29:47.572 "name": "BaseBdev2", 00:29:47.572 "uuid": "b01d95d0-2c1f-5f84-8d78-31df1554c32c", 00:29:47.572 "is_configured": true, 00:29:47.572 "data_offset": 2048, 00:29:47.572 "data_size": 63488 00:29:47.572 }, 00:29:47.572 { 00:29:47.572 "name": "BaseBdev3", 00:29:47.572 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:29:47.572 "is_configured": true, 00:29:47.572 "data_offset": 2048, 00:29:47.572 "data_size": 63488 00:29:47.572 }, 00:29:47.572 { 00:29:47.572 "name": "BaseBdev4", 00:29:47.572 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:29:47.572 "is_configured": true, 00:29:47.572 "data_offset": 2048, 00:29:47.572 "data_size": 63488 00:29:47.572 } 00:29:47.572 ] 00:29:47.572 }' 00:29:47.572 07:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:47.830 07:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:47.830 07:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:47.830 07:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:47.830 07:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:29:47.830 07:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:29:47.830 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:29:47.830 07:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:29:47.830 07:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:47.830 07:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:29:47.830 07:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@694 -- # 
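One line in the block above is a shell bug caught in the act rather than a storage event: /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected. The xtrace immediately before it shows why: the command under test expanded to '[' = false ']', i.e. the variable on the left of = was empty, so [ saw no left operand. The run continues because the failed test simply takes the false branch. A minimal reproduction and the usual fixes (the variable name here is hypothetical):

  flag=""                  # empty, as in the failing check
  [ $flag = false ]        # expands to '[ = false ]': "unary operator expected"
  [ "$flag" = false ]      # quoting keeps the empty operand in place
  [[ $flag = false ]]      # bash [[ ]] does not word-split, so no quoting needed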
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:29:47.830 [2024-07-12 07:40:21.573505] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:48.088 [2024-07-12 07:40:21.768632] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:48.346 [2024-07-12 07:40:22.012380] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:29:48.346 [2024-07-12 07:40:22.012427] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002ae0 00:29:48.346 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:29:48.346 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:29:48.346 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:48.346 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:48.346 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:48.346 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:48.346 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:48.346 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:48.346 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.346 [2024-07-12 07:40:22.124642] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:48.605 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:48.605 "name": "raid_bdev1", 00:29:48.605 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:29:48.605 "strip_size_kb": 0, 00:29:48.605 "state": "online", 00:29:48.605 "raid_level": "raid1", 00:29:48.605 "superblock": true, 00:29:48.605 "num_base_bdevs": 4, 00:29:48.605 "num_base_bdevs_discovered": 3, 00:29:48.605 "num_base_bdevs_operational": 3, 00:29:48.605 "process": { 00:29:48.605 "type": "rebuild", 00:29:48.605 "target": "spare", 00:29:48.605 "progress": { 00:29:48.605 "blocks": 20480, 00:29:48.605 "percent": 32 00:29:48.605 } 00:29:48.605 }, 00:29:48.605 "base_bdevs_list": [ 00:29:48.605 { 00:29:48.605 "name": "spare", 00:29:48.605 "uuid": "5a0505b2-1c1c-580c-9931-9cc505b758f0", 00:29:48.605 "is_configured": true, 00:29:48.605 "data_offset": 2048, 00:29:48.605 "data_size": 63488 00:29:48.605 }, 00:29:48.605 { 00:29:48.605 "name": null, 00:29:48.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.605 "is_configured": false, 00:29:48.605 "data_offset": 2048, 00:29:48.605 "data_size": 63488 00:29:48.605 }, 00:29:48.605 { 00:29:48.605 "name": "BaseBdev3", 00:29:48.605 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:29:48.605 "is_configured": true, 00:29:48.605 "data_offset": 2048, 00:29:48.605 "data_size": 63488 00:29:48.605 }, 00:29:48.605 { 00:29:48.605 "name": "BaseBdev4", 00:29:48.605 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:29:48.605 "is_configured": true, 00:29:48.605 "data_offset": 2048, 00:29:48.605 "data_size": 63488 00:29:48.605 } 00:29:48.605 ] 00:29:48.605 
}' 00:29:48.605 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:48.605 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:48.605 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:48.605 [2024-07-12 07:40:22.336986] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:48.605 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:48.605 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=933 00:29:48.605 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:48.605 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:48.605 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:48.605 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:48.605 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:48.605 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:48.605 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:48.605 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.864 [2024-07-12 07:40:22.561345] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:29:48.864 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:48.864 "name": "raid_bdev1", 00:29:48.864 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:29:48.864 "strip_size_kb": 0, 00:29:48.864 "state": "online", 00:29:48.864 "raid_level": "raid1", 00:29:48.864 "superblock": true, 00:29:48.864 "num_base_bdevs": 4, 00:29:48.864 "num_base_bdevs_discovered": 3, 00:29:48.864 "num_base_bdevs_operational": 3, 00:29:48.864 "process": { 00:29:48.864 "type": "rebuild", 00:29:48.864 "target": "spare", 00:29:48.864 "progress": { 00:29:48.864 "blocks": 24576, 00:29:48.864 "percent": 38 00:29:48.864 } 00:29:48.864 }, 00:29:48.864 "base_bdevs_list": [ 00:29:48.864 { 00:29:48.864 "name": "spare", 00:29:48.864 "uuid": "5a0505b2-1c1c-580c-9931-9cc505b758f0", 00:29:48.864 "is_configured": true, 00:29:48.864 "data_offset": 2048, 00:29:48.864 "data_size": 63488 00:29:48.864 }, 00:29:48.864 { 00:29:48.864 "name": null, 00:29:48.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.864 "is_configured": false, 00:29:48.864 "data_offset": 2048, 00:29:48.864 "data_size": 63488 00:29:48.864 }, 00:29:48.864 { 00:29:48.864 "name": "BaseBdev3", 00:29:48.864 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:29:48.864 "is_configured": true, 00:29:48.864 "data_offset": 2048, 00:29:48.864 "data_size": 63488 00:29:48.864 }, 00:29:48.864 { 00:29:48.864 "name": "BaseBdev4", 00:29:48.864 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:29:48.864 "is_configured": true, 00:29:48.864 "data_offset": 2048, 00:29:48.864 "data_size": 63488 00:29:48.864 } 00:29:48.864 ] 00:29:48.864 }' 00:29:48.864 07:40:22 
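The repetition above comes from a bounded poll: bdev_raid.sh sets a deadline (local timeout=933, the SECONDS value at loop entry plus the allowed wait) and, while (( SECONDS < timeout )) holds, re-reads raid_bdev1 once a second and re-checks that a rebuild targeting the spare is still reported. A sketch of that loop shape, reusing the jq filters from the log (the break conditions stand in for the helper's assertions):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  timeout=933

  while (( SECONDS < timeout )); do
      info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
      [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]] || break
      sleep 1
  done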
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:48.864 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:48.864 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:48.864 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:48.864 07:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:48.864 [2024-07-12 07:40:22.685271] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:29:49.429 [2024-07-12 07:40:23.037815] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:29:49.429 [2024-07-12 07:40:23.254497] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:29:49.429 [2024-07-12 07:40:23.255053] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:29:49.995 [2024-07-12 07:40:23.573095] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:29:49.995 07:40:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:49.995 07:40:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:49.995 07:40:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:49.995 07:40:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:49.995 07:40:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:49.995 07:40:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:49.995 07:40:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:49.995 07:40:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:49.995 [2024-07-12 07:40:23.682906] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:29:49.995 07:40:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:49.995 "name": "raid_bdev1", 00:29:49.995 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:29:49.995 "strip_size_kb": 0, 00:29:49.995 "state": "online", 00:29:49.995 "raid_level": "raid1", 00:29:49.995 "superblock": true, 00:29:49.995 "num_base_bdevs": 4, 00:29:49.995 "num_base_bdevs_discovered": 3, 00:29:49.995 "num_base_bdevs_operational": 3, 00:29:49.995 "process": { 00:29:49.995 "type": "rebuild", 00:29:49.995 "target": "spare", 00:29:49.995 "progress": { 00:29:49.995 "blocks": 40960, 00:29:49.995 "percent": 64 00:29:49.995 } 00:29:49.995 }, 00:29:49.995 "base_bdevs_list": [ 00:29:49.995 { 00:29:49.995 "name": "spare", 00:29:49.995 "uuid": "5a0505b2-1c1c-580c-9931-9cc505b758f0", 00:29:49.995 "is_configured": true, 00:29:49.995 "data_offset": 2048, 00:29:49.995 "data_size": 63488 00:29:49.995 }, 00:29:49.995 { 00:29:49.995 "name": null, 00:29:49.995 "uuid": "00000000-0000-0000-0000-000000000000", 
00:29:49.995 "is_configured": false, 00:29:49.995 "data_offset": 2048, 00:29:49.995 "data_size": 63488 00:29:49.995 }, 00:29:49.995 { 00:29:49.995 "name": "BaseBdev3", 00:29:49.995 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:29:49.995 "is_configured": true, 00:29:49.995 "data_offset": 2048, 00:29:49.995 "data_size": 63488 00:29:49.995 }, 00:29:49.995 { 00:29:49.995 "name": "BaseBdev4", 00:29:49.995 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:29:49.995 "is_configured": true, 00:29:49.995 "data_offset": 2048, 00:29:49.995 "data_size": 63488 00:29:49.995 } 00:29:49.995 ] 00:29:49.995 }' 00:29:49.995 07:40:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:49.995 07:40:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:49.995 07:40:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:50.254 07:40:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:50.254 07:40:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:51.191 07:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:51.191 07:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:51.191 07:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:51.191 07:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:51.191 07:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:51.191 07:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:51.191 07:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:51.191 07:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:51.191 [2024-07-12 07:40:25.032774] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:51.450 [2024-07-12 07:40:25.132757] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:51.450 [2024-07-12 07:40:25.142733] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:51.450 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:51.450 "name": "raid_bdev1", 00:29:51.450 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:29:51.450 "strip_size_kb": 0, 00:29:51.450 "state": "online", 00:29:51.450 "raid_level": "raid1", 00:29:51.450 "superblock": true, 00:29:51.450 "num_base_bdevs": 4, 00:29:51.450 "num_base_bdevs_discovered": 3, 00:29:51.450 "num_base_bdevs_operational": 3, 00:29:51.450 "base_bdevs_list": [ 00:29:51.450 { 00:29:51.450 "name": "spare", 00:29:51.450 "uuid": "5a0505b2-1c1c-580c-9931-9cc505b758f0", 00:29:51.450 "is_configured": true, 00:29:51.450 "data_offset": 2048, 00:29:51.450 "data_size": 63488 00:29:51.450 }, 00:29:51.450 { 00:29:51.450 "name": null, 00:29:51.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:51.450 "is_configured": false, 00:29:51.450 "data_offset": 2048, 00:29:51.450 "data_size": 63488 00:29:51.450 }, 00:29:51.450 { 00:29:51.450 "name": "BaseBdev3", 00:29:51.450 
"uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:29:51.450 "is_configured": true, 00:29:51.450 "data_offset": 2048, 00:29:51.450 "data_size": 63488 00:29:51.450 }, 00:29:51.450 { 00:29:51.450 "name": "BaseBdev4", 00:29:51.450 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:29:51.450 "is_configured": true, 00:29:51.450 "data_offset": 2048, 00:29:51.450 "data_size": 63488 00:29:51.450 } 00:29:51.450 ] 00:29:51.450 }' 00:29:51.450 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:51.450 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:51.450 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:51.450 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:51.450 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:29:51.450 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:51.450 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:51.450 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:51.450 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:51.450 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:51.450 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:51.450 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:51.709 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:51.709 "name": "raid_bdev1", 00:29:51.709 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:29:51.709 "strip_size_kb": 0, 00:29:51.709 "state": "online", 00:29:51.709 "raid_level": "raid1", 00:29:51.709 "superblock": true, 00:29:51.709 "num_base_bdevs": 4, 00:29:51.709 "num_base_bdevs_discovered": 3, 00:29:51.709 "num_base_bdevs_operational": 3, 00:29:51.709 "base_bdevs_list": [ 00:29:51.709 { 00:29:51.709 "name": "spare", 00:29:51.709 "uuid": "5a0505b2-1c1c-580c-9931-9cc505b758f0", 00:29:51.709 "is_configured": true, 00:29:51.709 "data_offset": 2048, 00:29:51.709 "data_size": 63488 00:29:51.709 }, 00:29:51.709 { 00:29:51.709 "name": null, 00:29:51.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:51.709 "is_configured": false, 00:29:51.709 "data_offset": 2048, 00:29:51.709 "data_size": 63488 00:29:51.709 }, 00:29:51.709 { 00:29:51.709 "name": "BaseBdev3", 00:29:51.709 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:29:51.709 "is_configured": true, 00:29:51.709 "data_offset": 2048, 00:29:51.709 "data_size": 63488 00:29:51.709 }, 00:29:51.709 { 00:29:51.709 "name": "BaseBdev4", 00:29:51.709 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:29:51.709 "is_configured": true, 00:29:51.709 "data_offset": 2048, 00:29:51.709 "data_size": 63488 00:29:51.709 } 00:29:51.709 ] 00:29:51.709 }' 00:29:51.709 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:51.968 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:51.968 07:40:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:51.968 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:51.968 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:51.968 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:51.968 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:51.968 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:51.968 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:51.968 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:51.968 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:51.968 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:51.968 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:51.968 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:51.968 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:51.968 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:52.227 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:52.227 "name": "raid_bdev1", 00:29:52.227 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:29:52.227 "strip_size_kb": 0, 00:29:52.227 "state": "online", 00:29:52.227 "raid_level": "raid1", 00:29:52.227 "superblock": true, 00:29:52.227 "num_base_bdevs": 4, 00:29:52.227 "num_base_bdevs_discovered": 3, 00:29:52.227 "num_base_bdevs_operational": 3, 00:29:52.227 "base_bdevs_list": [ 00:29:52.227 { 00:29:52.227 "name": "spare", 00:29:52.227 "uuid": "5a0505b2-1c1c-580c-9931-9cc505b758f0", 00:29:52.227 "is_configured": true, 00:29:52.227 "data_offset": 2048, 00:29:52.227 "data_size": 63488 00:29:52.227 }, 00:29:52.227 { 00:29:52.227 "name": null, 00:29:52.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:52.227 "is_configured": false, 00:29:52.227 "data_offset": 2048, 00:29:52.227 "data_size": 63488 00:29:52.227 }, 00:29:52.227 { 00:29:52.227 "name": "BaseBdev3", 00:29:52.227 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:29:52.227 "is_configured": true, 00:29:52.227 "data_offset": 2048, 00:29:52.227 "data_size": 63488 00:29:52.227 }, 00:29:52.227 { 00:29:52.227 "name": "BaseBdev4", 00:29:52.228 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:29:52.228 "is_configured": true, 00:29:52.228 "data_offset": 2048, 00:29:52.228 "data_size": 63488 00:29:52.228 } 00:29:52.228 ] 00:29:52.228 }' 00:29:52.228 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:52.228 07:40:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:52.796 07:40:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:53.056 [2024-07-12 07:40:26.772082] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:29:53.056 [2024-07-12 07:40:26.772132] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:53.056
00:29:53.056 Latency(us)
00:29:53.056 Device Information : runtime(s) IOPS    MiB/s   Fail/s  TO/s    Average   min     max
00:29:53.056 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:29:53.056 raid_bdev1         : 11.14      106.71  320.13  0.00    0.00    12724.45  298.42  118339.29
00:29:53.056 ===================================================================================================================
00:29:53.056 Total              : 106.71     320.13  0.00    0.00    12724.45 298.42   118339.29
00:29:53.056 [2024-07-12 07:40:26.871621] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:53.056 [2024-07-12 07:40:26.871662] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:53.056 [2024-07-12 07:40:26.871793] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:53.056 [2024-07-12 07:40:26.871804] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:29:53.056 0 00:29:53.056 07:40:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:29:53.056 07:40:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.315 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:29:53.315 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:29:53.315 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:53.315 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:29:53.315 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:53.315 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:29:53.315 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:53.315 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:53.315 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:53.315 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:53.315 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:53.315 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:53.315 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:29:53.583 /dev/nbd0 00:29:53.583 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:53.583 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:53.583 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:29:53.583 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:29:53.583 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:29:53.583 07:40:27
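With raid_bdev1 deleted, verification moves to raw block devices: the rebuilt spare and each surviving raid1 leg are exported through NBD and byte-compared. The cmp offset of 1048576 bytes is exactly the superblock region, data_offset (2048 blocks) times the 512-byte block size; it is skipped because per-leg superblocks legitimately differ while the mirrored data must not. In outline, using the RPCs visible in the log:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  $rpc nbd_start_disk spare /dev/nbd0        # the rebuilt leg
  for leg in BaseBdev3 BaseBdev4; do         # surviving raid1 legs
      $rpc nbd_start_disk "$leg" /dev/nbd1
      cmp -i 1048576 /dev/nbd0 /dev/nbd1     # skip 2048 blocks * 512 B of superblock
      $rpc nbd_stop_disk /dev/nbd1
  done
  $rpc nbd_stop_disk /dev/nbd0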
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:29:53.583 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:29:53.583 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:29:53.583 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:29:53.583 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:29:53.583 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:53.583 1+0 records in 00:29:53.583 1+0 records out 00:29:53.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449496 s, 9.1 MB/s 00:29:53.583 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:53.583 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:29:53.583 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:53.583 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:29:53.583 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:29:53.875 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:53.875 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:53.875 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:53.875 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:29:53.875 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # continue 00:29:53.875 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:53.875 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:29:53.875 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:29:53.875 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:53.875 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:29:53.875 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:53.875 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:53.875 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:53.875 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:53.875 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:53.875 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:53.875 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:29:54.134 /dev/nbd1 00:29:54.134 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:54.134 07:40:27 
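Each NBD export is smoke-tested before use: waitfornbd, whose xtrace appears around this point for nbd0 and nbd1, polls /proc/partitions for the new device (up to 20 tries) and then issues a single 4 KiB O_DIRECT read to prove the export answers I/O. Roughly, simplified from the xtraced helper:

  for (( i = 1; i <= 20; i++ )); do
      grep -q -w nbd1 /proc/partitions && break
  done
  dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
  [ "$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)" != 0 ]   # the read produced data
  rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest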
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:54.134 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:29:54.134 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:29:54.134 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:29:54.134 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:29:54.134 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:29:54.134 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:29:54.134 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:29:54.134 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:29:54.134 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:54.134 1+0 records in 00:29:54.134 1+0 records out 00:29:54.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471184 s, 8.7 MB/s 00:29:54.134 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:54.134 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:29:54.135 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:54.135 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:29:54.135 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:29:54.135 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:54.135 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:54.135 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:54.135 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:29:54.135 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:54.135 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:54.135 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:54.135 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:54.135 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:54.135 07:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:54.394 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:54.394 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:54.394 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:54.394 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:54.394 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:54.394 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:54.394 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:54.394 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:54.394 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:54.394 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:29:54.394 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:29:54.394 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:54.394 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:29:54.394 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:54.394 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:54.394 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:54.394 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:54.394 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:54.394 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:54.394 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:29:54.652 /dev/nbd1 00:29:54.652 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:54.652 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:54.652 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:29:54.652 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:29:54.652 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:29:54.652 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:29:54.653 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:29:54.653 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:29:54.653 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:29:54.653 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:29:54.653 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:54.653 1+0 records in 00:29:54.653 1+0 records out 00:29:54.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055267 s, 7.4 MB/s 00:29:54.653 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:54.653 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:29:54.653 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:54.653 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:29:54.653 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:29:54.653 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:54.653 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:54.653 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:54.912 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:29:54.912 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:54.912 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:54.912 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:54.912 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:54.912 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:54.912 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:55.171 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:55.171 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:55.171 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:55.171 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:55.171 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:55.171 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:55.171 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:55.171 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:55.171 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:55.171 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:55.171 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:55.171 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:55.171 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:55.171 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:55.171 07:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:55.430 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:55.430 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:55.430 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:55.430 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:55.430 
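
For reference, the nbd readiness check exercised repeatedly above can be reconstructed from the common/autotest_common.sh xtrace (@864-@885). The loop bound of 20 is visible in the log; the sleep between retries, the failure return, and the /tmp output path are assumptions, since this run succeeds on the first iteration and writes its probe file under the spdk test directory. A minimal sketch:

# Wait until an nbd device both appears in /proc/partitions and services a
# direct-I/O read (a minimal sketch; retry sleep and failure path assumed).
waitfornbd() {
    local nbd_name=$1
    local i
    # First wait for the kernel to publish the device.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # Then prove it is readable: one direct 4 KiB read must yield data.
    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ] && return 0
        sleep 0.1
    done
    return 1
}

Once the device is up, the test compares rebuilt data against the spare with cmp -i 1048576 /dev/nbd0 /dev/nbd1, skipping the first 1 MiB on both devices, which holds the superblock region rather than user data.
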
07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:55.430 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:55.430 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:55.430 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:55.430 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:29:55.430 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:55.689 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:55.689 [2024-07-12 07:40:29.538277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:55.689 [2024-07-12 07:40:29.538570] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:55.689 [2024-07-12 07:40:29.538659] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:29:55.689 [2024-07-12 07:40:29.538758] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:55.689 [2024-07-12 07:40:29.541600] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:55.689 [2024-07-12 07:40:29.541794] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:55.689 [2024-07-12 07:40:29.541997] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:55.689 [2024-07-12 07:40:29.542144] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:55.689 [2024-07-12 07:40:29.542454] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:55.689 [2024-07-12 07:40:29.542761] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:55.689 spare 00:29:55.689 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:55.689 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:55.689 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:55.689 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:55.689 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:55.689 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:55.689 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:55.689 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:55.689 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:55.689 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:55.689 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:55.689 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:29:55.949 [2024-07-12 07:40:29.642975] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:29:55.949 [2024-07-12 07:40:29.643140] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:55.949 [2024-07-12 07:40:29.643397] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033af0 00:29:55.949 [2024-07-12 07:40:29.644087] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:29:55.949 [2024-07-12 07:40:29.644189] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:29:55.949 [2024-07-12 07:40:29.644419] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:55.949 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:55.949 "name": "raid_bdev1", 00:29:55.949 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:29:55.949 "strip_size_kb": 0, 00:29:55.949 "state": "online", 00:29:55.949 "raid_level": "raid1", 00:29:55.949 "superblock": true, 00:29:55.949 "num_base_bdevs": 4, 00:29:55.949 "num_base_bdevs_discovered": 3, 00:29:55.949 "num_base_bdevs_operational": 3, 00:29:55.949 "base_bdevs_list": [ 00:29:55.949 { 00:29:55.949 "name": "spare", 00:29:55.949 "uuid": "5a0505b2-1c1c-580c-9931-9cc505b758f0", 00:29:55.949 "is_configured": true, 00:29:55.949 "data_offset": 2048, 00:29:55.949 "data_size": 63488 00:29:55.949 }, 00:29:55.949 { 00:29:55.949 "name": null, 00:29:55.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:55.949 "is_configured": false, 00:29:55.949 "data_offset": 2048, 00:29:55.949 "data_size": 63488 00:29:55.949 }, 00:29:55.949 { 00:29:55.949 "name": "BaseBdev3", 00:29:55.949 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:29:55.949 "is_configured": true, 00:29:55.949 "data_offset": 2048, 00:29:55.949 "data_size": 63488 00:29:55.949 }, 00:29:55.949 { 00:29:55.949 "name": "BaseBdev4", 00:29:55.949 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:29:55.949 "is_configured": true, 00:29:55.949 "data_offset": 2048, 00:29:55.949 "data_size": 63488 00:29:55.949 } 00:29:55.949 ] 00:29:55.949 }' 00:29:55.949 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:55.949 07:40:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:56.517 07:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:56.517 07:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:56.517 07:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:56.517 07:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:56.517 07:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:56.517 07:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:56.517 07:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:56.776 07:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:56.776 "name": "raid_bdev1", 00:29:56.776 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:29:56.776 "strip_size_kb": 0, 
00:29:56.776 "state": "online", 00:29:56.776 "raid_level": "raid1", 00:29:56.776 "superblock": true, 00:29:56.776 "num_base_bdevs": 4, 00:29:56.776 "num_base_bdevs_discovered": 3, 00:29:56.776 "num_base_bdevs_operational": 3, 00:29:56.776 "base_bdevs_list": [ 00:29:56.776 { 00:29:56.776 "name": "spare", 00:29:56.776 "uuid": "5a0505b2-1c1c-580c-9931-9cc505b758f0", 00:29:56.776 "is_configured": true, 00:29:56.776 "data_offset": 2048, 00:29:56.776 "data_size": 63488 00:29:56.776 }, 00:29:56.776 { 00:29:56.776 "name": null, 00:29:56.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:56.776 "is_configured": false, 00:29:56.776 "data_offset": 2048, 00:29:56.776 "data_size": 63488 00:29:56.776 }, 00:29:56.776 { 00:29:56.776 "name": "BaseBdev3", 00:29:56.776 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:29:56.776 "is_configured": true, 00:29:56.776 "data_offset": 2048, 00:29:56.776 "data_size": 63488 00:29:56.776 }, 00:29:56.776 { 00:29:56.776 "name": "BaseBdev4", 00:29:56.776 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:29:56.776 "is_configured": true, 00:29:56.776 "data_offset": 2048, 00:29:56.776 "data_size": 63488 00:29:56.776 } 00:29:56.776 ] 00:29:56.776 }' 00:29:56.776 07:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:56.776 07:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:56.776 07:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:56.776 07:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:56.776 07:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:56.776 07:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:57.035 07:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:29:57.035 07:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:57.295 [2024-07-12 07:40:31.075065] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:57.295 07:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:57.295 07:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:57.295 07:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:57.295 07:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:57.295 07:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:57.295 07:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:57.295 07:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:57.295 07:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:57.295 07:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:57.295 07:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:57.295 07:40:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:57.295 07:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:57.553 07:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:57.553 "name": "raid_bdev1", 00:29:57.553 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:29:57.553 "strip_size_kb": 0, 00:29:57.553 "state": "online", 00:29:57.553 "raid_level": "raid1", 00:29:57.553 "superblock": true, 00:29:57.553 "num_base_bdevs": 4, 00:29:57.553 "num_base_bdevs_discovered": 2, 00:29:57.553 "num_base_bdevs_operational": 2, 00:29:57.553 "base_bdevs_list": [ 00:29:57.553 { 00:29:57.553 "name": null, 00:29:57.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:57.553 "is_configured": false, 00:29:57.553 "data_offset": 2048, 00:29:57.553 "data_size": 63488 00:29:57.553 }, 00:29:57.553 { 00:29:57.553 "name": null, 00:29:57.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:57.553 "is_configured": false, 00:29:57.553 "data_offset": 2048, 00:29:57.553 "data_size": 63488 00:29:57.553 }, 00:29:57.553 { 00:29:57.553 "name": "BaseBdev3", 00:29:57.553 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:29:57.553 "is_configured": true, 00:29:57.553 "data_offset": 2048, 00:29:57.553 "data_size": 63488 00:29:57.553 }, 00:29:57.553 { 00:29:57.553 "name": "BaseBdev4", 00:29:57.553 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:29:57.553 "is_configured": true, 00:29:57.553 "data_offset": 2048, 00:29:57.553 "data_size": 63488 00:29:57.553 } 00:29:57.553 ] 00:29:57.553 }' 00:29:57.553 07:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:57.553 07:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:58.118 07:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:58.376 [2024-07-12 07:40:32.071405] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:58.376 [2024-07-12 07:40:32.071829] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:58.376 [2024-07-12 07:40:32.071952] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:29:58.376 [2024-07-12 07:40:32.072058] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:58.376 [2024-07-12 07:40:32.079133] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033c90 00:29:58.376 [2024-07-12 07:40:32.081691] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:58.376 07:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:29:59.312 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:59.312 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:59.312 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:59.312 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:59.312 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:59.312 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:59.312 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:59.571 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:59.571 "name": "raid_bdev1", 00:29:59.571 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:29:59.571 "strip_size_kb": 0, 00:29:59.571 "state": "online", 00:29:59.571 "raid_level": "raid1", 00:29:59.571 "superblock": true, 00:29:59.571 "num_base_bdevs": 4, 00:29:59.571 "num_base_bdevs_discovered": 3, 00:29:59.571 "num_base_bdevs_operational": 3, 00:29:59.571 "process": { 00:29:59.571 "type": "rebuild", 00:29:59.571 "target": "spare", 00:29:59.571 "progress": { 00:29:59.571 "blocks": 24576, 00:29:59.571 "percent": 38 00:29:59.571 } 00:29:59.571 }, 00:29:59.571 "base_bdevs_list": [ 00:29:59.571 { 00:29:59.571 "name": "spare", 00:29:59.571 "uuid": "5a0505b2-1c1c-580c-9931-9cc505b758f0", 00:29:59.571 "is_configured": true, 00:29:59.571 "data_offset": 2048, 00:29:59.571 "data_size": 63488 00:29:59.571 }, 00:29:59.571 { 00:29:59.571 "name": null, 00:29:59.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.571 "is_configured": false, 00:29:59.571 "data_offset": 2048, 00:29:59.571 "data_size": 63488 00:29:59.571 }, 00:29:59.571 { 00:29:59.571 "name": "BaseBdev3", 00:29:59.571 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:29:59.571 "is_configured": true, 00:29:59.571 "data_offset": 2048, 00:29:59.571 "data_size": 63488 00:29:59.571 }, 00:29:59.571 { 00:29:59.571 "name": "BaseBdev4", 00:29:59.571 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:29:59.571 "is_configured": true, 00:29:59.571 "data_offset": 2048, 00:29:59.571 "data_size": 63488 00:29:59.571 } 00:29:59.571 ] 00:29:59.571 }' 00:29:59.571 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:59.571 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:59.571 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:59.571 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:59.571 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:59.830 [2024-07-12 07:40:33.688725] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:59.830 [2024-07-12 07:40:33.693187] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:59.830 [2024-07-12 07:40:33.693412] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:59.830 [2024-07-12 07:40:33.693519] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:59.830 [2024-07-12 07:40:33.693556] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:00.089 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:00.089 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:00.089 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:00.089 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:00.089 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:00.089 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:00.089 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:00.089 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:00.089 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:00.089 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:00.089 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:00.089 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:00.089 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:00.089 "name": "raid_bdev1", 00:30:00.089 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:30:00.089 "strip_size_kb": 0, 00:30:00.089 "state": "online", 00:30:00.089 "raid_level": "raid1", 00:30:00.089 "superblock": true, 00:30:00.089 "num_base_bdevs": 4, 00:30:00.089 "num_base_bdevs_discovered": 2, 00:30:00.089 "num_base_bdevs_operational": 2, 00:30:00.089 "base_bdevs_list": [ 00:30:00.089 { 00:30:00.089 "name": null, 00:30:00.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.089 "is_configured": false, 00:30:00.089 "data_offset": 2048, 00:30:00.089 "data_size": 63488 00:30:00.089 }, 00:30:00.089 { 00:30:00.089 "name": null, 00:30:00.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.089 "is_configured": false, 00:30:00.089 "data_offset": 2048, 00:30:00.089 "data_size": 63488 00:30:00.089 }, 00:30:00.089 { 00:30:00.089 "name": "BaseBdev3", 00:30:00.089 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:30:00.089 "is_configured": true, 00:30:00.089 "data_offset": 2048, 00:30:00.089 "data_size": 63488 00:30:00.089 }, 00:30:00.089 { 00:30:00.089 "name": "BaseBdev4", 00:30:00.089 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:30:00.089 "is_configured": true, 00:30:00.089 "data_offset": 2048, 00:30:00.089 "data_size": 63488 
00:30:00.089 } 00:30:00.089 ] 00:30:00.089 }' 00:30:00.089 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:00.089 07:40:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:00.657 07:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:00.915 [2024-07-12 07:40:34.649111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:00.915 [2024-07-12 07:40:34.649420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:00.915 [2024-07-12 07:40:34.649506] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:30:00.915 [2024-07-12 07:40:34.649615] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:00.915 [2024-07-12 07:40:34.650204] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:00.915 [2024-07-12 07:40:34.650348] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:00.915 [2024-07-12 07:40:34.650564] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:00.915 [2024-07-12 07:40:34.650657] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:00.915 [2024-07-12 07:40:34.650730] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:00.915 [2024-07-12 07:40:34.650824] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:00.915 [2024-07-12 07:40:34.658005] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033fd0 00:30:00.915 spare 00:30:00.915 [2024-07-12 07:40:34.660580] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:00.915 07:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:30:01.851 07:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:01.851 07:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:01.851 07:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:01.851 07:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:01.851 07:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:01.851 07:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:01.851 07:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:02.110 07:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:02.110 "name": "raid_bdev1", 00:30:02.110 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:30:02.110 "strip_size_kb": 0, 00:30:02.110 "state": "online", 00:30:02.110 "raid_level": "raid1", 00:30:02.110 "superblock": true, 00:30:02.110 "num_base_bdevs": 4, 00:30:02.110 "num_base_bdevs_discovered": 3, 00:30:02.110 "num_base_bdevs_operational": 3, 00:30:02.110 "process": { 00:30:02.110 "type": "rebuild", 00:30:02.110 "target": 
"spare", 00:30:02.110 "progress": { 00:30:02.110 "blocks": 24576, 00:30:02.110 "percent": 38 00:30:02.110 } 00:30:02.110 }, 00:30:02.110 "base_bdevs_list": [ 00:30:02.110 { 00:30:02.110 "name": "spare", 00:30:02.110 "uuid": "5a0505b2-1c1c-580c-9931-9cc505b758f0", 00:30:02.110 "is_configured": true, 00:30:02.110 "data_offset": 2048, 00:30:02.110 "data_size": 63488 00:30:02.110 }, 00:30:02.110 { 00:30:02.110 "name": null, 00:30:02.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.110 "is_configured": false, 00:30:02.110 "data_offset": 2048, 00:30:02.110 "data_size": 63488 00:30:02.110 }, 00:30:02.110 { 00:30:02.110 "name": "BaseBdev3", 00:30:02.110 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:30:02.110 "is_configured": true, 00:30:02.110 "data_offset": 2048, 00:30:02.110 "data_size": 63488 00:30:02.110 }, 00:30:02.110 { 00:30:02.110 "name": "BaseBdev4", 00:30:02.110 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:30:02.110 "is_configured": true, 00:30:02.110 "data_offset": 2048, 00:30:02.110 "data_size": 63488 00:30:02.110 } 00:30:02.110 ] 00:30:02.110 }' 00:30:02.110 07:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:02.110 07:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:02.110 07:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:02.369 07:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:02.369 07:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:02.369 [2024-07-12 07:40:36.231446] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:02.627 [2024-07-12 07:40:36.272018] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:02.627 [2024-07-12 07:40:36.272259] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:02.627 [2024-07-12 07:40:36.272314] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:02.627 [2024-07-12 07:40:36.272388] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:02.627 07:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:02.627 07:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:02.627 07:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:02.627 07:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:02.627 07:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:02.627 07:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:02.627 07:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:02.627 07:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:02.627 07:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:02.627 07:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:02.627 07:40:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:02.627 07:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:02.627 07:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:02.627 "name": "raid_bdev1", 00:30:02.627 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:30:02.627 "strip_size_kb": 0, 00:30:02.627 "state": "online", 00:30:02.627 "raid_level": "raid1", 00:30:02.627 "superblock": true, 00:30:02.627 "num_base_bdevs": 4, 00:30:02.627 "num_base_bdevs_discovered": 2, 00:30:02.627 "num_base_bdevs_operational": 2, 00:30:02.627 "base_bdevs_list": [ 00:30:02.627 { 00:30:02.627 "name": null, 00:30:02.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.627 "is_configured": false, 00:30:02.627 "data_offset": 2048, 00:30:02.627 "data_size": 63488 00:30:02.627 }, 00:30:02.627 { 00:30:02.627 "name": null, 00:30:02.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.627 "is_configured": false, 00:30:02.627 "data_offset": 2048, 00:30:02.627 "data_size": 63488 00:30:02.627 }, 00:30:02.627 { 00:30:02.627 "name": "BaseBdev3", 00:30:02.627 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:30:02.627 "is_configured": true, 00:30:02.627 "data_offset": 2048, 00:30:02.627 "data_size": 63488 00:30:02.627 }, 00:30:02.627 { 00:30:02.627 "name": "BaseBdev4", 00:30:02.627 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:30:02.627 "is_configured": true, 00:30:02.627 "data_offset": 2048, 00:30:02.627 "data_size": 63488 00:30:02.627 } 00:30:02.627 ] 00:30:02.627 }' 00:30:02.627 07:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:02.627 07:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:03.193 07:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:03.193 07:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:03.193 07:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:03.193 07:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:03.193 07:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:03.193 07:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:03.193 07:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:03.759 07:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:03.759 "name": "raid_bdev1", 00:30:03.759 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:30:03.759 "strip_size_kb": 0, 00:30:03.759 "state": "online", 00:30:03.759 "raid_level": "raid1", 00:30:03.759 "superblock": true, 00:30:03.759 "num_base_bdevs": 4, 00:30:03.759 "num_base_bdevs_discovered": 2, 00:30:03.759 "num_base_bdevs_operational": 2, 00:30:03.759 "base_bdevs_list": [ 00:30:03.759 { 00:30:03.759 "name": null, 00:30:03.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:03.759 "is_configured": false, 00:30:03.759 "data_offset": 2048, 00:30:03.759 "data_size": 63488 00:30:03.759 }, 00:30:03.759 { 00:30:03.759 "name": null, 
00:30:03.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:03.759 "is_configured": false, 00:30:03.759 "data_offset": 2048, 00:30:03.759 "data_size": 63488 00:30:03.759 }, 00:30:03.759 { 00:30:03.759 "name": "BaseBdev3", 00:30:03.759 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:30:03.759 "is_configured": true, 00:30:03.759 "data_offset": 2048, 00:30:03.759 "data_size": 63488 00:30:03.759 }, 00:30:03.759 { 00:30:03.759 "name": "BaseBdev4", 00:30:03.759 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:30:03.759 "is_configured": true, 00:30:03.759 "data_offset": 2048, 00:30:03.759 "data_size": 63488 00:30:03.759 } 00:30:03.759 ] 00:30:03.759 }' 00:30:03.759 07:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:03.760 07:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:03.760 07:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:03.760 07:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:03.760 07:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:30:03.760 07:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:04.018 [2024-07-12 07:40:37.768127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:04.018 [2024-07-12 07:40:37.768464] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:04.018 [2024-07-12 07:40:37.768561] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:30:04.018 [2024-07-12 07:40:37.768671] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:04.018 [2024-07-12 07:40:37.769243] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:04.018 [2024-07-12 07:40:37.769417] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:04.018 [2024-07-12 07:40:37.769601] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:04.018 [2024-07-12 07:40:37.769687] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:04.018 [2024-07-12 07:40:37.769757] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:04.018 BaseBdev1 00:30:04.018 07:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:30:04.954 07:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:04.954 07:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:04.954 07:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:04.954 07:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:04.954 07:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:04.954 07:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:04.954 
07:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:04.954 07:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:04.954 07:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:04.954 07:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:04.954 07:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:04.954 07:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:05.212 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:05.212 "name": "raid_bdev1", 00:30:05.212 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:30:05.212 "strip_size_kb": 0, 00:30:05.212 "state": "online", 00:30:05.212 "raid_level": "raid1", 00:30:05.212 "superblock": true, 00:30:05.212 "num_base_bdevs": 4, 00:30:05.212 "num_base_bdevs_discovered": 2, 00:30:05.212 "num_base_bdevs_operational": 2, 00:30:05.212 "base_bdevs_list": [ 00:30:05.212 { 00:30:05.212 "name": null, 00:30:05.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.212 "is_configured": false, 00:30:05.213 "data_offset": 2048, 00:30:05.213 "data_size": 63488 00:30:05.213 }, 00:30:05.213 { 00:30:05.213 "name": null, 00:30:05.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.213 "is_configured": false, 00:30:05.213 "data_offset": 2048, 00:30:05.213 "data_size": 63488 00:30:05.213 }, 00:30:05.213 { 00:30:05.213 "name": "BaseBdev3", 00:30:05.213 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:30:05.213 "is_configured": true, 00:30:05.213 "data_offset": 2048, 00:30:05.213 "data_size": 63488 00:30:05.213 }, 00:30:05.213 { 00:30:05.213 "name": "BaseBdev4", 00:30:05.213 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:30:05.213 "is_configured": true, 00:30:05.213 "data_offset": 2048, 00:30:05.213 "data_size": 63488 00:30:05.213 } 00:30:05.213 ] 00:30:05.213 }' 00:30:05.213 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:05.213 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:05.781 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:05.781 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:05.781 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:05.781 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:05.781 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:06.040 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:06.040 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:06.040 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:06.040 "name": "raid_bdev1", 00:30:06.040 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:30:06.040 "strip_size_kb": 0, 00:30:06.040 "state": "online", 00:30:06.040 "raid_level": "raid1", 00:30:06.040 
"superblock": true, 00:30:06.040 "num_base_bdevs": 4, 00:30:06.040 "num_base_bdevs_discovered": 2, 00:30:06.040 "num_base_bdevs_operational": 2, 00:30:06.040 "base_bdevs_list": [ 00:30:06.040 { 00:30:06.040 "name": null, 00:30:06.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:06.040 "is_configured": false, 00:30:06.040 "data_offset": 2048, 00:30:06.040 "data_size": 63488 00:30:06.040 }, 00:30:06.040 { 00:30:06.040 "name": null, 00:30:06.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:06.040 "is_configured": false, 00:30:06.040 "data_offset": 2048, 00:30:06.040 "data_size": 63488 00:30:06.040 }, 00:30:06.040 { 00:30:06.040 "name": "BaseBdev3", 00:30:06.040 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:30:06.040 "is_configured": true, 00:30:06.040 "data_offset": 2048, 00:30:06.040 "data_size": 63488 00:30:06.040 }, 00:30:06.040 { 00:30:06.040 "name": "BaseBdev4", 00:30:06.040 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:30:06.040 "is_configured": true, 00:30:06.040 "data_offset": 2048, 00:30:06.040 "data_size": 63488 00:30:06.040 } 00:30:06.040 ] 00:30:06.040 }' 00:30:06.040 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:06.040 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:06.040 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:06.300 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:06.300 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:06.300 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:30:06.300 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:06.300 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:06.300 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:06.300 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:06.300 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:06.300 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:06.300 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:06.300 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:06.300 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:06.300 07:40:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:06.300 [2024-07-12 07:40:40.124437] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:06.300 
[2024-07-12 07:40:40.124878] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:06.300 [2024-07-12 07:40:40.124974] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:06.300 request: 00:30:06.300 { 00:30:06.300 "raid_bdev": "raid_bdev1", 00:30:06.300 "base_bdev": "BaseBdev1", 00:30:06.300 "method": "bdev_raid_add_base_bdev", 00:30:06.300 "req_id": 1 00:30:06.300 } 00:30:06.300 Got JSON-RPC error response 00:30:06.300 response: 00:30:06.300 { 00:30:06.300 "code": -22, 00:30:06.300 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:06.300 } 00:30:06.300 07:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:30:06.300 07:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:06.300 07:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:06.300 07:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:06.300 07:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:30:07.694 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:07.694 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:07.694 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:07.694 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:07.694 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:07.694 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:07.694 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:07.694 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:07.694 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:07.694 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:07.694 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:07.694 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:07.694 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:07.694 "name": "raid_bdev1", 00:30:07.694 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:30:07.694 "strip_size_kb": 0, 00:30:07.694 "state": "online", 00:30:07.694 "raid_level": "raid1", 00:30:07.694 "superblock": true, 00:30:07.694 "num_base_bdevs": 4, 00:30:07.694 "num_base_bdevs_discovered": 2, 00:30:07.694 "num_base_bdevs_operational": 2, 00:30:07.694 "base_bdevs_list": [ 00:30:07.694 { 00:30:07.694 "name": null, 00:30:07.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.694 "is_configured": false, 00:30:07.694 "data_offset": 2048, 00:30:07.694 "data_size": 63488 00:30:07.694 }, 00:30:07.694 { 00:30:07.694 "name": null, 00:30:07.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.694 "is_configured": false, 00:30:07.694 
"data_offset": 2048, 00:30:07.694 "data_size": 63488 00:30:07.694 }, 00:30:07.694 { 00:30:07.694 "name": "BaseBdev3", 00:30:07.694 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:30:07.694 "is_configured": true, 00:30:07.694 "data_offset": 2048, 00:30:07.694 "data_size": 63488 00:30:07.694 }, 00:30:07.694 { 00:30:07.694 "name": "BaseBdev4", 00:30:07.694 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:30:07.694 "is_configured": true, 00:30:07.694 "data_offset": 2048, 00:30:07.694 "data_size": 63488 00:30:07.694 } 00:30:07.694 ] 00:30:07.694 }' 00:30:07.694 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:07.694 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:08.262 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:08.262 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:08.262 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:08.262 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:08.262 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:08.262 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:08.262 07:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:08.520 07:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:08.520 "name": "raid_bdev1", 00:30:08.520 "uuid": "fa2d5774-e51c-4772-b707-86309e99e5c3", 00:30:08.520 "strip_size_kb": 0, 00:30:08.520 "state": "online", 00:30:08.520 "raid_level": "raid1", 00:30:08.520 "superblock": true, 00:30:08.520 "num_base_bdevs": 4, 00:30:08.520 "num_base_bdevs_discovered": 2, 00:30:08.520 "num_base_bdevs_operational": 2, 00:30:08.520 "base_bdevs_list": [ 00:30:08.520 { 00:30:08.520 "name": null, 00:30:08.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:08.520 "is_configured": false, 00:30:08.520 "data_offset": 2048, 00:30:08.520 "data_size": 63488 00:30:08.520 }, 00:30:08.520 { 00:30:08.520 "name": null, 00:30:08.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:08.520 "is_configured": false, 00:30:08.520 "data_offset": 2048, 00:30:08.520 "data_size": 63488 00:30:08.520 }, 00:30:08.520 { 00:30:08.520 "name": "BaseBdev3", 00:30:08.520 "uuid": "4850a5c6-c309-5619-ba0c-c1c6229c140d", 00:30:08.520 "is_configured": true, 00:30:08.520 "data_offset": 2048, 00:30:08.520 "data_size": 63488 00:30:08.520 }, 00:30:08.520 { 00:30:08.520 "name": "BaseBdev4", 00:30:08.520 "uuid": "97a7ad56-6afb-516c-ba33-65ff83bb0b38", 00:30:08.520 "is_configured": true, 00:30:08.520 "data_offset": 2048, 00:30:08.520 "data_size": 63488 00:30:08.520 } 00:30:08.520 ] 00:30:08.520 }' 00:30:08.520 07:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:08.520 07:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:08.520 07:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:08.520 07:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:08.520 07:40:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 158573 00:30:08.521 07:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@946 -- # '[' -z 158573 ']' 00:30:08.521 07:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # kill -0 158573 00:30:08.521 07:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # uname 00:30:08.521 07:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:08.521 07:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 158573 00:30:08.521 07:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:08.521 07:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:08.521 07:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # echo 'killing process with pid 158573' 00:30:08.521 killing process with pid 158573 00:30:08.521 07:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@965 -- # kill 158573 00:30:08.521 Received shutdown signal, test time was about 26.636590 seconds 00:30:08.521 00:30:08.521 Latency(us) 00:30:08.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.521 =================================================================================================================== 00:30:08.521 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:08.521 07:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # wait 158573 00:30:08.521 [2024-07-12 07:40:42.362432] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:08.521 [2024-07-12 07:40:42.362606] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:08.521 [2024-07-12 07:40:42.362719] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:08.521 [2024-07-12 07:40:42.362868] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:30:08.779 [2024-07-12 07:40:42.445502] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:09.038 07:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:30:09.038 00:30:09.038 real 0m31.950s 00:30:09.038 user 0m50.017s 00:30:09.038 sys 0m5.100s 00:30:09.038 07:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:09.038 07:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:09.038 ************************************ 00:30:09.038 END TEST raid_rebuild_test_sb_io 00:30:09.038 ************************************ 00:30:09.297 07:40:42 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' y == y ']' 00:30:09.297 07:40:42 bdev_raid -- bdev/bdev_raid.sh@885 -- # for n in {3..4} 00:30:09.297 07:40:42 bdev_raid -- bdev/bdev_raid.sh@886 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:30:09.297 07:40:42 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:30:09.297 07:40:42 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:09.297 07:40:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:09.297 ************************************ 00:30:09.297 START TEST raid5f_state_function_test 00:30:09.297 ************************************ 00:30:09.297 07:40:42 
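
The teardown just above shut the SPDK app down via killprocess (@946-@970): verify the pid is alive, check the process owner is not a sudo wrapper, then kill and reap it, which is when the shutdown-signal banner and the (empty) latency table are printed. A sketch reconstructed from the visible xtrace; the sudo branch is not taken in this run because the comm name is reactor_0:

killprocess() {
    [ -n "$1" ] || return 1              # @946: a pid is required
    kill -0 "$1"                         # @950: fails if the process is gone
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$1")   # "reactor_0" here
        if [ "$process_name" = sudo ]; then
            : # not taken here; a sudo-wrapped target would need its child killed
        fi
    fi
    echo "killing process with pid $1"
    kill "$1"
    wait "$1"                            # reap the app and surface its exit code
}

After the END TEST banner, bdev_raid.sh@886 launches the next case as raid_state_function_test raid5f 3 false: RAID level raid5f, three base bdevs, superblock disabled, which is the run whose setup follows.
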
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid5f 3 false 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=159467 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 159467' 00:30:09.297 Process raid pid: 159467 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 159467 /var/tmp/spdk-raid.sock 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 159467 ']' 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:09.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:09.297 07:40:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:09.297 [2024-07-12 07:40:43.037021] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:30:09.297 [2024-07-12 07:40:43.037602] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:09.556 [2024-07-12 07:40:43.194347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.556 [2024-07-12 07:40:43.271675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.556 [2024-07-12 07:40:43.355223] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:10.123 07:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:10.123 07:40:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:30:10.123 07:40:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:10.382 [2024-07-12 07:40:44.029200] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:10.382 [2024-07-12 07:40:44.029567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:10.382 [2024-07-12 07:40:44.029753] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:10.382 [2024-07-12 07:40:44.029867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:10.382 [2024-07-12 07:40:44.029950] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:10.382 [2024-07-12 07:40:44.030026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:10.382 07:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:10.382 07:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:10.382 07:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:10.382 07:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:10.382 07:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:10.382 07:40:44 
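[Annotation] waitforlisten, invoked above, blocks until the freshly launched bdev_svc answers RPCs on /var/tmp/spdk-raid.sock. A condensed sketch of the idea (the real helper in autotest_common.sh adds per-retry dead-process detection and configurable limits; the retry count and sleep interval here are assumptions):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk-raid.sock} i=0
        while (( i++ < 100 )); do
            # the app dying while we poll means it will never listen
            kill -0 "$pid" 2> /dev/null || return 1
            # any successful RPC proves the socket is up and serving
            if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }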
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:10.382 07:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:10.382 07:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:10.382 07:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:10.382 07:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:10.382 07:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:10.382 07:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:10.640 07:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:10.640 "name": "Existed_Raid", 00:30:10.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.640 "strip_size_kb": 64, 00:30:10.641 "state": "configuring", 00:30:10.641 "raid_level": "raid5f", 00:30:10.641 "superblock": false, 00:30:10.641 "num_base_bdevs": 3, 00:30:10.641 "num_base_bdevs_discovered": 0, 00:30:10.641 "num_base_bdevs_operational": 3, 00:30:10.641 "base_bdevs_list": [ 00:30:10.641 { 00:30:10.641 "name": "BaseBdev1", 00:30:10.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.641 "is_configured": false, 00:30:10.641 "data_offset": 0, 00:30:10.641 "data_size": 0 00:30:10.641 }, 00:30:10.641 { 00:30:10.641 "name": "BaseBdev2", 00:30:10.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.641 "is_configured": false, 00:30:10.641 "data_offset": 0, 00:30:10.641 "data_size": 0 00:30:10.641 }, 00:30:10.641 { 00:30:10.641 "name": "BaseBdev3", 00:30:10.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.641 "is_configured": false, 00:30:10.641 "data_offset": 0, 00:30:10.641 "data_size": 0 00:30:10.641 } 00:30:10.641 ] 00:30:10.641 }' 00:30:10.641 07:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:10.641 07:40:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:11.208 07:40:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:11.208 [2024-07-12 07:40:45.077204] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:11.208 [2024-07-12 07:40:45.077489] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:30:11.467 07:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:11.467 [2024-07-12 07:40:45.333173] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:11.467 [2024-07-12 07:40:45.333383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:11.467 [2024-07-12 07:40:45.333497] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:11.467 [2024-07-12 07:40:45.333597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:11.467 [2024-07-12 07:40:45.333663] 
bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:11.467 [2024-07-12 07:40:45.333715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:11.735 07:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:11.735 [2024-07-12 07:40:45.518369] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:11.735 BaseBdev1 00:30:11.735 07:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:30:11.735 07:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:30:11.735 07:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:11.735 07:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:11.735 07:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:11.735 07:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:11.735 07:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:11.992 07:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:12.251 [ 00:30:12.251 { 00:30:12.251 "name": "BaseBdev1", 00:30:12.251 "aliases": [ 00:30:12.251 "91c9cfae-c0ca-4a34-9d02-6e600d48d26c" 00:30:12.251 ], 00:30:12.251 "product_name": "Malloc disk", 00:30:12.251 "block_size": 512, 00:30:12.251 "num_blocks": 65536, 00:30:12.251 "uuid": "91c9cfae-c0ca-4a34-9d02-6e600d48d26c", 00:30:12.251 "assigned_rate_limits": { 00:30:12.251 "rw_ios_per_sec": 0, 00:30:12.251 "rw_mbytes_per_sec": 0, 00:30:12.251 "r_mbytes_per_sec": 0, 00:30:12.251 "w_mbytes_per_sec": 0 00:30:12.251 }, 00:30:12.251 "claimed": true, 00:30:12.251 "claim_type": "exclusive_write", 00:30:12.251 "zoned": false, 00:30:12.251 "supported_io_types": { 00:30:12.251 "read": true, 00:30:12.251 "write": true, 00:30:12.251 "unmap": true, 00:30:12.251 "write_zeroes": true, 00:30:12.251 "flush": true, 00:30:12.251 "reset": true, 00:30:12.251 "compare": false, 00:30:12.251 "compare_and_write": false, 00:30:12.251 "abort": true, 00:30:12.251 "nvme_admin": false, 00:30:12.251 "nvme_io": false 00:30:12.251 }, 00:30:12.251 "memory_domains": [ 00:30:12.251 { 00:30:12.251 "dma_device_id": "system", 00:30:12.251 "dma_device_type": 1 00:30:12.251 }, 00:30:12.251 { 00:30:12.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:12.251 "dma_device_type": 2 00:30:12.251 } 00:30:12.251 ], 00:30:12.251 "driver_specific": {} 00:30:12.251 } 00:30:12.251 ] 00:30:12.251 07:40:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:12.251 07:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:12.251 07:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:12.251 07:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:12.251 07:40:45 
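[Annotation] The BaseBdev1 dump above is fully determined by the create call: bdev_malloc_create 32 512 asks for a 32 MiB disk with 512-byte blocks, so:

    echo $(( 32 * 1024 * 1024 / 512 ))   # 65536, the "num_blocks" in the dump

Every Malloc base bdev in this test is created with the same two arguments, which is why the 512/65536 pair repeats in each dump that follows.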
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:12.251 07:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:12.251 07:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:12.251 07:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:12.251 07:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:12.251 07:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:12.251 07:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:12.251 07:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.251 07:40:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:12.509 07:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:12.509 "name": "Existed_Raid", 00:30:12.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.509 "strip_size_kb": 64, 00:30:12.509 "state": "configuring", 00:30:12.509 "raid_level": "raid5f", 00:30:12.509 "superblock": false, 00:30:12.509 "num_base_bdevs": 3, 00:30:12.509 "num_base_bdevs_discovered": 1, 00:30:12.509 "num_base_bdevs_operational": 3, 00:30:12.509 "base_bdevs_list": [ 00:30:12.509 { 00:30:12.509 "name": "BaseBdev1", 00:30:12.509 "uuid": "91c9cfae-c0ca-4a34-9d02-6e600d48d26c", 00:30:12.509 "is_configured": true, 00:30:12.509 "data_offset": 0, 00:30:12.509 "data_size": 65536 00:30:12.509 }, 00:30:12.509 { 00:30:12.509 "name": "BaseBdev2", 00:30:12.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.509 "is_configured": false, 00:30:12.509 "data_offset": 0, 00:30:12.509 "data_size": 0 00:30:12.509 }, 00:30:12.509 { 00:30:12.509 "name": "BaseBdev3", 00:30:12.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.509 "is_configured": false, 00:30:12.509 "data_offset": 0, 00:30:12.509 "data_size": 0 00:30:12.509 } 00:30:12.509 ] 00:30:12.509 }' 00:30:12.509 07:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:12.509 07:40:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.076 07:40:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:13.335 [2024-07-12 07:40:47.022632] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:13.335 [2024-07-12 07:40:47.022802] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:30:13.335 07:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:13.593 [2024-07-12 07:40:47.266716] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:13.593 [2024-07-12 07:40:47.268855] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:13.593 [2024-07-12 07:40:47.269028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: 
base bdev BaseBdev2 doesn't exist now 00:30:13.593 [2024-07-12 07:40:47.269113] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:13.593 [2024-07-12 07:40:47.269167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:13.593 07:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:30:13.593 07:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:13.593 07:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:13.593 07:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:13.593 07:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:13.593 07:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:13.593 07:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:13.593 07:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:13.593 07:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:13.593 07:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:13.593 07:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:13.593 07:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:13.593 07:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:13.593 07:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:13.851 07:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:13.851 "name": "Existed_Raid", 00:30:13.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:13.851 "strip_size_kb": 64, 00:30:13.851 "state": "configuring", 00:30:13.851 "raid_level": "raid5f", 00:30:13.851 "superblock": false, 00:30:13.851 "num_base_bdevs": 3, 00:30:13.851 "num_base_bdevs_discovered": 1, 00:30:13.851 "num_base_bdevs_operational": 3, 00:30:13.851 "base_bdevs_list": [ 00:30:13.851 { 00:30:13.851 "name": "BaseBdev1", 00:30:13.851 "uuid": "91c9cfae-c0ca-4a34-9d02-6e600d48d26c", 00:30:13.851 "is_configured": true, 00:30:13.851 "data_offset": 0, 00:30:13.851 "data_size": 65536 00:30:13.851 }, 00:30:13.851 { 00:30:13.851 "name": "BaseBdev2", 00:30:13.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:13.851 "is_configured": false, 00:30:13.851 "data_offset": 0, 00:30:13.851 "data_size": 0 00:30:13.851 }, 00:30:13.851 { 00:30:13.851 "name": "BaseBdev3", 00:30:13.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:13.851 "is_configured": false, 00:30:13.851 "data_offset": 0, 00:30:13.851 "data_size": 0 00:30:13.851 } 00:30:13.851 ] 00:30:13.851 }' 00:30:13.851 07:40:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:13.851 07:40:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:14.418 07:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:14.419 [2024-07-12 07:40:48.297214] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:14.419 BaseBdev2 00:30:14.678 07:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:30:14.678 07:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:30:14.678 07:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:14.678 07:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:14.678 07:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:14.678 07:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:14.678 07:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:14.678 07:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:14.937 [ 00:30:14.937 { 00:30:14.937 "name": "BaseBdev2", 00:30:14.937 "aliases": [ 00:30:14.937 "1d5b394a-1a35-48ff-8f2a-4ce868cc7316" 00:30:14.937 ], 00:30:14.937 "product_name": "Malloc disk", 00:30:14.937 "block_size": 512, 00:30:14.937 "num_blocks": 65536, 00:30:14.937 "uuid": "1d5b394a-1a35-48ff-8f2a-4ce868cc7316", 00:30:14.937 "assigned_rate_limits": { 00:30:14.937 "rw_ios_per_sec": 0, 00:30:14.937 "rw_mbytes_per_sec": 0, 00:30:14.937 "r_mbytes_per_sec": 0, 00:30:14.937 "w_mbytes_per_sec": 0 00:30:14.937 }, 00:30:14.937 "claimed": true, 00:30:14.937 "claim_type": "exclusive_write", 00:30:14.937 "zoned": false, 00:30:14.937 "supported_io_types": { 00:30:14.937 "read": true, 00:30:14.937 "write": true, 00:30:14.937 "unmap": true, 00:30:14.937 "write_zeroes": true, 00:30:14.937 "flush": true, 00:30:14.937 "reset": true, 00:30:14.937 "compare": false, 00:30:14.937 "compare_and_write": false, 00:30:14.937 "abort": true, 00:30:14.937 "nvme_admin": false, 00:30:14.937 "nvme_io": false 00:30:14.937 }, 00:30:14.937 "memory_domains": [ 00:30:14.937 { 00:30:14.937 "dma_device_id": "system", 00:30:14.937 "dma_device_type": 1 00:30:14.937 }, 00:30:14.937 { 00:30:14.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:14.937 "dma_device_type": 2 00:30:14.937 } 00:30:14.937 ], 00:30:14.937 "driver_specific": {} 00:30:14.937 } 00:30:14.937 ] 00:30:14.937 07:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:14.937 07:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:30:14.937 07:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:14.937 07:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:14.937 07:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:14.937 07:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:14.937 07:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:14.937 07:40:48 
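[Annotation] waitforbdev, used after each bdev_malloc_create above, amounts to a wait-for-examine plus a timed lookup; a sketch assuming the 2000 ms timeout seen in the trace (the real helper also validates its arguments):

    waitforbdev() {
        local bdev_name=$1 timeout=${2:-2000}
        # flush pending examine callbacks first, e.g. the raid module claiming the bdev
        scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
        # -t makes bdev_get_bdevs poll until the bdev appears or the timeout expires
        scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b "$bdev_name" -t "$timeout"
    }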
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:14.937 07:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:14.937 07:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:14.937 07:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:14.937 07:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:14.937 07:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:14.937 07:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:14.937 07:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:15.196 07:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:15.196 "name": "Existed_Raid", 00:30:15.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:15.196 "strip_size_kb": 64, 00:30:15.196 "state": "configuring", 00:30:15.196 "raid_level": "raid5f", 00:30:15.196 "superblock": false, 00:30:15.196 "num_base_bdevs": 3, 00:30:15.196 "num_base_bdevs_discovered": 2, 00:30:15.196 "num_base_bdevs_operational": 3, 00:30:15.196 "base_bdevs_list": [ 00:30:15.196 { 00:30:15.196 "name": "BaseBdev1", 00:30:15.196 "uuid": "91c9cfae-c0ca-4a34-9d02-6e600d48d26c", 00:30:15.196 "is_configured": true, 00:30:15.196 "data_offset": 0, 00:30:15.196 "data_size": 65536 00:30:15.196 }, 00:30:15.196 { 00:30:15.196 "name": "BaseBdev2", 00:30:15.196 "uuid": "1d5b394a-1a35-48ff-8f2a-4ce868cc7316", 00:30:15.196 "is_configured": true, 00:30:15.196 "data_offset": 0, 00:30:15.196 "data_size": 65536 00:30:15.196 }, 00:30:15.196 { 00:30:15.196 "name": "BaseBdev3", 00:30:15.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:15.196 "is_configured": false, 00:30:15.196 "data_offset": 0, 00:30:15.196 "data_size": 0 00:30:15.196 } 00:30:15.196 ] 00:30:15.196 }' 00:30:15.196 07:40:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:15.196 07:40:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:15.764 07:40:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:30:16.023 [2024-07-12 07:40:49.696474] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:16.023 [2024-07-12 07:40:49.696703] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:30:16.023 [2024-07-12 07:40:49.696745] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:30:16.023 [2024-07-12 07:40:49.696978] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:30:16.023 [2024-07-12 07:40:49.697748] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:30:16.023 [2024-07-12 07:40:49.697866] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:30:16.023 [2024-07-12 07:40:49.698130] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:16.023 BaseBdev3 00:30:16.023 07:40:49 
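[Annotation] Claiming BaseBdev3 above completes the set: the array registers its I/O device (0x616000006080) and goes online with blockcnt 131072, which is the expected raid5f capacity, one base bdev's worth of blocks going to parity:

    # raid5f usable blocks = (num_base_bdevs - 1) * blocks per base bdev
    echo $(( (3 - 1) * 65536 ))   # 131072, the "blockcnt" logged above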
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:30:16.023 07:40:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:30:16.023 07:40:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:16.023 07:40:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:16.023 07:40:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:16.023 07:40:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:16.023 07:40:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:16.023 07:40:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:16.283 [ 00:30:16.283 { 00:30:16.283 "name": "BaseBdev3", 00:30:16.283 "aliases": [ 00:30:16.283 "ea286e8a-8453-46c7-8efd-b08b4743e9e0" 00:30:16.283 ], 00:30:16.283 "product_name": "Malloc disk", 00:30:16.283 "block_size": 512, 00:30:16.283 "num_blocks": 65536, 00:30:16.283 "uuid": "ea286e8a-8453-46c7-8efd-b08b4743e9e0", 00:30:16.283 "assigned_rate_limits": { 00:30:16.283 "rw_ios_per_sec": 0, 00:30:16.283 "rw_mbytes_per_sec": 0, 00:30:16.283 "r_mbytes_per_sec": 0, 00:30:16.283 "w_mbytes_per_sec": 0 00:30:16.283 }, 00:30:16.283 "claimed": true, 00:30:16.283 "claim_type": "exclusive_write", 00:30:16.283 "zoned": false, 00:30:16.283 "supported_io_types": { 00:30:16.283 "read": true, 00:30:16.283 "write": true, 00:30:16.283 "unmap": true, 00:30:16.283 "write_zeroes": true, 00:30:16.283 "flush": true, 00:30:16.283 "reset": true, 00:30:16.283 "compare": false, 00:30:16.283 "compare_and_write": false, 00:30:16.283 "abort": true, 00:30:16.283 "nvme_admin": false, 00:30:16.283 "nvme_io": false 00:30:16.283 }, 00:30:16.283 "memory_domains": [ 00:30:16.283 { 00:30:16.283 "dma_device_id": "system", 00:30:16.283 "dma_device_type": 1 00:30:16.283 }, 00:30:16.283 { 00:30:16.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:16.283 "dma_device_type": 2 00:30:16.283 } 00:30:16.283 ], 00:30:16.283 "driver_specific": {} 00:30:16.283 } 00:30:16.283 ] 00:30:16.283 07:40:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:16.283 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:30:16.283 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:16.283 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:30:16.283 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:16.283 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:16.283 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:16.283 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:16.283 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:16.283 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:30:16.283 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:16.283 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:16.283 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:16.283 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:16.283 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:16.543 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:16.543 "name": "Existed_Raid", 00:30:16.543 "uuid": "4209df48-73c6-4602-930f-036b36f87b73", 00:30:16.543 "strip_size_kb": 64, 00:30:16.543 "state": "online", 00:30:16.543 "raid_level": "raid5f", 00:30:16.543 "superblock": false, 00:30:16.543 "num_base_bdevs": 3, 00:30:16.543 "num_base_bdevs_discovered": 3, 00:30:16.543 "num_base_bdevs_operational": 3, 00:30:16.543 "base_bdevs_list": [ 00:30:16.543 { 00:30:16.543 "name": "BaseBdev1", 00:30:16.543 "uuid": "91c9cfae-c0ca-4a34-9d02-6e600d48d26c", 00:30:16.543 "is_configured": true, 00:30:16.543 "data_offset": 0, 00:30:16.543 "data_size": 65536 00:30:16.543 }, 00:30:16.543 { 00:30:16.543 "name": "BaseBdev2", 00:30:16.543 "uuid": "1d5b394a-1a35-48ff-8f2a-4ce868cc7316", 00:30:16.543 "is_configured": true, 00:30:16.543 "data_offset": 0, 00:30:16.543 "data_size": 65536 00:30:16.543 }, 00:30:16.543 { 00:30:16.543 "name": "BaseBdev3", 00:30:16.543 "uuid": "ea286e8a-8453-46c7-8efd-b08b4743e9e0", 00:30:16.543 "is_configured": true, 00:30:16.543 "data_offset": 0, 00:30:16.543 "data_size": 65536 00:30:16.543 } 00:30:16.543 ] 00:30:16.543 }' 00:30:16.543 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:16.543 07:40:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:17.111 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:30:17.111 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:30:17.111 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:17.111 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:17.111 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:17.111 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:30:17.111 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:30:17.111 07:40:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:17.370 [2024-07-12 07:40:51.085013] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:17.370 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:17.370 "name": "Existed_Raid", 00:30:17.370 "aliases": [ 00:30:17.370 "4209df48-73c6-4602-930f-036b36f87b73" 00:30:17.370 ], 00:30:17.370 "product_name": "Raid Volume", 00:30:17.370 "block_size": 512, 00:30:17.370 "num_blocks": 131072, 00:30:17.370 "uuid": 
"4209df48-73c6-4602-930f-036b36f87b73", 00:30:17.370 "assigned_rate_limits": { 00:30:17.370 "rw_ios_per_sec": 0, 00:30:17.370 "rw_mbytes_per_sec": 0, 00:30:17.370 "r_mbytes_per_sec": 0, 00:30:17.370 "w_mbytes_per_sec": 0 00:30:17.370 }, 00:30:17.370 "claimed": false, 00:30:17.370 "zoned": false, 00:30:17.370 "supported_io_types": { 00:30:17.370 "read": true, 00:30:17.370 "write": true, 00:30:17.370 "unmap": false, 00:30:17.370 "write_zeroes": true, 00:30:17.370 "flush": false, 00:30:17.370 "reset": true, 00:30:17.370 "compare": false, 00:30:17.370 "compare_and_write": false, 00:30:17.370 "abort": false, 00:30:17.370 "nvme_admin": false, 00:30:17.370 "nvme_io": false 00:30:17.370 }, 00:30:17.370 "driver_specific": { 00:30:17.370 "raid": { 00:30:17.370 "uuid": "4209df48-73c6-4602-930f-036b36f87b73", 00:30:17.370 "strip_size_kb": 64, 00:30:17.370 "state": "online", 00:30:17.370 "raid_level": "raid5f", 00:30:17.370 "superblock": false, 00:30:17.370 "num_base_bdevs": 3, 00:30:17.370 "num_base_bdevs_discovered": 3, 00:30:17.370 "num_base_bdevs_operational": 3, 00:30:17.370 "base_bdevs_list": [ 00:30:17.370 { 00:30:17.370 "name": "BaseBdev1", 00:30:17.370 "uuid": "91c9cfae-c0ca-4a34-9d02-6e600d48d26c", 00:30:17.370 "is_configured": true, 00:30:17.370 "data_offset": 0, 00:30:17.370 "data_size": 65536 00:30:17.370 }, 00:30:17.370 { 00:30:17.370 "name": "BaseBdev2", 00:30:17.370 "uuid": "1d5b394a-1a35-48ff-8f2a-4ce868cc7316", 00:30:17.370 "is_configured": true, 00:30:17.370 "data_offset": 0, 00:30:17.370 "data_size": 65536 00:30:17.370 }, 00:30:17.370 { 00:30:17.370 "name": "BaseBdev3", 00:30:17.370 "uuid": "ea286e8a-8453-46c7-8efd-b08b4743e9e0", 00:30:17.370 "is_configured": true, 00:30:17.370 "data_offset": 0, 00:30:17.370 "data_size": 65536 00:30:17.370 } 00:30:17.370 ] 00:30:17.370 } 00:30:17.370 } 00:30:17.370 }' 00:30:17.370 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:17.370 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:30:17.370 BaseBdev2 00:30:17.370 BaseBdev3' 00:30:17.370 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:17.370 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:17.370 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:30:17.629 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:17.629 "name": "BaseBdev1", 00:30:17.629 "aliases": [ 00:30:17.629 "91c9cfae-c0ca-4a34-9d02-6e600d48d26c" 00:30:17.629 ], 00:30:17.629 "product_name": "Malloc disk", 00:30:17.629 "block_size": 512, 00:30:17.629 "num_blocks": 65536, 00:30:17.629 "uuid": "91c9cfae-c0ca-4a34-9d02-6e600d48d26c", 00:30:17.629 "assigned_rate_limits": { 00:30:17.629 "rw_ios_per_sec": 0, 00:30:17.629 "rw_mbytes_per_sec": 0, 00:30:17.630 "r_mbytes_per_sec": 0, 00:30:17.630 "w_mbytes_per_sec": 0 00:30:17.630 }, 00:30:17.630 "claimed": true, 00:30:17.630 "claim_type": "exclusive_write", 00:30:17.630 "zoned": false, 00:30:17.630 "supported_io_types": { 00:30:17.630 "read": true, 00:30:17.630 "write": true, 00:30:17.630 "unmap": true, 00:30:17.630 "write_zeroes": true, 00:30:17.630 "flush": true, 00:30:17.630 "reset": true, 00:30:17.630 "compare": false, 00:30:17.630 
"compare_and_write": false, 00:30:17.630 "abort": true, 00:30:17.630 "nvme_admin": false, 00:30:17.630 "nvme_io": false 00:30:17.630 }, 00:30:17.630 "memory_domains": [ 00:30:17.630 { 00:30:17.630 "dma_device_id": "system", 00:30:17.630 "dma_device_type": 1 00:30:17.630 }, 00:30:17.630 { 00:30:17.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:17.630 "dma_device_type": 2 00:30:17.630 } 00:30:17.630 ], 00:30:17.630 "driver_specific": {} 00:30:17.630 }' 00:30:17.630 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:17.630 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:17.630 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:17.630 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:17.630 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:17.630 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:17.630 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:17.630 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:17.888 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:17.888 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:17.888 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:17.888 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:17.888 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:17.888 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:30:17.888 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:18.147 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:18.147 "name": "BaseBdev2", 00:30:18.147 "aliases": [ 00:30:18.147 "1d5b394a-1a35-48ff-8f2a-4ce868cc7316" 00:30:18.147 ], 00:30:18.147 "product_name": "Malloc disk", 00:30:18.147 "block_size": 512, 00:30:18.147 "num_blocks": 65536, 00:30:18.147 "uuid": "1d5b394a-1a35-48ff-8f2a-4ce868cc7316", 00:30:18.147 "assigned_rate_limits": { 00:30:18.147 "rw_ios_per_sec": 0, 00:30:18.147 "rw_mbytes_per_sec": 0, 00:30:18.147 "r_mbytes_per_sec": 0, 00:30:18.147 "w_mbytes_per_sec": 0 00:30:18.147 }, 00:30:18.147 "claimed": true, 00:30:18.147 "claim_type": "exclusive_write", 00:30:18.147 "zoned": false, 00:30:18.147 "supported_io_types": { 00:30:18.147 "read": true, 00:30:18.147 "write": true, 00:30:18.147 "unmap": true, 00:30:18.147 "write_zeroes": true, 00:30:18.147 "flush": true, 00:30:18.147 "reset": true, 00:30:18.147 "compare": false, 00:30:18.147 "compare_and_write": false, 00:30:18.147 "abort": true, 00:30:18.147 "nvme_admin": false, 00:30:18.147 "nvme_io": false 00:30:18.147 }, 00:30:18.147 "memory_domains": [ 00:30:18.147 { 00:30:18.147 "dma_device_id": "system", 00:30:18.147 "dma_device_type": 1 00:30:18.147 }, 00:30:18.147 { 00:30:18.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:18.147 "dma_device_type": 2 00:30:18.147 } 00:30:18.147 ], 00:30:18.147 "driver_specific": {} 00:30:18.147 }' 
00:30:18.147 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:18.147 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:18.147 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:18.147 07:40:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:18.406 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:18.406 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:18.406 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:18.406 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:18.406 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:18.406 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:18.406 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:18.406 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:18.406 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:18.406 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:18.406 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:30:18.710 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:18.710 "name": "BaseBdev3", 00:30:18.710 "aliases": [ 00:30:18.710 "ea286e8a-8453-46c7-8efd-b08b4743e9e0" 00:30:18.710 ], 00:30:18.710 "product_name": "Malloc disk", 00:30:18.710 "block_size": 512, 00:30:18.710 "num_blocks": 65536, 00:30:18.710 "uuid": "ea286e8a-8453-46c7-8efd-b08b4743e9e0", 00:30:18.710 "assigned_rate_limits": { 00:30:18.710 "rw_ios_per_sec": 0, 00:30:18.710 "rw_mbytes_per_sec": 0, 00:30:18.710 "r_mbytes_per_sec": 0, 00:30:18.710 "w_mbytes_per_sec": 0 00:30:18.710 }, 00:30:18.710 "claimed": true, 00:30:18.710 "claim_type": "exclusive_write", 00:30:18.710 "zoned": false, 00:30:18.710 "supported_io_types": { 00:30:18.710 "read": true, 00:30:18.710 "write": true, 00:30:18.710 "unmap": true, 00:30:18.710 "write_zeroes": true, 00:30:18.710 "flush": true, 00:30:18.710 "reset": true, 00:30:18.710 "compare": false, 00:30:18.710 "compare_and_write": false, 00:30:18.710 "abort": true, 00:30:18.710 "nvme_admin": false, 00:30:18.710 "nvme_io": false 00:30:18.710 }, 00:30:18.710 "memory_domains": [ 00:30:18.710 { 00:30:18.710 "dma_device_id": "system", 00:30:18.710 "dma_device_type": 1 00:30:18.710 }, 00:30:18.710 { 00:30:18.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:18.710 "dma_device_type": 2 00:30:18.710 } 00:30:18.710 ], 00:30:18.710 "driver_specific": {} 00:30:18.710 }' 00:30:18.710 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:18.710 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:18.991 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:18.991 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:18.991 07:40:52 
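[Annotation] One detail worth reading out of the dumps: the Raid Volume advertises unmap, flush and abort as false while every Malloc base bdev supports all three, since the raid5f volume does not offer those operations. The same probes the test uses make the difference visible:

    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid \
        | jq '.[0].supported_io_types.unmap'    # false on the raid5f volume
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 \
        | jq '.[0].supported_io_types.unmap'    # true on the Malloc disk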
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:18.991 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:18.991 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:18.991 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:18.991 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:18.992 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:18.992 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:18.992 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:18.992 07:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:19.250 [2024-07-12 07:40:53.109148] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:19.508 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:30:19.508 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:30:19.508 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:30:19.508 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:30:19.508 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:30:19.508 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:30:19.508 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:19.508 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:19.508 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:19.508 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:19.508 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:19.508 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:19.508 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:19.508 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:19.508 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:19.508 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:19.508 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:19.508 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:19.508 "name": "Existed_Raid", 00:30:19.508 "uuid": "4209df48-73c6-4602-930f-036b36f87b73", 00:30:19.508 "strip_size_kb": 64, 00:30:19.508 "state": "online", 00:30:19.508 "raid_level": "raid5f", 00:30:19.508 "superblock": false, 00:30:19.508 "num_base_bdevs": 3, 00:30:19.508 "num_base_bdevs_discovered": 
2, 00:30:19.508 "num_base_bdevs_operational": 2, 00:30:19.508 "base_bdevs_list": [ 00:30:19.508 { 00:30:19.508 "name": null, 00:30:19.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:19.508 "is_configured": false, 00:30:19.508 "data_offset": 0, 00:30:19.508 "data_size": 65536 00:30:19.508 }, 00:30:19.508 { 00:30:19.508 "name": "BaseBdev2", 00:30:19.508 "uuid": "1d5b394a-1a35-48ff-8f2a-4ce868cc7316", 00:30:19.508 "is_configured": true, 00:30:19.508 "data_offset": 0, 00:30:19.508 "data_size": 65536 00:30:19.508 }, 00:30:19.508 { 00:30:19.508 "name": "BaseBdev3", 00:30:19.508 "uuid": "ea286e8a-8453-46c7-8efd-b08b4743e9e0", 00:30:19.508 "is_configured": true, 00:30:19.508 "data_offset": 0, 00:30:19.508 "data_size": 65536 00:30:19.508 } 00:30:19.508 ] 00:30:19.508 }' 00:30:19.508 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:19.508 07:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:20.443 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:30:20.443 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:20.443 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:30:20.443 07:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:20.443 07:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:30:20.443 07:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:20.443 07:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:30:20.702 [2024-07-12 07:40:54.412857] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:20.702 [2024-07-12 07:40:54.413070] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:20.702 [2024-07-12 07:40:54.424107] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:20.702 07:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:30:20.702 07:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:20.702 07:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:30:20.702 07:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:20.961 07:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:30:20.961 07:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:20.961 07:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:30:20.961 [2024-07-12 07:40:54.764198] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:20.961 [2024-07-12 07:40:54.764386] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:30:20.961 07:40:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i++ )) 00:30:20.961 07:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:20.961 07:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:20.961 07:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:30:21.220 07:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:30:21.220 07:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:30:21.220 07:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:30:21.220 07:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:30:21.220 07:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:21.220 07:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:21.479 BaseBdev2 00:30:21.479 07:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:30:21.479 07:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:30:21.479 07:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:21.479 07:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:21.479 07:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:21.479 07:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:21.479 07:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:21.738 07:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:21.738 [ 00:30:21.738 { 00:30:21.738 "name": "BaseBdev2", 00:30:21.738 "aliases": [ 00:30:21.738 "eaa19477-a0b7-46fd-bd81-16e4f3df8126" 00:30:21.738 ], 00:30:21.738 "product_name": "Malloc disk", 00:30:21.738 "block_size": 512, 00:30:21.738 "num_blocks": 65536, 00:30:21.738 "uuid": "eaa19477-a0b7-46fd-bd81-16e4f3df8126", 00:30:21.738 "assigned_rate_limits": { 00:30:21.738 "rw_ios_per_sec": 0, 00:30:21.738 "rw_mbytes_per_sec": 0, 00:30:21.738 "r_mbytes_per_sec": 0, 00:30:21.738 "w_mbytes_per_sec": 0 00:30:21.738 }, 00:30:21.738 "claimed": false, 00:30:21.738 "zoned": false, 00:30:21.738 "supported_io_types": { 00:30:21.738 "read": true, 00:30:21.738 "write": true, 00:30:21.738 "unmap": true, 00:30:21.738 "write_zeroes": true, 00:30:21.738 "flush": true, 00:30:21.738 "reset": true, 00:30:21.738 "compare": false, 00:30:21.738 "compare_and_write": false, 00:30:21.738 "abort": true, 00:30:21.738 "nvme_admin": false, 00:30:21.738 "nvme_io": false 00:30:21.738 }, 00:30:21.738 "memory_domains": [ 00:30:21.738 { 00:30:21.738 "dma_device_id": "system", 00:30:21.738 "dma_device_type": 1 00:30:21.738 }, 00:30:21.738 { 00:30:21.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:21.738 "dma_device_type": 2 00:30:21.738 } 00:30:21.738 ], 00:30:21.738 
"driver_specific": {} 00:30:21.738 } 00:30:21.738 ] 00:30:21.995 07:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:21.995 07:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:30:21.995 07:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:21.995 07:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:30:22.254 BaseBdev3 00:30:22.254 07:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:30:22.254 07:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:30:22.254 07:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:22.254 07:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:22.254 07:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:22.254 07:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:22.254 07:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:22.254 07:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:22.511 [ 00:30:22.511 { 00:30:22.511 "name": "BaseBdev3", 00:30:22.511 "aliases": [ 00:30:22.511 "6187c35c-c716-4f88-bd8e-e02add3830e1" 00:30:22.511 ], 00:30:22.511 "product_name": "Malloc disk", 00:30:22.511 "block_size": 512, 00:30:22.511 "num_blocks": 65536, 00:30:22.511 "uuid": "6187c35c-c716-4f88-bd8e-e02add3830e1", 00:30:22.511 "assigned_rate_limits": { 00:30:22.511 "rw_ios_per_sec": 0, 00:30:22.511 "rw_mbytes_per_sec": 0, 00:30:22.511 "r_mbytes_per_sec": 0, 00:30:22.511 "w_mbytes_per_sec": 0 00:30:22.511 }, 00:30:22.511 "claimed": false, 00:30:22.511 "zoned": false, 00:30:22.511 "supported_io_types": { 00:30:22.511 "read": true, 00:30:22.511 "write": true, 00:30:22.511 "unmap": true, 00:30:22.511 "write_zeroes": true, 00:30:22.511 "flush": true, 00:30:22.511 "reset": true, 00:30:22.511 "compare": false, 00:30:22.511 "compare_and_write": false, 00:30:22.511 "abort": true, 00:30:22.511 "nvme_admin": false, 00:30:22.511 "nvme_io": false 00:30:22.511 }, 00:30:22.511 "memory_domains": [ 00:30:22.511 { 00:30:22.511 "dma_device_id": "system", 00:30:22.511 "dma_device_type": 1 00:30:22.511 }, 00:30:22.511 { 00:30:22.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:22.511 "dma_device_type": 2 00:30:22.511 } 00:30:22.511 ], 00:30:22.511 "driver_specific": {} 00:30:22.511 } 00:30:22.511 ] 00:30:22.511 07:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:22.511 07:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:30:22.511 07:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:22.511 07:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 
00:30:22.770 [2024-07-12 07:40:56.401882] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:22.770 [2024-07-12 07:40:56.402185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:22.770 [2024-07-12 07:40:56.402296] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:22.770 [2024-07-12 07:40:56.404252] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:22.770 07:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:22.770 07:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:22.770 07:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:22.770 07:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:22.770 07:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:22.770 07:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:22.770 07:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:22.770 07:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:22.770 07:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:22.770 07:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:22.770 07:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:22.770 07:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:22.770 07:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:22.770 "name": "Existed_Raid", 00:30:22.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:22.770 "strip_size_kb": 64, 00:30:22.770 "state": "configuring", 00:30:22.770 "raid_level": "raid5f", 00:30:22.770 "superblock": false, 00:30:22.770 "num_base_bdevs": 3, 00:30:22.770 "num_base_bdevs_discovered": 2, 00:30:22.770 "num_base_bdevs_operational": 3, 00:30:22.770 "base_bdevs_list": [ 00:30:22.770 { 00:30:22.770 "name": "BaseBdev1", 00:30:22.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:22.770 "is_configured": false, 00:30:22.770 "data_offset": 0, 00:30:22.770 "data_size": 0 00:30:22.770 }, 00:30:22.770 { 00:30:22.770 "name": "BaseBdev2", 00:30:22.770 "uuid": "eaa19477-a0b7-46fd-bd81-16e4f3df8126", 00:30:22.770 "is_configured": true, 00:30:22.770 "data_offset": 0, 00:30:22.770 "data_size": 65536 00:30:22.770 }, 00:30:22.770 { 00:30:22.770 "name": "BaseBdev3", 00:30:22.770 "uuid": "6187c35c-c716-4f88-bd8e-e02add3830e1", 00:30:22.770 "is_configured": true, 00:30:22.770 "data_offset": 0, 00:30:22.770 "data_size": 65536 00:30:22.770 } 00:30:22.770 ] 00:30:22.770 }' 00:30:22.770 07:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:22.770 07:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:23.337 07:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:23.596 [2024-07-12 07:40:57.294044] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:23.596 07:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:23.596 07:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:23.596 07:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:23.596 07:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:23.596 07:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:23.596 07:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:23.596 07:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:23.596 07:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:23.596 07:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:23.596 07:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:23.596 07:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:23.596 07:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:23.855 07:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:23.855 "name": "Existed_Raid", 00:30:23.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:23.855 "strip_size_kb": 64, 00:30:23.855 "state": "configuring", 00:30:23.855 "raid_level": "raid5f", 00:30:23.855 "superblock": false, 00:30:23.855 "num_base_bdevs": 3, 00:30:23.855 "num_base_bdevs_discovered": 1, 00:30:23.855 "num_base_bdevs_operational": 3, 00:30:23.855 "base_bdevs_list": [ 00:30:23.855 { 00:30:23.855 "name": "BaseBdev1", 00:30:23.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:23.855 "is_configured": false, 00:30:23.855 "data_offset": 0, 00:30:23.855 "data_size": 0 00:30:23.855 }, 00:30:23.855 { 00:30:23.855 "name": null, 00:30:23.855 "uuid": "eaa19477-a0b7-46fd-bd81-16e4f3df8126", 00:30:23.855 "is_configured": false, 00:30:23.855 "data_offset": 0, 00:30:23.855 "data_size": 65536 00:30:23.855 }, 00:30:23.855 { 00:30:23.855 "name": "BaseBdev3", 00:30:23.855 "uuid": "6187c35c-c716-4f88-bd8e-e02add3830e1", 00:30:23.855 "is_configured": true, 00:30:23.855 "data_offset": 0, 00:30:23.855 "data_size": 65536 00:30:23.855 } 00:30:23.855 ] 00:30:23.855 }' 00:30:23.855 07:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:23.855 07:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.423 07:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:24.423 07:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:24.682 07:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 
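After hot-removing BaseBdev2, the harness re-runs verify_raid_bdev_state and then asserts via jq that slot 1 is now unconfigured while remaining in base_bdevs_list as a null-named placeholder. The verification idiom repeated throughout this trace reduces to roughly the following sketch (the RPC call and jq filters are the ones visible above; the assertion list is abridged):

# Hedged sketch of verify_raid_bdev_state as exercised here.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
[[ $(jq -r '.state'         <<<"$info") == configuring ]]
[[ $(jq -r '.raid_level'    <<<"$info") == raid5f ]]
[[ $(jq -r '.strip_size_kb' <<<"$info") == 64 ]]
[[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") == 1 ]]   # only BaseBdev3
# the removed slot stays listed but unconfigured:
[[ $(jq -r '.base_bdevs_list[1].is_configured' <<<"$info") == false ]]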
00:30:24.682 07:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:24.682 [2024-07-12 07:40:58.508976] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:24.682 BaseBdev1 00:30:24.682 07:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:30:24.682 07:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:30:24.682 07:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:24.682 07:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:24.682 07:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:24.682 07:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:24.682 07:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:24.941 07:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:25.200 [ 00:30:25.200 { 00:30:25.200 "name": "BaseBdev1", 00:30:25.200 "aliases": [ 00:30:25.200 "bcf9db28-3884-4883-b39e-f790ab5905e5" 00:30:25.200 ], 00:30:25.200 "product_name": "Malloc disk", 00:30:25.200 "block_size": 512, 00:30:25.200 "num_blocks": 65536, 00:30:25.200 "uuid": "bcf9db28-3884-4883-b39e-f790ab5905e5", 00:30:25.200 "assigned_rate_limits": { 00:30:25.200 "rw_ios_per_sec": 0, 00:30:25.200 "rw_mbytes_per_sec": 0, 00:30:25.200 "r_mbytes_per_sec": 0, 00:30:25.200 "w_mbytes_per_sec": 0 00:30:25.200 }, 00:30:25.200 "claimed": true, 00:30:25.200 "claim_type": "exclusive_write", 00:30:25.200 "zoned": false, 00:30:25.200 "supported_io_types": { 00:30:25.200 "read": true, 00:30:25.200 "write": true, 00:30:25.200 "unmap": true, 00:30:25.200 "write_zeroes": true, 00:30:25.201 "flush": true, 00:30:25.201 "reset": true, 00:30:25.201 "compare": false, 00:30:25.201 "compare_and_write": false, 00:30:25.201 "abort": true, 00:30:25.201 "nvme_admin": false, 00:30:25.201 "nvme_io": false 00:30:25.201 }, 00:30:25.201 "memory_domains": [ 00:30:25.201 { 00:30:25.201 "dma_device_id": "system", 00:30:25.201 "dma_device_type": 1 00:30:25.201 }, 00:30:25.201 { 00:30:25.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:25.201 "dma_device_type": 2 00:30:25.201 } 00:30:25.201 ], 00:30:25.201 "driver_specific": {} 00:30:25.201 } 00:30:25.201 ] 00:30:25.201 07:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:25.201 07:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:25.201 07:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:25.201 07:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:25.201 07:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:25.201 07:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:25.201 07:40:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:25.201 07:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:25.201 07:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:25.201 07:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:25.201 07:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:25.201 07:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:25.201 07:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:25.460 07:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:25.460 "name": "Existed_Raid", 00:30:25.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.460 "strip_size_kb": 64, 00:30:25.460 "state": "configuring", 00:30:25.460 "raid_level": "raid5f", 00:30:25.460 "superblock": false, 00:30:25.460 "num_base_bdevs": 3, 00:30:25.460 "num_base_bdevs_discovered": 2, 00:30:25.460 "num_base_bdevs_operational": 3, 00:30:25.460 "base_bdevs_list": [ 00:30:25.460 { 00:30:25.460 "name": "BaseBdev1", 00:30:25.460 "uuid": "bcf9db28-3884-4883-b39e-f790ab5905e5", 00:30:25.460 "is_configured": true, 00:30:25.460 "data_offset": 0, 00:30:25.460 "data_size": 65536 00:30:25.460 }, 00:30:25.460 { 00:30:25.460 "name": null, 00:30:25.460 "uuid": "eaa19477-a0b7-46fd-bd81-16e4f3df8126", 00:30:25.460 "is_configured": false, 00:30:25.460 "data_offset": 0, 00:30:25.460 "data_size": 65536 00:30:25.460 }, 00:30:25.460 { 00:30:25.460 "name": "BaseBdev3", 00:30:25.460 "uuid": "6187c35c-c716-4f88-bd8e-e02add3830e1", 00:30:25.460 "is_configured": true, 00:30:25.460 "data_offset": 0, 00:30:25.460 "data_size": 65536 00:30:25.460 } 00:30:25.460 ] 00:30:25.460 }' 00:30:25.460 07:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:25.460 07:40:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.028 07:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.028 07:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:26.287 07:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:30:26.287 07:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:30:26.546 [2024-07-12 07:41:00.249335] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:26.546 07:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:26.546 07:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:26.546 07:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:26.546 07:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:26.546 07:41:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:26.546 07:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:26.546 07:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:26.546 07:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:26.546 07:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:26.546 07:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:26.547 07:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.547 07:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:26.805 07:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:26.805 "name": "Existed_Raid", 00:30:26.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.805 "strip_size_kb": 64, 00:30:26.805 "state": "configuring", 00:30:26.805 "raid_level": "raid5f", 00:30:26.805 "superblock": false, 00:30:26.805 "num_base_bdevs": 3, 00:30:26.805 "num_base_bdevs_discovered": 1, 00:30:26.805 "num_base_bdevs_operational": 3, 00:30:26.805 "base_bdevs_list": [ 00:30:26.805 { 00:30:26.805 "name": "BaseBdev1", 00:30:26.805 "uuid": "bcf9db28-3884-4883-b39e-f790ab5905e5", 00:30:26.805 "is_configured": true, 00:30:26.805 "data_offset": 0, 00:30:26.805 "data_size": 65536 00:30:26.805 }, 00:30:26.805 { 00:30:26.805 "name": null, 00:30:26.805 "uuid": "eaa19477-a0b7-46fd-bd81-16e4f3df8126", 00:30:26.805 "is_configured": false, 00:30:26.805 "data_offset": 0, 00:30:26.805 "data_size": 65536 00:30:26.805 }, 00:30:26.805 { 00:30:26.805 "name": null, 00:30:26.805 "uuid": "6187c35c-c716-4f88-bd8e-e02add3830e1", 00:30:26.805 "is_configured": false, 00:30:26.805 "data_offset": 0, 00:30:26.805 "data_size": 65536 00:30:26.805 } 00:30:26.805 ] 00:30:26.805 }' 00:30:26.805 07:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:26.805 07:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.373 07:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:27.373 07:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:27.632 07:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:30:27.632 07:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:27.632 [2024-07-12 07:41:01.425539] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:27.632 07:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:27.632 07:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:27.632 07:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:27.632 07:41:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:27.632 07:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:27.632 07:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:27.632 07:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:27.632 07:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:27.632 07:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:27.632 07:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:27.632 07:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:27.632 07:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:27.891 07:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:27.891 "name": "Existed_Raid", 00:30:27.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.891 "strip_size_kb": 64, 00:30:27.891 "state": "configuring", 00:30:27.891 "raid_level": "raid5f", 00:30:27.891 "superblock": false, 00:30:27.891 "num_base_bdevs": 3, 00:30:27.891 "num_base_bdevs_discovered": 2, 00:30:27.891 "num_base_bdevs_operational": 3, 00:30:27.891 "base_bdevs_list": [ 00:30:27.891 { 00:30:27.891 "name": "BaseBdev1", 00:30:27.891 "uuid": "bcf9db28-3884-4883-b39e-f790ab5905e5", 00:30:27.891 "is_configured": true, 00:30:27.891 "data_offset": 0, 00:30:27.891 "data_size": 65536 00:30:27.891 }, 00:30:27.891 { 00:30:27.891 "name": null, 00:30:27.891 "uuid": "eaa19477-a0b7-46fd-bd81-16e4f3df8126", 00:30:27.891 "is_configured": false, 00:30:27.891 "data_offset": 0, 00:30:27.891 "data_size": 65536 00:30:27.891 }, 00:30:27.891 { 00:30:27.891 "name": "BaseBdev3", 00:30:27.891 "uuid": "6187c35c-c716-4f88-bd8e-e02add3830e1", 00:30:27.891 "is_configured": true, 00:30:27.891 "data_offset": 0, 00:30:27.891 "data_size": 65536 00:30:27.891 } 00:30:27.891 ] 00:30:27.891 }' 00:30:27.891 07:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:27.891 07:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.457 07:41:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:28.457 07:41:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:28.716 07:41:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:30:28.716 07:41:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:28.975 [2024-07-12 07:41:02.674849] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:28.975 07:41:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:28.975 07:41:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:28.975 07:41:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:28.975 07:41:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:28.975 07:41:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:28.975 07:41:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:28.975 07:41:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:28.975 07:41:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:28.975 07:41:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:28.975 07:41:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:28.975 07:41:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:28.975 07:41:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.246 07:41:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:29.246 "name": "Existed_Raid", 00:30:29.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:29.246 "strip_size_kb": 64, 00:30:29.246 "state": "configuring", 00:30:29.246 "raid_level": "raid5f", 00:30:29.246 "superblock": false, 00:30:29.246 "num_base_bdevs": 3, 00:30:29.246 "num_base_bdevs_discovered": 1, 00:30:29.246 "num_base_bdevs_operational": 3, 00:30:29.246 "base_bdevs_list": [ 00:30:29.246 { 00:30:29.246 "name": null, 00:30:29.246 "uuid": "bcf9db28-3884-4883-b39e-f790ab5905e5", 00:30:29.246 "is_configured": false, 00:30:29.246 "data_offset": 0, 00:30:29.246 "data_size": 65536 00:30:29.246 }, 00:30:29.246 { 00:30:29.246 "name": null, 00:30:29.246 "uuid": "eaa19477-a0b7-46fd-bd81-16e4f3df8126", 00:30:29.246 "is_configured": false, 00:30:29.246 "data_offset": 0, 00:30:29.246 "data_size": 65536 00:30:29.246 }, 00:30:29.246 { 00:30:29.246 "name": "BaseBdev3", 00:30:29.246 "uuid": "6187c35c-c716-4f88-bd8e-e02add3830e1", 00:30:29.246 "is_configured": true, 00:30:29.246 "data_offset": 0, 00:30:29.246 "data_size": 65536 00:30:29.246 } 00:30:29.246 ] 00:30:29.246 }' 00:30:29.246 07:41:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:29.246 07:41:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.815 07:41:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:29.815 07:41:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.815 07:41:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:30:29.815 07:41:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:30.074 [2024-07-12 07:41:03.936846] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:30.074 07:41:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:30.074 07:41:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:30.074 07:41:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:30.074 07:41:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:30.074 07:41:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:30.074 07:41:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:30.074 07:41:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:30.074 07:41:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:30.074 07:41:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:30.074 07:41:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:30.333 07:41:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:30.333 07:41:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:30.333 07:41:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:30.333 "name": "Existed_Raid", 00:30:30.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:30.333 "strip_size_kb": 64, 00:30:30.333 "state": "configuring", 00:30:30.333 "raid_level": "raid5f", 00:30:30.333 "superblock": false, 00:30:30.333 "num_base_bdevs": 3, 00:30:30.333 "num_base_bdevs_discovered": 2, 00:30:30.333 "num_base_bdevs_operational": 3, 00:30:30.333 "base_bdevs_list": [ 00:30:30.333 { 00:30:30.333 "name": null, 00:30:30.333 "uuid": "bcf9db28-3884-4883-b39e-f790ab5905e5", 00:30:30.333 "is_configured": false, 00:30:30.333 "data_offset": 0, 00:30:30.333 "data_size": 65536 00:30:30.333 }, 00:30:30.333 { 00:30:30.333 "name": "BaseBdev2", 00:30:30.333 "uuid": "eaa19477-a0b7-46fd-bd81-16e4f3df8126", 00:30:30.333 "is_configured": true, 00:30:30.333 "data_offset": 0, 00:30:30.333 "data_size": 65536 00:30:30.333 }, 00:30:30.333 { 00:30:30.333 "name": "BaseBdev3", 00:30:30.333 "uuid": "6187c35c-c716-4f88-bd8e-e02add3830e1", 00:30:30.333 "is_configured": true, 00:30:30.333 "data_offset": 0, 00:30:30.333 "data_size": 65536 00:30:30.333 } 00:30:30.333 ] 00:30:30.333 }' 00:30:30.333 07:41:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:30.333 07:41:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.901 07:41:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:30.901 07:41:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:31.159 07:41:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:30:31.159 07:41:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:31.159 07:41:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:31.418 07:41:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u bcf9db28-3884-4883-b39e-f790ab5905e5 00:30:31.677 [2024-07-12 07:41:05.366833] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:31.677 [2024-07-12 07:41:05.367046] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:30:31.677 [2024-07-12 07:41:05.367083] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:30:31.677 [2024-07-12 07:41:05.367227] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:30:31.677 [2024-07-12 07:41:05.367864] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:30:31.677 [2024-07-12 07:41:05.367975] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:30:31.677 [2024-07-12 07:41:05.368203] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:31.677 NewBaseBdev 00:30:31.677 07:41:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:30:31.677 07:41:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:30:31.677 07:41:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:31.677 07:41:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:31.677 07:41:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:31.677 07:41:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:31.677 07:41:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:31.936 07:41:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:31.936 [ 00:30:31.936 { 00:30:31.936 "name": "NewBaseBdev", 00:30:31.936 "aliases": [ 00:30:31.936 "bcf9db28-3884-4883-b39e-f790ab5905e5" 00:30:31.936 ], 00:30:31.936 "product_name": "Malloc disk", 00:30:31.936 "block_size": 512, 00:30:31.936 "num_blocks": 65536, 00:30:31.936 "uuid": "bcf9db28-3884-4883-b39e-f790ab5905e5", 00:30:31.936 "assigned_rate_limits": { 00:30:31.936 "rw_ios_per_sec": 0, 00:30:31.936 "rw_mbytes_per_sec": 0, 00:30:31.936 "r_mbytes_per_sec": 0, 00:30:31.936 "w_mbytes_per_sec": 0 00:30:31.936 }, 00:30:31.936 "claimed": true, 00:30:31.936 "claim_type": "exclusive_write", 00:30:31.936 "zoned": false, 00:30:31.936 "supported_io_types": { 00:30:31.936 "read": true, 00:30:31.936 "write": true, 00:30:31.936 "unmap": true, 00:30:31.936 "write_zeroes": true, 00:30:31.936 "flush": true, 00:30:31.936 "reset": true, 00:30:31.936 "compare": false, 00:30:31.936 "compare_and_write": false, 00:30:31.936 "abort": true, 00:30:31.936 "nvme_admin": false, 00:30:31.936 "nvme_io": false 00:30:31.936 }, 00:30:31.936 "memory_domains": [ 00:30:31.936 { 00:30:31.936 "dma_device_id": "system", 00:30:31.936 "dma_device_type": 1 00:30:31.936 }, 00:30:31.936 { 00:30:31.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:31.936 "dma_device_type": 2 00:30:31.936 } 00:30:31.936 ], 00:30:31.936 "driver_specific": {} 00:30:31.936 } 00:30:31.936 ] 00:30:31.936 07:41:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:31.936 07:41:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:30:31.936 07:41:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:31.936 07:41:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:31.936 07:41:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:31.936 07:41:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:31.936 07:41:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:31.936 07:41:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:31.936 07:41:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:31.936 07:41:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:31.936 07:41:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:31.936 07:41:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:31.936 07:41:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:32.195 07:41:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:32.195 "name": "Existed_Raid", 00:30:32.195 "uuid": "ba33120e-48a1-4a69-b4b4-2257ca462c47", 00:30:32.195 "strip_size_kb": 64, 00:30:32.195 "state": "online", 00:30:32.195 "raid_level": "raid5f", 00:30:32.195 "superblock": false, 00:30:32.195 "num_base_bdevs": 3, 00:30:32.195 "num_base_bdevs_discovered": 3, 00:30:32.195 "num_base_bdevs_operational": 3, 00:30:32.195 "base_bdevs_list": [ 00:30:32.195 { 00:30:32.195 "name": "NewBaseBdev", 00:30:32.195 "uuid": "bcf9db28-3884-4883-b39e-f790ab5905e5", 00:30:32.195 "is_configured": true, 00:30:32.195 "data_offset": 0, 00:30:32.195 "data_size": 65536 00:30:32.195 }, 00:30:32.195 { 00:30:32.195 "name": "BaseBdev2", 00:30:32.195 "uuid": "eaa19477-a0b7-46fd-bd81-16e4f3df8126", 00:30:32.195 "is_configured": true, 00:30:32.195 "data_offset": 0, 00:30:32.195 "data_size": 65536 00:30:32.195 }, 00:30:32.195 { 00:30:32.195 "name": "BaseBdev3", 00:30:32.195 "uuid": "6187c35c-c716-4f88-bd8e-e02add3830e1", 00:30:32.195 "is_configured": true, 00:30:32.195 "data_offset": 0, 00:30:32.195 "data_size": 65536 00:30:32.195 } 00:30:32.195 ] 00:30:32.195 }' 00:30:32.195 07:41:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:32.195 07:41:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.763 07:41:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:30:32.763 07:41:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:30:32.763 07:41:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:32.763 07:41:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:32.763 07:41:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # 
local base_bdev_names 00:30:32.763 07:41:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:30:32.763 07:41:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:30:32.763 07:41:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:33.022 [2024-07-12 07:41:06.699222] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:33.022 07:41:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:33.022 "name": "Existed_Raid", 00:30:33.022 "aliases": [ 00:30:33.022 "ba33120e-48a1-4a69-b4b4-2257ca462c47" 00:30:33.022 ], 00:30:33.022 "product_name": "Raid Volume", 00:30:33.022 "block_size": 512, 00:30:33.022 "num_blocks": 131072, 00:30:33.022 "uuid": "ba33120e-48a1-4a69-b4b4-2257ca462c47", 00:30:33.022 "assigned_rate_limits": { 00:30:33.022 "rw_ios_per_sec": 0, 00:30:33.022 "rw_mbytes_per_sec": 0, 00:30:33.022 "r_mbytes_per_sec": 0, 00:30:33.022 "w_mbytes_per_sec": 0 00:30:33.022 }, 00:30:33.022 "claimed": false, 00:30:33.022 "zoned": false, 00:30:33.022 "supported_io_types": { 00:30:33.022 "read": true, 00:30:33.022 "write": true, 00:30:33.022 "unmap": false, 00:30:33.022 "write_zeroes": true, 00:30:33.022 "flush": false, 00:30:33.022 "reset": true, 00:30:33.022 "compare": false, 00:30:33.022 "compare_and_write": false, 00:30:33.022 "abort": false, 00:30:33.022 "nvme_admin": false, 00:30:33.022 "nvme_io": false 00:30:33.022 }, 00:30:33.022 "driver_specific": { 00:30:33.022 "raid": { 00:30:33.022 "uuid": "ba33120e-48a1-4a69-b4b4-2257ca462c47", 00:30:33.022 "strip_size_kb": 64, 00:30:33.022 "state": "online", 00:30:33.022 "raid_level": "raid5f", 00:30:33.022 "superblock": false, 00:30:33.022 "num_base_bdevs": 3, 00:30:33.022 "num_base_bdevs_discovered": 3, 00:30:33.022 "num_base_bdevs_operational": 3, 00:30:33.022 "base_bdevs_list": [ 00:30:33.022 { 00:30:33.022 "name": "NewBaseBdev", 00:30:33.022 "uuid": "bcf9db28-3884-4883-b39e-f790ab5905e5", 00:30:33.022 "is_configured": true, 00:30:33.022 "data_offset": 0, 00:30:33.022 "data_size": 65536 00:30:33.022 }, 00:30:33.022 { 00:30:33.022 "name": "BaseBdev2", 00:30:33.022 "uuid": "eaa19477-a0b7-46fd-bd81-16e4f3df8126", 00:30:33.022 "is_configured": true, 00:30:33.022 "data_offset": 0, 00:30:33.022 "data_size": 65536 00:30:33.022 }, 00:30:33.022 { 00:30:33.022 "name": "BaseBdev3", 00:30:33.022 "uuid": "6187c35c-c716-4f88-bd8e-e02add3830e1", 00:30:33.022 "is_configured": true, 00:30:33.022 "data_offset": 0, 00:30:33.022 "data_size": 65536 00:30:33.022 } 00:30:33.022 ] 00:30:33.022 } 00:30:33.022 } 00:30:33.022 }' 00:30:33.022 07:41:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:33.022 07:41:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:30:33.022 BaseBdev2 00:30:33.022 BaseBdev3' 00:30:33.022 07:41:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:33.022 07:41:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:30:33.022 07:41:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:33.282 07:41:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:33.282 "name": "NewBaseBdev", 00:30:33.282 "aliases": [ 00:30:33.282 "bcf9db28-3884-4883-b39e-f790ab5905e5" 00:30:33.282 ], 00:30:33.282 "product_name": "Malloc disk", 00:30:33.282 "block_size": 512, 00:30:33.282 "num_blocks": 65536, 00:30:33.282 "uuid": "bcf9db28-3884-4883-b39e-f790ab5905e5", 00:30:33.282 "assigned_rate_limits": { 00:30:33.282 "rw_ios_per_sec": 0, 00:30:33.282 "rw_mbytes_per_sec": 0, 00:30:33.282 "r_mbytes_per_sec": 0, 00:30:33.282 "w_mbytes_per_sec": 0 00:30:33.282 }, 00:30:33.282 "claimed": true, 00:30:33.282 "claim_type": "exclusive_write", 00:30:33.282 "zoned": false, 00:30:33.282 "supported_io_types": { 00:30:33.282 "read": true, 00:30:33.282 "write": true, 00:30:33.282 "unmap": true, 00:30:33.282 "write_zeroes": true, 00:30:33.282 "flush": true, 00:30:33.282 "reset": true, 00:30:33.282 "compare": false, 00:30:33.282 "compare_and_write": false, 00:30:33.282 "abort": true, 00:30:33.282 "nvme_admin": false, 00:30:33.282 "nvme_io": false 00:30:33.282 }, 00:30:33.282 "memory_domains": [ 00:30:33.282 { 00:30:33.282 "dma_device_id": "system", 00:30:33.282 "dma_device_type": 1 00:30:33.282 }, 00:30:33.282 { 00:30:33.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:33.282 "dma_device_type": 2 00:30:33.282 } 00:30:33.282 ], 00:30:33.282 "driver_specific": {} 00:30:33.282 }' 00:30:33.282 07:41:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:33.282 07:41:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:33.282 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:33.282 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:33.282 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:33.282 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:33.282 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:33.541 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:33.541 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:33.541 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:33.541 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:33.541 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:33.541 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:33.541 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:30:33.541 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:33.801 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:33.801 "name": "BaseBdev2", 00:30:33.801 "aliases": [ 00:30:33.801 "eaa19477-a0b7-46fd-bd81-16e4f3df8126" 00:30:33.801 ], 00:30:33.801 "product_name": "Malloc disk", 00:30:33.801 "block_size": 512, 00:30:33.801 "num_blocks": 65536, 00:30:33.801 "uuid": "eaa19477-a0b7-46fd-bd81-16e4f3df8126", 00:30:33.801 "assigned_rate_limits": { 00:30:33.801 "rw_ios_per_sec": 0, 00:30:33.801 "rw_mbytes_per_sec": 0, 00:30:33.801 
"r_mbytes_per_sec": 0, 00:30:33.801 "w_mbytes_per_sec": 0 00:30:33.801 }, 00:30:33.801 "claimed": true, 00:30:33.801 "claim_type": "exclusive_write", 00:30:33.801 "zoned": false, 00:30:33.801 "supported_io_types": { 00:30:33.801 "read": true, 00:30:33.801 "write": true, 00:30:33.801 "unmap": true, 00:30:33.801 "write_zeroes": true, 00:30:33.801 "flush": true, 00:30:33.801 "reset": true, 00:30:33.801 "compare": false, 00:30:33.801 "compare_and_write": false, 00:30:33.801 "abort": true, 00:30:33.801 "nvme_admin": false, 00:30:33.801 "nvme_io": false 00:30:33.801 }, 00:30:33.801 "memory_domains": [ 00:30:33.801 { 00:30:33.801 "dma_device_id": "system", 00:30:33.801 "dma_device_type": 1 00:30:33.801 }, 00:30:33.801 { 00:30:33.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:33.801 "dma_device_type": 2 00:30:33.801 } 00:30:33.801 ], 00:30:33.801 "driver_specific": {} 00:30:33.801 }' 00:30:33.801 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:33.801 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:33.801 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:33.801 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:34.060 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:34.060 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:34.060 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:34.060 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:34.060 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:34.060 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:34.060 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:34.060 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:34.060 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:34.060 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:30:34.320 07:41:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:34.320 07:41:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:34.320 "name": "BaseBdev3", 00:30:34.320 "aliases": [ 00:30:34.320 "6187c35c-c716-4f88-bd8e-e02add3830e1" 00:30:34.320 ], 00:30:34.320 "product_name": "Malloc disk", 00:30:34.320 "block_size": 512, 00:30:34.320 "num_blocks": 65536, 00:30:34.320 "uuid": "6187c35c-c716-4f88-bd8e-e02add3830e1", 00:30:34.320 "assigned_rate_limits": { 00:30:34.320 "rw_ios_per_sec": 0, 00:30:34.320 "rw_mbytes_per_sec": 0, 00:30:34.320 "r_mbytes_per_sec": 0, 00:30:34.320 "w_mbytes_per_sec": 0 00:30:34.320 }, 00:30:34.320 "claimed": true, 00:30:34.320 "claim_type": "exclusive_write", 00:30:34.320 "zoned": false, 00:30:34.320 "supported_io_types": { 00:30:34.320 "read": true, 00:30:34.320 "write": true, 00:30:34.320 "unmap": true, 00:30:34.320 "write_zeroes": true, 00:30:34.320 "flush": true, 00:30:34.320 "reset": true, 00:30:34.320 "compare": false, 00:30:34.320 "compare_and_write": false, 
00:30:34.320 "abort": true, 00:30:34.320 "nvme_admin": false, 00:30:34.320 "nvme_io": false 00:30:34.320 }, 00:30:34.320 "memory_domains": [ 00:30:34.320 { 00:30:34.320 "dma_device_id": "system", 00:30:34.320 "dma_device_type": 1 00:30:34.320 }, 00:30:34.320 { 00:30:34.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:34.320 "dma_device_type": 2 00:30:34.320 } 00:30:34.320 ], 00:30:34.320 "driver_specific": {} 00:30:34.320 }' 00:30:34.320 07:41:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:34.579 07:41:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:34.579 07:41:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:34.579 07:41:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:34.579 07:41:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:34.579 07:41:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:34.579 07:41:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:34.579 07:41:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:34.579 07:41:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:34.579 07:41:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:34.839 07:41:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:34.839 07:41:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:34.839 07:41:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:34.839 [2024-07-12 07:41:08.691441] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:34.839 [2024-07-12 07:41:08.691568] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:34.839 [2024-07-12 07:41:08.691779] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:34.839 [2024-07-12 07:41:08.692027] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:34.839 [2024-07-12 07:41:08.692130] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:30:34.839 07:41:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 159467 00:30:34.839 07:41:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 159467 ']' 00:30:34.839 07:41:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # kill -0 159467 00:30:34.839 07:41:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@951 -- # uname 00:30:34.839 07:41:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:34.839 07:41:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 159467 00:30:35.098 07:41:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:35.098 07:41:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:35.098 07:41:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 159467' 00:30:35.098 killing process with pid 159467 00:30:35.098 07:41:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@965 -- # kill 159467 00:30:35.098 [2024-07-12 07:41:08.734713] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:35.098 07:41:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # wait 159467 00:30:35.098 [2024-07-12 07:41:08.763320] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:30:35.358 00:30:35.358 real 0m26.065s 00:30:35.358 user 0m47.807s 00:30:35.358 sys 0m4.739s 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.358 ************************************ 00:30:35.358 END TEST raid5f_state_function_test 00:30:35.358 ************************************ 00:30:35.358 07:41:09 bdev_raid -- bdev/bdev_raid.sh@887 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:30:35.358 07:41:09 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:30:35.358 07:41:09 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:35.358 07:41:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:35.358 ************************************ 00:30:35.358 START TEST raid5f_state_function_test_sb 00:30:35.358 ************************************ 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid5f 3 true 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=160400 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 160400' 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:30:35.358 Process raid pid: 160400 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 160400 /var/tmp/spdk-raid.sock 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 160400 ']' 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:35.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:35.358 07:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:35.358 [2024-07-12 07:41:09.166965] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
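The _sb variant now starting repeats the same state machine with superblock=true: superblock_create_arg becomes -s (bdev_raid.sh@238 above), so every bdev_raid_create in this test carries that flag. With an on-bdev superblock, each 65536-block member reserves a metadata region, which is why the dumps below report data_offset 2048 and data_size 63488 where the non-superblock test showed 0 and 65536. The whole difference in one sketch (both command lines appear verbatim in the two test runs):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Plain test: the whole base bdev is data (data_offset 0, data_size 65536).
$RPC bdev_raid_create -z 64    -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# Superblock test: -s persists raid metadata on each member, shifting the
# data region (data_offset 2048, data_size 63488 in the dumps that follow).
$RPC bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid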
00:30:35.358 [2024-07-12 07:41:09.167373] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.618 [2024-07-12 07:41:09.309835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.618 [2024-07-12 07:41:09.362357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.618 [2024-07-12 07:41:09.408678] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:35.618 07:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:35.618 07:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:30:35.618 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:35.877 [2024-07-12 07:41:09.705792] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:35.877 [2024-07-12 07:41:09.706025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:35.877 [2024-07-12 07:41:09.706108] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:35.878 [2024-07-12 07:41:09.706154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:35.878 [2024-07-12 07:41:09.706180] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:35.878 [2024-07-12 07:41:09.706235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:35.878 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:35.878 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:35.878 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:35.878 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:35.878 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:35.878 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:35.878 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:35.878 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:35.878 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:35.878 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:35.878 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:35.878 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:36.137 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:36.137 "name": "Existed_Raid", 00:30:36.137 
"uuid": "f4812239-5120-44f3-8e7c-a4b1b29186ba", 00:30:36.137 "strip_size_kb": 64, 00:30:36.137 "state": "configuring", 00:30:36.137 "raid_level": "raid5f", 00:30:36.137 "superblock": true, 00:30:36.137 "num_base_bdevs": 3, 00:30:36.137 "num_base_bdevs_discovered": 0, 00:30:36.137 "num_base_bdevs_operational": 3, 00:30:36.137 "base_bdevs_list": [ 00:30:36.137 { 00:30:36.137 "name": "BaseBdev1", 00:30:36.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.137 "is_configured": false, 00:30:36.137 "data_offset": 0, 00:30:36.137 "data_size": 0 00:30:36.137 }, 00:30:36.137 { 00:30:36.137 "name": "BaseBdev2", 00:30:36.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.137 "is_configured": false, 00:30:36.137 "data_offset": 0, 00:30:36.137 "data_size": 0 00:30:36.137 }, 00:30:36.137 { 00:30:36.137 "name": "BaseBdev3", 00:30:36.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.137 "is_configured": false, 00:30:36.137 "data_offset": 0, 00:30:36.137 "data_size": 0 00:30:36.137 } 00:30:36.137 ] 00:30:36.137 }' 00:30:36.137 07:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:36.137 07:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.705 07:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:36.964 [2024-07-12 07:41:10.809827] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:36.964 [2024-07-12 07:41:10.809985] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:30:36.964 07:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:37.222 [2024-07-12 07:41:10.981857] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:37.222 [2024-07-12 07:41:10.982025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:37.222 [2024-07-12 07:41:10.982127] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:37.222 [2024-07-12 07:41:10.982178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:37.222 [2024-07-12 07:41:10.982204] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:37.222 [2024-07-12 07:41:10.982246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:37.222 07:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:37.479 [2024-07-12 07:41:11.259079] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:37.479 BaseBdev1 00:30:37.479 07:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:30:37.479 07:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:30:37.479 07:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:37.479 07:41:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@897 -- # local i 00:30:37.479 07:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:37.479 07:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:37.480 07:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:37.738 07:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:37.996 [ 00:30:37.996 { 00:30:37.996 "name": "BaseBdev1", 00:30:37.996 "aliases": [ 00:30:37.996 "423f9d2d-538e-429f-b11b-70cdd7fb727c" 00:30:37.996 ], 00:30:37.996 "product_name": "Malloc disk", 00:30:37.996 "block_size": 512, 00:30:37.996 "num_blocks": 65536, 00:30:37.996 "uuid": "423f9d2d-538e-429f-b11b-70cdd7fb727c", 00:30:37.996 "assigned_rate_limits": { 00:30:37.996 "rw_ios_per_sec": 0, 00:30:37.996 "rw_mbytes_per_sec": 0, 00:30:37.996 "r_mbytes_per_sec": 0, 00:30:37.996 "w_mbytes_per_sec": 0 00:30:37.996 }, 00:30:37.996 "claimed": true, 00:30:37.996 "claim_type": "exclusive_write", 00:30:37.996 "zoned": false, 00:30:37.996 "supported_io_types": { 00:30:37.996 "read": true, 00:30:37.996 "write": true, 00:30:37.996 "unmap": true, 00:30:37.996 "write_zeroes": true, 00:30:37.996 "flush": true, 00:30:37.996 "reset": true, 00:30:37.996 "compare": false, 00:30:37.996 "compare_and_write": false, 00:30:37.996 "abort": true, 00:30:37.996 "nvme_admin": false, 00:30:37.996 "nvme_io": false 00:30:37.996 }, 00:30:37.996 "memory_domains": [ 00:30:37.996 { 00:30:37.996 "dma_device_id": "system", 00:30:37.996 "dma_device_type": 1 00:30:37.996 }, 00:30:37.996 { 00:30:37.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:37.996 "dma_device_type": 2 00:30:37.996 } 00:30:37.996 ], 00:30:37.996 "driver_specific": {} 00:30:37.996 } 00:30:37.996 ] 00:30:37.996 07:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:37.996 07:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:37.996 07:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:37.996 07:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:37.996 07:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:37.996 07:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:37.996 07:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:37.996 07:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:37.996 07:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:37.996 07:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:37.996 07:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:37.996 07:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:37.996 
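The trace above is the harness's waitforbdev pattern: create a malloc base bdev over the RPC socket, flush pending examine callbacks, then poll for the bdev with a timeout. A minimal standalone sketch of the same three calls, reusing the socket path, sizes, and 2000 ms timeout visible in this run (the RPC variable is introduced here only for brevity):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_malloc_create 32 512 -b BaseBdev1   # 32 MiB volume, 512-byte blocks
    $RPC bdev_wait_for_examine                    # let examine-on-create callbacks finish
    $RPC bdev_get_bdevs -b BaseBdev1 -t 2000      # poll up to 2000 ms for the bdev to appear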
07:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:37.996 07:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:37.996 "name": "Existed_Raid", 00:30:37.996 "uuid": "1db1ed4c-2342-49de-b79c-bfe235f68875", 00:30:37.996 "strip_size_kb": 64, 00:30:37.996 "state": "configuring", 00:30:37.996 "raid_level": "raid5f", 00:30:37.996 "superblock": true, 00:30:37.996 "num_base_bdevs": 3, 00:30:37.996 "num_base_bdevs_discovered": 1, 00:30:37.996 "num_base_bdevs_operational": 3, 00:30:37.996 "base_bdevs_list": [ 00:30:37.996 { 00:30:37.996 "name": "BaseBdev1", 00:30:37.996 "uuid": "423f9d2d-538e-429f-b11b-70cdd7fb727c", 00:30:37.996 "is_configured": true, 00:30:37.996 "data_offset": 2048, 00:30:37.996 "data_size": 63488 00:30:37.996 }, 00:30:37.996 { 00:30:37.996 "name": "BaseBdev2", 00:30:37.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:37.996 "is_configured": false, 00:30:37.996 "data_offset": 0, 00:30:37.996 "data_size": 0 00:30:37.996 }, 00:30:37.996 { 00:30:37.996 "name": "BaseBdev3", 00:30:37.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:37.997 "is_configured": false, 00:30:37.997 "data_offset": 0, 00:30:37.997 "data_size": 0 00:30:37.997 } 00:30:37.997 ] 00:30:37.997 }' 00:30:37.997 07:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:37.997 07:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.563 07:41:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:38.821 [2024-07-12 07:41:12.523299] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:38.821 [2024-07-12 07:41:12.523459] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:30:38.821 07:41:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:38.821 [2024-07-12 07:41:12.699378] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:38.821 [2024-07-12 07:41:12.701483] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:38.821 [2024-07-12 07:41:12.701642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:38.821 [2024-07-12 07:41:12.701751] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:38.821 [2024-07-12 07:41:12.701806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:39.078 07:41:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:30:39.078 07:41:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:39.078 07:41:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:39.078 07:41:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:39.078 07:41:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:39.078 
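Every verify_raid_bdev_state call in this log reduces to the two commands traced at bdev_raid.sh@126: dump all raid bdevs and select the one under test with jq. A hedged sketch of that check on its own, with the expected state held in a shell variable (names and socket path are the ones from this run; the variable names are illustrative):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    expected=configuring
    raid_bdev_info=$($RPC bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid")')
    state=$(jq -r .state <<< "$raid_bdev_info")
    [[ $state == "$expected" ]] || echo "unexpected state: $state" >&2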
07:41:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:39.078 07:41:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:39.078 07:41:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:39.078 07:41:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:39.078 07:41:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:39.078 07:41:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:39.078 07:41:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:39.078 07:41:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:39.078 07:41:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:39.078 07:41:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:39.078 "name": "Existed_Raid", 00:30:39.078 "uuid": "f268996c-8c62-470a-a9c3-6fc39224ef3e", 00:30:39.078 "strip_size_kb": 64, 00:30:39.078 "state": "configuring", 00:30:39.078 "raid_level": "raid5f", 00:30:39.078 "superblock": true, 00:30:39.078 "num_base_bdevs": 3, 00:30:39.078 "num_base_bdevs_discovered": 1, 00:30:39.078 "num_base_bdevs_operational": 3, 00:30:39.078 "base_bdevs_list": [ 00:30:39.078 { 00:30:39.078 "name": "BaseBdev1", 00:30:39.078 "uuid": "423f9d2d-538e-429f-b11b-70cdd7fb727c", 00:30:39.078 "is_configured": true, 00:30:39.078 "data_offset": 2048, 00:30:39.078 "data_size": 63488 00:30:39.078 }, 00:30:39.078 { 00:30:39.078 "name": "BaseBdev2", 00:30:39.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.078 "is_configured": false, 00:30:39.078 "data_offset": 0, 00:30:39.078 "data_size": 0 00:30:39.078 }, 00:30:39.078 { 00:30:39.078 "name": "BaseBdev3", 00:30:39.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.078 "is_configured": false, 00:30:39.078 "data_offset": 0, 00:30:39.078 "data_size": 0 00:30:39.078 } 00:30:39.078 ] 00:30:39.078 }' 00:30:39.078 07:41:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:39.078 07:41:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.667 07:41:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:39.667 [2024-07-12 07:41:13.540981] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:39.667 BaseBdev2 00:30:39.925 07:41:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:30:39.925 07:41:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:30:39.925 07:41:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:39.925 07:41:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:30:39.925 07:41:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:39.925 07:41:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:39.925 07:41:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:39.925 07:41:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:40.183 [ 00:30:40.183 { 00:30:40.183 "name": "BaseBdev2", 00:30:40.183 "aliases": [ 00:30:40.183 "ddb21323-c52b-4d13-866d-756b4dedcf2b" 00:30:40.183 ], 00:30:40.183 "product_name": "Malloc disk", 00:30:40.183 "block_size": 512, 00:30:40.183 "num_blocks": 65536, 00:30:40.183 "uuid": "ddb21323-c52b-4d13-866d-756b4dedcf2b", 00:30:40.183 "assigned_rate_limits": { 00:30:40.183 "rw_ios_per_sec": 0, 00:30:40.183 "rw_mbytes_per_sec": 0, 00:30:40.183 "r_mbytes_per_sec": 0, 00:30:40.183 "w_mbytes_per_sec": 0 00:30:40.183 }, 00:30:40.183 "claimed": true, 00:30:40.183 "claim_type": "exclusive_write", 00:30:40.183 "zoned": false, 00:30:40.183 "supported_io_types": { 00:30:40.183 "read": true, 00:30:40.183 "write": true, 00:30:40.183 "unmap": true, 00:30:40.183 "write_zeroes": true, 00:30:40.183 "flush": true, 00:30:40.183 "reset": true, 00:30:40.183 "compare": false, 00:30:40.183 "compare_and_write": false, 00:30:40.183 "abort": true, 00:30:40.183 "nvme_admin": false, 00:30:40.183 "nvme_io": false 00:30:40.183 }, 00:30:40.183 "memory_domains": [ 00:30:40.183 { 00:30:40.183 "dma_device_id": "system", 00:30:40.183 "dma_device_type": 1 00:30:40.183 }, 00:30:40.183 { 00:30:40.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:40.183 "dma_device_type": 2 00:30:40.183 } 00:30:40.183 ], 00:30:40.183 "driver_specific": {} 00:30:40.183 } 00:30:40.183 ] 00:30:40.183 07:41:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:40.183 07:41:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:30:40.183 07:41:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:40.183 07:41:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:40.183 07:41:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:40.183 07:41:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:40.183 07:41:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:40.183 07:41:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:40.183 07:41:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:40.183 07:41:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:40.183 07:41:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:40.183 07:41:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:40.183 07:41:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:40.183 07:41:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:40.183 
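The (( i = 1 )); (( i < num_base_bdevs )) loop traced at @265 adds one base bdev per pass and re-verifies the array after each, so num_base_bdevs_discovered climbs from 1 toward 3 while the state stays configuring. A compressed sketch of the loop body under the same naming scheme (BaseBdev1 already exists before the loop starts; the explicit `for i in 1 2` stands in for the harness's counter):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2; do                                  # i = 1 .. num_base_bdevs - 1
        $RPC bdev_malloc_create 32 512 -b "BaseBdev$((i + 1))"
        $RPC bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'
    done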
07:41:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:40.442 07:41:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:40.442 "name": "Existed_Raid", 00:30:40.442 "uuid": "f268996c-8c62-470a-a9c3-6fc39224ef3e", 00:30:40.442 "strip_size_kb": 64, 00:30:40.442 "state": "configuring", 00:30:40.442 "raid_level": "raid5f", 00:30:40.442 "superblock": true, 00:30:40.442 "num_base_bdevs": 3, 00:30:40.442 "num_base_bdevs_discovered": 2, 00:30:40.442 "num_base_bdevs_operational": 3, 00:30:40.442 "base_bdevs_list": [ 00:30:40.442 { 00:30:40.442 "name": "BaseBdev1", 00:30:40.442 "uuid": "423f9d2d-538e-429f-b11b-70cdd7fb727c", 00:30:40.442 "is_configured": true, 00:30:40.442 "data_offset": 2048, 00:30:40.442 "data_size": 63488 00:30:40.442 }, 00:30:40.442 { 00:30:40.442 "name": "BaseBdev2", 00:30:40.442 "uuid": "ddb21323-c52b-4d13-866d-756b4dedcf2b", 00:30:40.442 "is_configured": true, 00:30:40.442 "data_offset": 2048, 00:30:40.442 "data_size": 63488 00:30:40.442 }, 00:30:40.442 { 00:30:40.442 "name": "BaseBdev3", 00:30:40.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.442 "is_configured": false, 00:30:40.442 "data_offset": 0, 00:30:40.442 "data_size": 0 00:30:40.442 } 00:30:40.442 ] 00:30:40.442 }' 00:30:40.442 07:41:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:40.442 07:41:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.010 07:41:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:30:41.268 [2024-07-12 07:41:14.932001] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:41.268 [2024-07-12 07:41:14.932362] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:30:41.268 BaseBdev3 00:30:41.268 [2024-07-12 07:41:14.933559] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:30:41.268 [2024-07-12 07:41:14.934094] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:30:41.268 [2024-07-12 07:41:14.936588] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:30:41.268 [2024-07-12 07:41:14.936894] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:30:41.268 [2024-07-12 07:41:14.937670] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:41.268 07:41:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:30:41.268 07:41:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:30:41.268 07:41:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:41.268 07:41:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:30:41.268 07:41:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:41.268 07:41:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:41.268 07:41:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:30:41.268 07:41:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:41.527 [ 00:30:41.527 { 00:30:41.527 "name": "BaseBdev3", 00:30:41.527 "aliases": [ 00:30:41.527 "1cbd40cb-0bef-464e-8bc6-227c612a7741" 00:30:41.527 ], 00:30:41.527 "product_name": "Malloc disk", 00:30:41.527 "block_size": 512, 00:30:41.527 "num_blocks": 65536, 00:30:41.527 "uuid": "1cbd40cb-0bef-464e-8bc6-227c612a7741", 00:30:41.527 "assigned_rate_limits": { 00:30:41.527 "rw_ios_per_sec": 0, 00:30:41.527 "rw_mbytes_per_sec": 0, 00:30:41.527 "r_mbytes_per_sec": 0, 00:30:41.527 "w_mbytes_per_sec": 0 00:30:41.527 }, 00:30:41.527 "claimed": true, 00:30:41.527 "claim_type": "exclusive_write", 00:30:41.527 "zoned": false, 00:30:41.527 "supported_io_types": { 00:30:41.527 "read": true, 00:30:41.527 "write": true, 00:30:41.527 "unmap": true, 00:30:41.527 "write_zeroes": true, 00:30:41.527 "flush": true, 00:30:41.527 "reset": true, 00:30:41.527 "compare": false, 00:30:41.527 "compare_and_write": false, 00:30:41.527 "abort": true, 00:30:41.527 "nvme_admin": false, 00:30:41.527 "nvme_io": false 00:30:41.527 }, 00:30:41.527 "memory_domains": [ 00:30:41.527 { 00:30:41.527 "dma_device_id": "system", 00:30:41.527 "dma_device_type": 1 00:30:41.527 }, 00:30:41.527 { 00:30:41.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:41.527 "dma_device_type": 2 00:30:41.527 } 00:30:41.527 ], 00:30:41.527 "driver_specific": {} 00:30:41.527 } 00:30:41.527 ] 00:30:41.527 07:41:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:41.527 07:41:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:30:41.527 07:41:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:41.527 07:41:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:30:41.527 07:41:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:41.527 07:41:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:41.527 07:41:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:41.527 07:41:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:41.527 07:41:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:41.527 07:41:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:41.527 07:41:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:41.527 07:41:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:41.527 07:41:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:41.528 07:41:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:41.528 07:41:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:41.787 07:41:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:30:41.787 "name": "Existed_Raid", 00:30:41.787 "uuid": "f268996c-8c62-470a-a9c3-6fc39224ef3e", 00:30:41.787 "strip_size_kb": 64, 00:30:41.787 "state": "online", 00:30:41.787 "raid_level": "raid5f", 00:30:41.787 "superblock": true, 00:30:41.787 "num_base_bdevs": 3, 00:30:41.787 "num_base_bdevs_discovered": 3, 00:30:41.787 "num_base_bdevs_operational": 3, 00:30:41.787 "base_bdevs_list": [ 00:30:41.787 { 00:30:41.787 "name": "BaseBdev1", 00:30:41.787 "uuid": "423f9d2d-538e-429f-b11b-70cdd7fb727c", 00:30:41.787 "is_configured": true, 00:30:41.787 "data_offset": 2048, 00:30:41.787 "data_size": 63488 00:30:41.787 }, 00:30:41.787 { 00:30:41.787 "name": "BaseBdev2", 00:30:41.787 "uuid": "ddb21323-c52b-4d13-866d-756b4dedcf2b", 00:30:41.787 "is_configured": true, 00:30:41.787 "data_offset": 2048, 00:30:41.787 "data_size": 63488 00:30:41.787 }, 00:30:41.787 { 00:30:41.787 "name": "BaseBdev3", 00:30:41.787 "uuid": "1cbd40cb-0bef-464e-8bc6-227c612a7741", 00:30:41.787 "is_configured": true, 00:30:41.787 "data_offset": 2048, 00:30:41.787 "data_size": 63488 00:30:41.787 } 00:30:41.787 ] 00:30:41.787 }' 00:30:41.787 07:41:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:41.787 07:41:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.388 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:30:42.388 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:30:42.388 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:42.388 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:42.388 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:42.388 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:30:42.388 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:42.388 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:30:42.388 [2024-07-12 07:41:16.189982] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:42.388 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:42.388 "name": "Existed_Raid", 00:30:42.388 "aliases": [ 00:30:42.388 "f268996c-8c62-470a-a9c3-6fc39224ef3e" 00:30:42.388 ], 00:30:42.388 "product_name": "Raid Volume", 00:30:42.388 "block_size": 512, 00:30:42.388 "num_blocks": 126976, 00:30:42.388 "uuid": "f268996c-8c62-470a-a9c3-6fc39224ef3e", 00:30:42.388 "assigned_rate_limits": { 00:30:42.388 "rw_ios_per_sec": 0, 00:30:42.388 "rw_mbytes_per_sec": 0, 00:30:42.388 "r_mbytes_per_sec": 0, 00:30:42.388 "w_mbytes_per_sec": 0 00:30:42.388 }, 00:30:42.388 "claimed": false, 00:30:42.388 "zoned": false, 00:30:42.388 "supported_io_types": { 00:30:42.388 "read": true, 00:30:42.388 "write": true, 00:30:42.388 "unmap": false, 00:30:42.388 "write_zeroes": true, 00:30:42.388 "flush": false, 00:30:42.388 "reset": true, 00:30:42.388 "compare": false, 00:30:42.388 "compare_and_write": false, 00:30:42.388 "abort": false, 00:30:42.388 "nvme_admin": false, 00:30:42.388 "nvme_io": false 00:30:42.388 }, 00:30:42.388 "driver_specific": { 00:30:42.388 
"raid": { 00:30:42.388 "uuid": "f268996c-8c62-470a-a9c3-6fc39224ef3e", 00:30:42.388 "strip_size_kb": 64, 00:30:42.388 "state": "online", 00:30:42.388 "raid_level": "raid5f", 00:30:42.388 "superblock": true, 00:30:42.388 "num_base_bdevs": 3, 00:30:42.388 "num_base_bdevs_discovered": 3, 00:30:42.388 "num_base_bdevs_operational": 3, 00:30:42.388 "base_bdevs_list": [ 00:30:42.388 { 00:30:42.388 "name": "BaseBdev1", 00:30:42.388 "uuid": "423f9d2d-538e-429f-b11b-70cdd7fb727c", 00:30:42.388 "is_configured": true, 00:30:42.388 "data_offset": 2048, 00:30:42.388 "data_size": 63488 00:30:42.388 }, 00:30:42.388 { 00:30:42.388 "name": "BaseBdev2", 00:30:42.388 "uuid": "ddb21323-c52b-4d13-866d-756b4dedcf2b", 00:30:42.388 "is_configured": true, 00:30:42.388 "data_offset": 2048, 00:30:42.388 "data_size": 63488 00:30:42.388 }, 00:30:42.388 { 00:30:42.388 "name": "BaseBdev3", 00:30:42.388 "uuid": "1cbd40cb-0bef-464e-8bc6-227c612a7741", 00:30:42.388 "is_configured": true, 00:30:42.388 "data_offset": 2048, 00:30:42.388 "data_size": 63488 00:30:42.388 } 00:30:42.388 ] 00:30:42.388 } 00:30:42.388 } 00:30:42.388 }' 00:30:42.388 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:42.388 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:30:42.388 BaseBdev2 00:30:42.388 BaseBdev3' 00:30:42.388 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:42.388 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:42.388 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:30:42.646 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:42.646 "name": "BaseBdev1", 00:30:42.646 "aliases": [ 00:30:42.646 "423f9d2d-538e-429f-b11b-70cdd7fb727c" 00:30:42.646 ], 00:30:42.646 "product_name": "Malloc disk", 00:30:42.646 "block_size": 512, 00:30:42.646 "num_blocks": 65536, 00:30:42.646 "uuid": "423f9d2d-538e-429f-b11b-70cdd7fb727c", 00:30:42.646 "assigned_rate_limits": { 00:30:42.646 "rw_ios_per_sec": 0, 00:30:42.646 "rw_mbytes_per_sec": 0, 00:30:42.646 "r_mbytes_per_sec": 0, 00:30:42.646 "w_mbytes_per_sec": 0 00:30:42.646 }, 00:30:42.646 "claimed": true, 00:30:42.646 "claim_type": "exclusive_write", 00:30:42.646 "zoned": false, 00:30:42.646 "supported_io_types": { 00:30:42.646 "read": true, 00:30:42.646 "write": true, 00:30:42.646 "unmap": true, 00:30:42.646 "write_zeroes": true, 00:30:42.646 "flush": true, 00:30:42.646 "reset": true, 00:30:42.646 "compare": false, 00:30:42.646 "compare_and_write": false, 00:30:42.646 "abort": true, 00:30:42.646 "nvme_admin": false, 00:30:42.646 "nvme_io": false 00:30:42.646 }, 00:30:42.646 "memory_domains": [ 00:30:42.646 { 00:30:42.646 "dma_device_id": "system", 00:30:42.646 "dma_device_type": 1 00:30:42.646 }, 00:30:42.646 { 00:30:42.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:42.646 "dma_device_type": 2 00:30:42.646 } 00:30:42.646 ], 00:30:42.646 "driver_specific": {} 00:30:42.646 }' 00:30:42.646 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:42.646 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:42.646 07:41:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:42.646 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:42.904 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:42.904 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:42.904 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:42.904 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:42.904 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:42.904 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:42.904 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:43.163 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:43.163 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:43.163 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:30:43.163 07:41:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:43.163 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:43.163 "name": "BaseBdev2", 00:30:43.163 "aliases": [ 00:30:43.163 "ddb21323-c52b-4d13-866d-756b4dedcf2b" 00:30:43.163 ], 00:30:43.163 "product_name": "Malloc disk", 00:30:43.163 "block_size": 512, 00:30:43.163 "num_blocks": 65536, 00:30:43.163 "uuid": "ddb21323-c52b-4d13-866d-756b4dedcf2b", 00:30:43.163 "assigned_rate_limits": { 00:30:43.163 "rw_ios_per_sec": 0, 00:30:43.163 "rw_mbytes_per_sec": 0, 00:30:43.163 "r_mbytes_per_sec": 0, 00:30:43.163 "w_mbytes_per_sec": 0 00:30:43.163 }, 00:30:43.163 "claimed": true, 00:30:43.163 "claim_type": "exclusive_write", 00:30:43.163 "zoned": false, 00:30:43.163 "supported_io_types": { 00:30:43.163 "read": true, 00:30:43.163 "write": true, 00:30:43.163 "unmap": true, 00:30:43.163 "write_zeroes": true, 00:30:43.163 "flush": true, 00:30:43.163 "reset": true, 00:30:43.163 "compare": false, 00:30:43.163 "compare_and_write": false, 00:30:43.163 "abort": true, 00:30:43.163 "nvme_admin": false, 00:30:43.163 "nvme_io": false 00:30:43.163 }, 00:30:43.163 "memory_domains": [ 00:30:43.163 { 00:30:43.163 "dma_device_id": "system", 00:30:43.163 "dma_device_type": 1 00:30:43.163 }, 00:30:43.163 { 00:30:43.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:43.163 "dma_device_type": 2 00:30:43.163 } 00:30:43.163 ], 00:30:43.163 "driver_specific": {} 00:30:43.163 }' 00:30:43.163 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:43.422 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:43.422 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:43.422 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:43.422 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:43.422 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:43.422 
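The @205-@208 lines here are verify_raid_bdev_properties at work: for each configured base bdev name pulled from driver_specific.raid.base_bdevs_list (the @201 jq filter), four probes confirm a 512-byte block size and no metadata, interleave, or DIF. A sketch of those probes for one bdev, assuming the same Malloc disks as this run (keys absent from the JSON print as null, which is what the [[ null == null ]] comparisons rely on):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    base_bdev_info=$($RPC bdev_get_bdevs -b BaseBdev2 | jq '.[]')
    [[ $(jq -r .block_size    <<< "$base_bdev_info") == 512  ]] || exit 1
    [[ $(jq -r .md_size       <<< "$base_bdev_info") == null ]] || exit 1
    [[ $(jq -r .md_interleave <<< "$base_bdev_info") == null ]] || exit 1
    [[ $(jq -r .dif_type      <<< "$base_bdev_info") == null ]] || exit 1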
07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:43.422 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:43.422 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:43.422 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:43.680 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:43.680 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:43.680 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:43.680 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:30:43.680 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:43.939 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:43.939 "name": "BaseBdev3", 00:30:43.939 "aliases": [ 00:30:43.939 "1cbd40cb-0bef-464e-8bc6-227c612a7741" 00:30:43.939 ], 00:30:43.939 "product_name": "Malloc disk", 00:30:43.939 "block_size": 512, 00:30:43.939 "num_blocks": 65536, 00:30:43.939 "uuid": "1cbd40cb-0bef-464e-8bc6-227c612a7741", 00:30:43.939 "assigned_rate_limits": { 00:30:43.939 "rw_ios_per_sec": 0, 00:30:43.939 "rw_mbytes_per_sec": 0, 00:30:43.939 "r_mbytes_per_sec": 0, 00:30:43.939 "w_mbytes_per_sec": 0 00:30:43.939 }, 00:30:43.939 "claimed": true, 00:30:43.939 "claim_type": "exclusive_write", 00:30:43.939 "zoned": false, 00:30:43.939 "supported_io_types": { 00:30:43.939 "read": true, 00:30:43.939 "write": true, 00:30:43.939 "unmap": true, 00:30:43.939 "write_zeroes": true, 00:30:43.939 "flush": true, 00:30:43.939 "reset": true, 00:30:43.939 "compare": false, 00:30:43.939 "compare_and_write": false, 00:30:43.939 "abort": true, 00:30:43.939 "nvme_admin": false, 00:30:43.939 "nvme_io": false 00:30:43.939 }, 00:30:43.939 "memory_domains": [ 00:30:43.939 { 00:30:43.939 "dma_device_id": "system", 00:30:43.939 "dma_device_type": 1 00:30:43.939 }, 00:30:43.939 { 00:30:43.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:43.939 "dma_device_type": 2 00:30:43.939 } 00:30:43.939 ], 00:30:43.939 "driver_specific": {} 00:30:43.939 }' 00:30:43.939 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:43.939 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:43.939 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:43.939 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:43.939 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:43.939 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:43.939 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:44.198 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:44.198 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:44.198 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:30:44.198 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:44.198 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:44.198 07:41:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:44.457 [2024-07-12 07:41:18.138269] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:44.457 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:30:44.457 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:30:44.457 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:30:44.457 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:30:44.457 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:30:44.457 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:30:44.457 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:44.457 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:44.457 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:44.457 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:44.457 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:44.457 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:44.457 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:44.457 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:44.457 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:44.457 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:44.457 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:44.722 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:44.722 "name": "Existed_Raid", 00:30:44.722 "uuid": "f268996c-8c62-470a-a9c3-6fc39224ef3e", 00:30:44.722 "strip_size_kb": 64, 00:30:44.722 "state": "online", 00:30:44.722 "raid_level": "raid5f", 00:30:44.722 "superblock": true, 00:30:44.722 "num_base_bdevs": 3, 00:30:44.722 "num_base_bdevs_discovered": 2, 00:30:44.722 "num_base_bdevs_operational": 2, 00:30:44.722 "base_bdevs_list": [ 00:30:44.722 { 00:30:44.722 "name": null, 00:30:44.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:44.722 "is_configured": false, 00:30:44.722 "data_offset": 2048, 00:30:44.722 "data_size": 63488 00:30:44.722 }, 00:30:44.722 { 00:30:44.722 "name": "BaseBdev2", 00:30:44.722 "uuid": "ddb21323-c52b-4d13-866d-756b4dedcf2b", 00:30:44.722 "is_configured": true, 00:30:44.722 "data_offset": 2048, 00:30:44.722 "data_size": 63488 00:30:44.722 }, 
00:30:44.722 { 00:30:44.722 "name": "BaseBdev3", 00:30:44.722 "uuid": "1cbd40cb-0bef-464e-8bc6-227c612a7741", 00:30:44.722 "is_configured": true, 00:30:44.722 "data_offset": 2048, 00:30:44.722 "data_size": 63488 00:30:44.722 } 00:30:44.722 ] 00:30:44.722 }' 00:30:44.722 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:44.722 07:41:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.290 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:30:45.290 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:45.290 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:45.290 07:41:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:30:45.549 07:41:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:30:45.549 07:41:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:45.549 07:41:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:30:45.549 [2024-07-12 07:41:19.343377] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:45.549 [2024-07-12 07:41:19.343725] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:45.549 [2024-07-12 07:41:19.364748] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:45.549 07:41:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:30:45.549 07:41:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:45.549 07:41:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:45.549 07:41:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:30:45.809 07:41:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:30:45.809 07:41:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:45.809 07:41:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:30:46.068 [2024-07-12 07:41:19.804862] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:46.068 [2024-07-12 07:41:19.805097] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:30:46.068 07:41:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:30:46.068 07:41:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:46.068 07:41:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:46.068 07:41:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:30:46.327 07:41:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:30:46.328 07:41:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:30:46.328 07:41:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:30:46.328 07:41:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:30:46.328 07:41:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:46.328 07:41:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:46.587 BaseBdev2 00:30:46.587 07:41:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:30:46.587 07:41:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:30:46.587 07:41:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:46.587 07:41:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:30:46.587 07:41:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:46.587 07:41:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:46.587 07:41:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:46.847 07:41:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:46.847 [ 00:30:46.847 { 00:30:46.847 "name": "BaseBdev2", 00:30:46.847 "aliases": [ 00:30:46.847 "65d892ac-9824-4bf5-a030-d0a1994d6e04" 00:30:46.847 ], 00:30:46.847 "product_name": "Malloc disk", 00:30:46.847 "block_size": 512, 00:30:46.847 "num_blocks": 65536, 00:30:46.847 "uuid": "65d892ac-9824-4bf5-a030-d0a1994d6e04", 00:30:46.847 "assigned_rate_limits": { 00:30:46.847 "rw_ios_per_sec": 0, 00:30:46.847 "rw_mbytes_per_sec": 0, 00:30:46.847 "r_mbytes_per_sec": 0, 00:30:46.847 "w_mbytes_per_sec": 0 00:30:46.847 }, 00:30:46.847 "claimed": false, 00:30:46.847 "zoned": false, 00:30:46.847 "supported_io_types": { 00:30:46.847 "read": true, 00:30:46.847 "write": true, 00:30:46.847 "unmap": true, 00:30:46.847 "write_zeroes": true, 00:30:46.847 "flush": true, 00:30:46.847 "reset": true, 00:30:46.847 "compare": false, 00:30:46.847 "compare_and_write": false, 00:30:46.847 "abort": true, 00:30:46.847 "nvme_admin": false, 00:30:46.847 "nvme_io": false 00:30:46.847 }, 00:30:46.847 "memory_domains": [ 00:30:46.847 { 00:30:46.847 "dma_device_id": "system", 00:30:46.847 "dma_device_type": 1 00:30:46.847 }, 00:30:46.847 { 00:30:46.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:46.847 "dma_device_type": 2 00:30:46.847 } 00:30:46.847 ], 00:30:46.847 "driver_specific": {} 00:30:46.847 } 00:30:46.847 ] 00:30:46.847 07:41:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:46.847 07:41:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:30:46.847 07:41:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:46.847 07:41:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:30:47.107 BaseBdev3 00:30:47.107 07:41:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:30:47.107 07:41:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:30:47.107 07:41:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:47.107 07:41:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:30:47.107 07:41:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:47.107 07:41:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:47.107 07:41:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:47.367 07:41:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:47.627 [ 00:30:47.627 { 00:30:47.627 "name": "BaseBdev3", 00:30:47.627 "aliases": [ 00:30:47.627 "81613436-b42b-4ed4-9392-1e30095dc833" 00:30:47.627 ], 00:30:47.627 "product_name": "Malloc disk", 00:30:47.627 "block_size": 512, 00:30:47.627 "num_blocks": 65536, 00:30:47.627 "uuid": "81613436-b42b-4ed4-9392-1e30095dc833", 00:30:47.627 "assigned_rate_limits": { 00:30:47.627 "rw_ios_per_sec": 0, 00:30:47.627 "rw_mbytes_per_sec": 0, 00:30:47.627 "r_mbytes_per_sec": 0, 00:30:47.627 "w_mbytes_per_sec": 0 00:30:47.627 }, 00:30:47.627 "claimed": false, 00:30:47.627 "zoned": false, 00:30:47.627 "supported_io_types": { 00:30:47.627 "read": true, 00:30:47.627 "write": true, 00:30:47.627 "unmap": true, 00:30:47.627 "write_zeroes": true, 00:30:47.627 "flush": true, 00:30:47.627 "reset": true, 00:30:47.627 "compare": false, 00:30:47.627 "compare_and_write": false, 00:30:47.627 "abort": true, 00:30:47.627 "nvme_admin": false, 00:30:47.627 "nvme_io": false 00:30:47.627 }, 00:30:47.627 "memory_domains": [ 00:30:47.627 { 00:30:47.627 "dma_device_id": "system", 00:30:47.627 "dma_device_type": 1 00:30:47.627 }, 00:30:47.627 { 00:30:47.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:47.627 "dma_device_type": 2 00:30:47.627 } 00:30:47.627 ], 00:30:47.627 "driver_specific": {} 00:30:47.627 } 00:30:47.627 ] 00:30:47.627 07:41:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:47.627 07:41:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:30:47.627 07:41:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:47.627 07:41:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:47.627 [2024-07-12 07:41:21.489872] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:47.627 [2024-07-12 07:41:21.490149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:47.627 [2024-07-12 07:41:21.490280] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:47.627 
[2024-07-12 07:41:21.492762] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:47.627 07:41:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:47.627 07:41:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:47.627 07:41:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:47.627 07:41:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:47.627 07:41:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:47.627 07:41:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:47.627 07:41:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:47.627 07:41:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:47.627 07:41:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:47.887 07:41:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:47.887 07:41:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:47.887 07:41:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:47.887 07:41:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:47.887 "name": "Existed_Raid", 00:30:47.887 "uuid": "3d2c6d88-dc4e-47a9-85f4-8029a915a176", 00:30:47.887 "strip_size_kb": 64, 00:30:47.887 "state": "configuring", 00:30:47.887 "raid_level": "raid5f", 00:30:47.887 "superblock": true, 00:30:47.887 "num_base_bdevs": 3, 00:30:47.887 "num_base_bdevs_discovered": 2, 00:30:47.887 "num_base_bdevs_operational": 3, 00:30:47.887 "base_bdevs_list": [ 00:30:47.887 { 00:30:47.887 "name": "BaseBdev1", 00:30:47.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:47.887 "is_configured": false, 00:30:47.887 "data_offset": 0, 00:30:47.887 "data_size": 0 00:30:47.887 }, 00:30:47.887 { 00:30:47.887 "name": "BaseBdev2", 00:30:47.887 "uuid": "65d892ac-9824-4bf5-a030-d0a1994d6e04", 00:30:47.887 "is_configured": true, 00:30:47.887 "data_offset": 2048, 00:30:47.888 "data_size": 63488 00:30:47.888 }, 00:30:47.888 { 00:30:47.888 "name": "BaseBdev3", 00:30:47.888 "uuid": "81613436-b42b-4ed4-9392-1e30095dc833", 00:30:47.888 "is_configured": true, 00:30:47.888 "data_offset": 2048, 00:30:47.888 "data_size": 63488 00:30:47.888 } 00:30:47.888 ] 00:30:47.888 }' 00:30:47.888 07:41:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:47.888 07:41:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:48.457 07:41:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:48.716 [2024-07-12 07:41:22.458020] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:48.716 07:41:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:48.716 
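From here the test exercises bdev_raid_remove_base_bdev on a raid that is still configuring: pulling BaseBdev2 back out should leave the state at configuring and drop num_base_bdevs_discovered from 2 to 1, with the removed slot reported under a null name, as the next JSON dump shows. The same probe as a short sketch (socket path and names from this run; the jq output string is an illustrative condensation of the fields the harness checks):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_raid_remove_base_bdev BaseBdev2
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")
        | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
    # prints "configuring 1/3" for the run captured above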
07:41:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:48.716 07:41:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:48.716 07:41:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:48.716 07:41:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:48.716 07:41:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:48.716 07:41:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:48.716 07:41:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:48.716 07:41:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:48.716 07:41:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:48.716 07:41:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:48.716 07:41:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:48.975 07:41:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:48.975 "name": "Existed_Raid", 00:30:48.975 "uuid": "3d2c6d88-dc4e-47a9-85f4-8029a915a176", 00:30:48.975 "strip_size_kb": 64, 00:30:48.975 "state": "configuring", 00:30:48.975 "raid_level": "raid5f", 00:30:48.975 "superblock": true, 00:30:48.975 "num_base_bdevs": 3, 00:30:48.975 "num_base_bdevs_discovered": 1, 00:30:48.975 "num_base_bdevs_operational": 3, 00:30:48.975 "base_bdevs_list": [ 00:30:48.975 { 00:30:48.975 "name": "BaseBdev1", 00:30:48.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:48.975 "is_configured": false, 00:30:48.975 "data_offset": 0, 00:30:48.975 "data_size": 0 00:30:48.975 }, 00:30:48.975 { 00:30:48.975 "name": null, 00:30:48.975 "uuid": "65d892ac-9824-4bf5-a030-d0a1994d6e04", 00:30:48.975 "is_configured": false, 00:30:48.975 "data_offset": 2048, 00:30:48.975 "data_size": 63488 00:30:48.975 }, 00:30:48.975 { 00:30:48.975 "name": "BaseBdev3", 00:30:48.975 "uuid": "81613436-b42b-4ed4-9392-1e30095dc833", 00:30:48.975 "is_configured": true, 00:30:48.975 "data_offset": 2048, 00:30:48.975 "data_size": 63488 00:30:48.975 } 00:30:48.975 ] 00:30:48.975 }' 00:30:48.975 07:41:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:48.975 07:41:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.541 07:41:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:49.541 07:41:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:49.800 07:41:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:30:49.800 07:41:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:50.059 [2024-07-12 07:41:23.899117] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:30:50.059 BaseBdev1 00:30:50.059 07:41:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:30:50.059 07:41:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:30:50.059 07:41:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:50.059 07:41:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:30:50.059 07:41:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:50.059 07:41:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:50.059 07:41:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:50.318 07:41:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:50.576 [ 00:30:50.576 { 00:30:50.576 "name": "BaseBdev1", 00:30:50.576 "aliases": [ 00:30:50.576 "735df6b4-d0fe-4e6c-a1cd-3238bfac7dbc" 00:30:50.576 ], 00:30:50.577 "product_name": "Malloc disk", 00:30:50.577 "block_size": 512, 00:30:50.577 "num_blocks": 65536, 00:30:50.577 "uuid": "735df6b4-d0fe-4e6c-a1cd-3238bfac7dbc", 00:30:50.577 "assigned_rate_limits": { 00:30:50.577 "rw_ios_per_sec": 0, 00:30:50.577 "rw_mbytes_per_sec": 0, 00:30:50.577 "r_mbytes_per_sec": 0, 00:30:50.577 "w_mbytes_per_sec": 0 00:30:50.577 }, 00:30:50.577 "claimed": true, 00:30:50.577 "claim_type": "exclusive_write", 00:30:50.577 "zoned": false, 00:30:50.577 "supported_io_types": { 00:30:50.577 "read": true, 00:30:50.577 "write": true, 00:30:50.577 "unmap": true, 00:30:50.577 "write_zeroes": true, 00:30:50.577 "flush": true, 00:30:50.577 "reset": true, 00:30:50.577 "compare": false, 00:30:50.577 "compare_and_write": false, 00:30:50.577 "abort": true, 00:30:50.577 "nvme_admin": false, 00:30:50.577 "nvme_io": false 00:30:50.577 }, 00:30:50.577 "memory_domains": [ 00:30:50.577 { 00:30:50.577 "dma_device_id": "system", 00:30:50.577 "dma_device_type": 1 00:30:50.577 }, 00:30:50.577 { 00:30:50.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:50.577 "dma_device_type": 2 00:30:50.577 } 00:30:50.577 ], 00:30:50.577 "driver_specific": {} 00:30:50.577 } 00:30:50.577 ] 00:30:50.577 07:41:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:50.577 07:41:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:50.577 07:41:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:50.577 07:41:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:50.577 07:41:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:50.577 07:41:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:50.577 07:41:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:50.577 07:41:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:50.577 07:41:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:50.577 07:41:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:50.577 07:41:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:50.577 07:41:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:50.577 07:41:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:50.837 07:41:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:50.837 "name": "Existed_Raid", 00:30:50.837 "uuid": "3d2c6d88-dc4e-47a9-85f4-8029a915a176", 00:30:50.837 "strip_size_kb": 64, 00:30:50.837 "state": "configuring", 00:30:50.837 "raid_level": "raid5f", 00:30:50.837 "superblock": true, 00:30:50.837 "num_base_bdevs": 3, 00:30:50.837 "num_base_bdevs_discovered": 2, 00:30:50.837 "num_base_bdevs_operational": 3, 00:30:50.837 "base_bdevs_list": [ 00:30:50.837 { 00:30:50.837 "name": "BaseBdev1", 00:30:50.837 "uuid": "735df6b4-d0fe-4e6c-a1cd-3238bfac7dbc", 00:30:50.837 "is_configured": true, 00:30:50.837 "data_offset": 2048, 00:30:50.837 "data_size": 63488 00:30:50.837 }, 00:30:50.837 { 00:30:50.837 "name": null, 00:30:50.837 "uuid": "65d892ac-9824-4bf5-a030-d0a1994d6e04", 00:30:50.837 "is_configured": false, 00:30:50.837 "data_offset": 2048, 00:30:50.837 "data_size": 63488 00:30:50.837 }, 00:30:50.837 { 00:30:50.837 "name": "BaseBdev3", 00:30:50.837 "uuid": "81613436-b42b-4ed4-9392-1e30095dc833", 00:30:50.837 "is_configured": true, 00:30:50.837 "data_offset": 2048, 00:30:50.837 "data_size": 63488 00:30:50.837 } 00:30:50.837 ] 00:30:50.837 }' 00:30:50.837 07:41:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:50.837 07:41:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:51.406 07:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:51.406 07:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:51.406 07:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:30:51.406 07:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:30:51.666 [2024-07-12 07:41:25.473881] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:51.666 07:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:51.666 07:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:51.666 07:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:51.666 07:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:51.666 07:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:51.666 07:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:51.666 07:41:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:51.666 07:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:51.666 07:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:51.666 07:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:51.666 07:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:51.666 07:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:51.926 07:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:51.926 "name": "Existed_Raid", 00:30:51.926 "uuid": "3d2c6d88-dc4e-47a9-85f4-8029a915a176", 00:30:51.926 "strip_size_kb": 64, 00:30:51.926 "state": "configuring", 00:30:51.926 "raid_level": "raid5f", 00:30:51.926 "superblock": true, 00:30:51.926 "num_base_bdevs": 3, 00:30:51.926 "num_base_bdevs_discovered": 1, 00:30:51.926 "num_base_bdevs_operational": 3, 00:30:51.926 "base_bdevs_list": [ 00:30:51.926 { 00:30:51.926 "name": "BaseBdev1", 00:30:51.926 "uuid": "735df6b4-d0fe-4e6c-a1cd-3238bfac7dbc", 00:30:51.926 "is_configured": true, 00:30:51.926 "data_offset": 2048, 00:30:51.926 "data_size": 63488 00:30:51.926 }, 00:30:51.926 { 00:30:51.926 "name": null, 00:30:51.926 "uuid": "65d892ac-9824-4bf5-a030-d0a1994d6e04", 00:30:51.926 "is_configured": false, 00:30:51.926 "data_offset": 2048, 00:30:51.926 "data_size": 63488 00:30:51.926 }, 00:30:51.926 { 00:30:51.926 "name": null, 00:30:51.926 "uuid": "81613436-b42b-4ed4-9392-1e30095dc833", 00:30:51.926 "is_configured": false, 00:30:51.926 "data_offset": 2048, 00:30:51.926 "data_size": 63488 00:30:51.926 } 00:30:51.926 ] 00:30:51.926 }' 00:30:51.926 07:41:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:51.926 07:41:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:52.496 07:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:52.496 07:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:52.755 07:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:30:52.755 07:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:53.015 [2024-07-12 07:41:26.706091] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:53.015 07:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:53.015 07:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:53.015 07:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:53.015 07:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:53.015 07:41:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:53.015 07:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:53.015 07:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:53.015 07:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:53.016 07:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:53.016 07:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:53.016 07:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:53.016 07:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:53.275 07:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:53.275 "name": "Existed_Raid", 00:30:53.275 "uuid": "3d2c6d88-dc4e-47a9-85f4-8029a915a176", 00:30:53.275 "strip_size_kb": 64, 00:30:53.275 "state": "configuring", 00:30:53.275 "raid_level": "raid5f", 00:30:53.275 "superblock": true, 00:30:53.275 "num_base_bdevs": 3, 00:30:53.275 "num_base_bdevs_discovered": 2, 00:30:53.275 "num_base_bdevs_operational": 3, 00:30:53.275 "base_bdevs_list": [ 00:30:53.275 { 00:30:53.275 "name": "BaseBdev1", 00:30:53.275 "uuid": "735df6b4-d0fe-4e6c-a1cd-3238bfac7dbc", 00:30:53.275 "is_configured": true, 00:30:53.275 "data_offset": 2048, 00:30:53.275 "data_size": 63488 00:30:53.275 }, 00:30:53.275 { 00:30:53.275 "name": null, 00:30:53.275 "uuid": "65d892ac-9824-4bf5-a030-d0a1994d6e04", 00:30:53.275 "is_configured": false, 00:30:53.275 "data_offset": 2048, 00:30:53.275 "data_size": 63488 00:30:53.275 }, 00:30:53.275 { 00:30:53.275 "name": "BaseBdev3", 00:30:53.275 "uuid": "81613436-b42b-4ed4-9392-1e30095dc833", 00:30:53.275 "is_configured": true, 00:30:53.275 "data_offset": 2048, 00:30:53.275 "data_size": 63488 00:30:53.275 } 00:30:53.275 ] 00:30:53.275 }' 00:30:53.275 07:41:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:53.275 07:41:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:53.845 07:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:53.845 07:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:54.105 07:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:30:54.105 07:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:54.105 [2024-07-12 07:41:27.890309] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:54.105 07:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:54.105 07:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:54.105 07:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:54.105 07:41:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:54.105 07:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:54.105 07:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:54.105 07:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:54.105 07:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:54.105 07:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:54.105 07:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:54.105 07:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:54.105 07:41:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:54.365 07:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:54.365 "name": "Existed_Raid", 00:30:54.365 "uuid": "3d2c6d88-dc4e-47a9-85f4-8029a915a176", 00:30:54.365 "strip_size_kb": 64, 00:30:54.365 "state": "configuring", 00:30:54.365 "raid_level": "raid5f", 00:30:54.365 "superblock": true, 00:30:54.365 "num_base_bdevs": 3, 00:30:54.365 "num_base_bdevs_discovered": 1, 00:30:54.365 "num_base_bdevs_operational": 3, 00:30:54.365 "base_bdevs_list": [ 00:30:54.365 { 00:30:54.365 "name": null, 00:30:54.365 "uuid": "735df6b4-d0fe-4e6c-a1cd-3238bfac7dbc", 00:30:54.365 "is_configured": false, 00:30:54.365 "data_offset": 2048, 00:30:54.365 "data_size": 63488 00:30:54.365 }, 00:30:54.365 { 00:30:54.365 "name": null, 00:30:54.365 "uuid": "65d892ac-9824-4bf5-a030-d0a1994d6e04", 00:30:54.365 "is_configured": false, 00:30:54.365 "data_offset": 2048, 00:30:54.365 "data_size": 63488 00:30:54.365 }, 00:30:54.365 { 00:30:54.365 "name": "BaseBdev3", 00:30:54.365 "uuid": "81613436-b42b-4ed4-9392-1e30095dc833", 00:30:54.365 "is_configured": true, 00:30:54.365 "data_offset": 2048, 00:30:54.365 "data_size": 63488 00:30:54.365 } 00:30:54.365 ] 00:30:54.365 }' 00:30:54.365 07:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:54.365 07:41:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.935 07:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:54.935 07:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:55.195 07:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:30:55.195 07:41:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:55.455 [2024-07-12 07:41:29.172936] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:55.455 07:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:55.455 07:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=Existed_Raid 00:30:55.455 07:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:55.455 07:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:55.455 07:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:55.455 07:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:55.455 07:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:55.455 07:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:55.455 07:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:55.455 07:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:55.455 07:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:55.455 07:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:55.715 07:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:55.715 "name": "Existed_Raid", 00:30:55.715 "uuid": "3d2c6d88-dc4e-47a9-85f4-8029a915a176", 00:30:55.715 "strip_size_kb": 64, 00:30:55.715 "state": "configuring", 00:30:55.715 "raid_level": "raid5f", 00:30:55.715 "superblock": true, 00:30:55.715 "num_base_bdevs": 3, 00:30:55.715 "num_base_bdevs_discovered": 2, 00:30:55.715 "num_base_bdevs_operational": 3, 00:30:55.715 "base_bdevs_list": [ 00:30:55.715 { 00:30:55.715 "name": null, 00:30:55.715 "uuid": "735df6b4-d0fe-4e6c-a1cd-3238bfac7dbc", 00:30:55.715 "is_configured": false, 00:30:55.715 "data_offset": 2048, 00:30:55.715 "data_size": 63488 00:30:55.715 }, 00:30:55.715 { 00:30:55.715 "name": "BaseBdev2", 00:30:55.715 "uuid": "65d892ac-9824-4bf5-a030-d0a1994d6e04", 00:30:55.715 "is_configured": true, 00:30:55.715 "data_offset": 2048, 00:30:55.715 "data_size": 63488 00:30:55.715 }, 00:30:55.715 { 00:30:55.715 "name": "BaseBdev3", 00:30:55.715 "uuid": "81613436-b42b-4ed4-9392-1e30095dc833", 00:30:55.715 "is_configured": true, 00:30:55.715 "data_offset": 2048, 00:30:55.715 "data_size": 63488 00:30:55.715 } 00:30:55.715 ] 00:30:55.715 }' 00:30:55.715 07:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:55.715 07:41:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:56.285 07:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:56.285 07:41:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:56.544 07:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:30:56.544 07:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:56.544 07:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:56.804 07:41:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 735df6b4-d0fe-4e6c-a1cd-3238bfac7dbc 00:30:57.062 [2024-07-12 07:41:30.736556] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:57.062 [2024-07-12 07:41:30.736889] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:30:57.062 [2024-07-12 07:41:30.736935] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:30:57.062 [2024-07-12 07:41:30.737094] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:30:57.062 NewBaseBdev 00:30:57.063 [2024-07-12 07:41:30.737699] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:30:57.063 [2024-07-12 07:41:30.737809] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:30:57.063 [2024-07-12 07:41:30.737942] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:57.063 07:41:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:30:57.063 07:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:30:57.063 07:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:57.063 07:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:30:57.063 07:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:57.063 07:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:57.063 07:41:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:57.322 07:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:57.581 [ 00:30:57.581 { 00:30:57.581 "name": "NewBaseBdev", 00:30:57.581 "aliases": [ 00:30:57.581 "735df6b4-d0fe-4e6c-a1cd-3238bfac7dbc" 00:30:57.581 ], 00:30:57.581 "product_name": "Malloc disk", 00:30:57.581 "block_size": 512, 00:30:57.581 "num_blocks": 65536, 00:30:57.581 "uuid": "735df6b4-d0fe-4e6c-a1cd-3238bfac7dbc", 00:30:57.581 "assigned_rate_limits": { 00:30:57.581 "rw_ios_per_sec": 0, 00:30:57.581 "rw_mbytes_per_sec": 0, 00:30:57.581 "r_mbytes_per_sec": 0, 00:30:57.581 "w_mbytes_per_sec": 0 00:30:57.581 }, 00:30:57.581 "claimed": true, 00:30:57.581 "claim_type": "exclusive_write", 00:30:57.581 "zoned": false, 00:30:57.581 "supported_io_types": { 00:30:57.581 "read": true, 00:30:57.581 "write": true, 00:30:57.582 "unmap": true, 00:30:57.582 "write_zeroes": true, 00:30:57.582 "flush": true, 00:30:57.582 "reset": true, 00:30:57.582 "compare": false, 00:30:57.582 "compare_and_write": false, 00:30:57.582 "abort": true, 00:30:57.582 "nvme_admin": false, 00:30:57.582 "nvme_io": false 00:30:57.582 }, 00:30:57.582 "memory_domains": [ 00:30:57.582 { 00:30:57.582 "dma_device_id": "system", 00:30:57.582 "dma_device_type": 1 00:30:57.582 }, 00:30:57.582 { 00:30:57.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:57.582 "dma_device_type": 2 00:30:57.582 } 00:30:57.582 ], 00:30:57.582 "driver_specific": {} 
00:30:57.582 } 00:30:57.582 ] 00:30:57.582 07:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:57.582 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:30:57.582 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:57.582 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:57.582 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:57.582 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:57.582 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:57.582 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:57.582 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:57.582 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:57.582 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:57.582 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:57.582 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:57.582 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:57.582 "name": "Existed_Raid", 00:30:57.582 "uuid": "3d2c6d88-dc4e-47a9-85f4-8029a915a176", 00:30:57.582 "strip_size_kb": 64, 00:30:57.582 "state": "online", 00:30:57.582 "raid_level": "raid5f", 00:30:57.582 "superblock": true, 00:30:57.582 "num_base_bdevs": 3, 00:30:57.582 "num_base_bdevs_discovered": 3, 00:30:57.582 "num_base_bdevs_operational": 3, 00:30:57.582 "base_bdevs_list": [ 00:30:57.582 { 00:30:57.582 "name": "NewBaseBdev", 00:30:57.582 "uuid": "735df6b4-d0fe-4e6c-a1cd-3238bfac7dbc", 00:30:57.582 "is_configured": true, 00:30:57.582 "data_offset": 2048, 00:30:57.582 "data_size": 63488 00:30:57.582 }, 00:30:57.582 { 00:30:57.582 "name": "BaseBdev2", 00:30:57.582 "uuid": "65d892ac-9824-4bf5-a030-d0a1994d6e04", 00:30:57.582 "is_configured": true, 00:30:57.582 "data_offset": 2048, 00:30:57.582 "data_size": 63488 00:30:57.582 }, 00:30:57.582 { 00:30:57.582 "name": "BaseBdev3", 00:30:57.582 "uuid": "81613436-b42b-4ed4-9392-1e30095dc833", 00:30:57.582 "is_configured": true, 00:30:57.582 "data_offset": 2048, 00:30:57.582 "data_size": 63488 00:30:57.582 } 00:30:57.582 ] 00:30:57.582 }' 00:30:57.582 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:57.582 07:41:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:58.150 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:30:58.150 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:30:58.150 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:58.150 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 
-- # local base_bdev_info 00:30:58.150 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:58.150 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:30:58.150 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:30:58.150 07:41:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:58.409 [2024-07-12 07:41:32.076932] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:58.409 07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:58.409 "name": "Existed_Raid", 00:30:58.409 "aliases": [ 00:30:58.410 "3d2c6d88-dc4e-47a9-85f4-8029a915a176" 00:30:58.410 ], 00:30:58.410 "product_name": "Raid Volume", 00:30:58.410 "block_size": 512, 00:30:58.410 "num_blocks": 126976, 00:30:58.410 "uuid": "3d2c6d88-dc4e-47a9-85f4-8029a915a176", 00:30:58.410 "assigned_rate_limits": { 00:30:58.410 "rw_ios_per_sec": 0, 00:30:58.410 "rw_mbytes_per_sec": 0, 00:30:58.410 "r_mbytes_per_sec": 0, 00:30:58.410 "w_mbytes_per_sec": 0 00:30:58.410 }, 00:30:58.410 "claimed": false, 00:30:58.410 "zoned": false, 00:30:58.410 "supported_io_types": { 00:30:58.410 "read": true, 00:30:58.410 "write": true, 00:30:58.410 "unmap": false, 00:30:58.410 "write_zeroes": true, 00:30:58.410 "flush": false, 00:30:58.410 "reset": true, 00:30:58.410 "compare": false, 00:30:58.410 "compare_and_write": false, 00:30:58.410 "abort": false, 00:30:58.410 "nvme_admin": false, 00:30:58.410 "nvme_io": false 00:30:58.410 }, 00:30:58.410 "driver_specific": { 00:30:58.410 "raid": { 00:30:58.410 "uuid": "3d2c6d88-dc4e-47a9-85f4-8029a915a176", 00:30:58.410 "strip_size_kb": 64, 00:30:58.410 "state": "online", 00:30:58.410 "raid_level": "raid5f", 00:30:58.410 "superblock": true, 00:30:58.410 "num_base_bdevs": 3, 00:30:58.410 "num_base_bdevs_discovered": 3, 00:30:58.410 "num_base_bdevs_operational": 3, 00:30:58.410 "base_bdevs_list": [ 00:30:58.410 { 00:30:58.410 "name": "NewBaseBdev", 00:30:58.410 "uuid": "735df6b4-d0fe-4e6c-a1cd-3238bfac7dbc", 00:30:58.410 "is_configured": true, 00:30:58.410 "data_offset": 2048, 00:30:58.410 "data_size": 63488 00:30:58.410 }, 00:30:58.410 { 00:30:58.410 "name": "BaseBdev2", 00:30:58.410 "uuid": "65d892ac-9824-4bf5-a030-d0a1994d6e04", 00:30:58.410 "is_configured": true, 00:30:58.410 "data_offset": 2048, 00:30:58.410 "data_size": 63488 00:30:58.410 }, 00:30:58.410 { 00:30:58.410 "name": "BaseBdev3", 00:30:58.410 "uuid": "81613436-b42b-4ed4-9392-1e30095dc833", 00:30:58.410 "is_configured": true, 00:30:58.410 "data_offset": 2048, 00:30:58.410 "data_size": 63488 00:30:58.410 } 00:30:58.410 ] 00:30:58.410 } 00:30:58.410 } 00:30:58.410 }' 00:30:58.410 07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:58.410 07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:30:58.410 BaseBdev2 00:30:58.410 BaseBdev3' 00:30:58.410 07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:58.410 07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:30:58.410 
07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:58.670 07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:58.670 "name": "NewBaseBdev", 00:30:58.670 "aliases": [ 00:30:58.670 "735df6b4-d0fe-4e6c-a1cd-3238bfac7dbc" 00:30:58.670 ], 00:30:58.670 "product_name": "Malloc disk", 00:30:58.670 "block_size": 512, 00:30:58.670 "num_blocks": 65536, 00:30:58.670 "uuid": "735df6b4-d0fe-4e6c-a1cd-3238bfac7dbc", 00:30:58.670 "assigned_rate_limits": { 00:30:58.670 "rw_ios_per_sec": 0, 00:30:58.670 "rw_mbytes_per_sec": 0, 00:30:58.670 "r_mbytes_per_sec": 0, 00:30:58.670 "w_mbytes_per_sec": 0 00:30:58.670 }, 00:30:58.670 "claimed": true, 00:30:58.670 "claim_type": "exclusive_write", 00:30:58.670 "zoned": false, 00:30:58.670 "supported_io_types": { 00:30:58.670 "read": true, 00:30:58.670 "write": true, 00:30:58.670 "unmap": true, 00:30:58.670 "write_zeroes": true, 00:30:58.670 "flush": true, 00:30:58.670 "reset": true, 00:30:58.670 "compare": false, 00:30:58.670 "compare_and_write": false, 00:30:58.670 "abort": true, 00:30:58.670 "nvme_admin": false, 00:30:58.670 "nvme_io": false 00:30:58.670 }, 00:30:58.670 "memory_domains": [ 00:30:58.670 { 00:30:58.670 "dma_device_id": "system", 00:30:58.670 "dma_device_type": 1 00:30:58.670 }, 00:30:58.670 { 00:30:58.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:58.670 "dma_device_type": 2 00:30:58.670 } 00:30:58.670 ], 00:30:58.670 "driver_specific": {} 00:30:58.670 }' 00:30:58.670 07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:58.670 07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:58.670 07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:58.670 07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:58.670 07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:58.930 07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:58.930 07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:58.930 07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:58.930 07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:58.930 07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:58.930 07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:58.930 07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:58.930 07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:58.930 07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:30:58.930 07:41:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:59.189 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:59.189 "name": "BaseBdev2", 00:30:59.189 "aliases": [ 00:30:59.189 "65d892ac-9824-4bf5-a030-d0a1994d6e04" 00:30:59.189 ], 00:30:59.189 "product_name": "Malloc disk", 00:30:59.189 "block_size": 512, 00:30:59.189 
"num_blocks": 65536, 00:30:59.189 "uuid": "65d892ac-9824-4bf5-a030-d0a1994d6e04", 00:30:59.189 "assigned_rate_limits": { 00:30:59.189 "rw_ios_per_sec": 0, 00:30:59.189 "rw_mbytes_per_sec": 0, 00:30:59.189 "r_mbytes_per_sec": 0, 00:30:59.189 "w_mbytes_per_sec": 0 00:30:59.189 }, 00:30:59.189 "claimed": true, 00:30:59.189 "claim_type": "exclusive_write", 00:30:59.189 "zoned": false, 00:30:59.189 "supported_io_types": { 00:30:59.189 "read": true, 00:30:59.189 "write": true, 00:30:59.189 "unmap": true, 00:30:59.189 "write_zeroes": true, 00:30:59.189 "flush": true, 00:30:59.189 "reset": true, 00:30:59.189 "compare": false, 00:30:59.189 "compare_and_write": false, 00:30:59.189 "abort": true, 00:30:59.189 "nvme_admin": false, 00:30:59.189 "nvme_io": false 00:30:59.189 }, 00:30:59.189 "memory_domains": [ 00:30:59.189 { 00:30:59.189 "dma_device_id": "system", 00:30:59.189 "dma_device_type": 1 00:30:59.189 }, 00:30:59.189 { 00:30:59.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:59.189 "dma_device_type": 2 00:30:59.189 } 00:30:59.189 ], 00:30:59.189 "driver_specific": {} 00:30:59.189 }' 00:30:59.189 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:59.189 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:59.447 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:59.447 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:59.447 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:59.447 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:59.447 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:59.447 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:59.447 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:59.447 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:59.705 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:59.705 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:59.705 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:59.705 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:30:59.705 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:59.964 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:59.964 "name": "BaseBdev3", 00:30:59.964 "aliases": [ 00:30:59.964 "81613436-b42b-4ed4-9392-1e30095dc833" 00:30:59.964 ], 00:30:59.964 "product_name": "Malloc disk", 00:30:59.964 "block_size": 512, 00:30:59.964 "num_blocks": 65536, 00:30:59.964 "uuid": "81613436-b42b-4ed4-9392-1e30095dc833", 00:30:59.964 "assigned_rate_limits": { 00:30:59.964 "rw_ios_per_sec": 0, 00:30:59.964 "rw_mbytes_per_sec": 0, 00:30:59.964 "r_mbytes_per_sec": 0, 00:30:59.964 "w_mbytes_per_sec": 0 00:30:59.964 }, 00:30:59.964 "claimed": true, 00:30:59.964 "claim_type": "exclusive_write", 00:30:59.964 "zoned": false, 00:30:59.964 "supported_io_types": { 
00:30:59.964 "read": true, 00:30:59.964 "write": true, 00:30:59.964 "unmap": true, 00:30:59.964 "write_zeroes": true, 00:30:59.964 "flush": true, 00:30:59.964 "reset": true, 00:30:59.964 "compare": false, 00:30:59.964 "compare_and_write": false, 00:30:59.964 "abort": true, 00:30:59.964 "nvme_admin": false, 00:30:59.964 "nvme_io": false 00:30:59.964 }, 00:30:59.964 "memory_domains": [ 00:30:59.964 { 00:30:59.964 "dma_device_id": "system", 00:30:59.964 "dma_device_type": 1 00:30:59.964 }, 00:30:59.964 { 00:30:59.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:59.964 "dma_device_type": 2 00:30:59.964 } 00:30:59.964 ], 00:30:59.964 "driver_specific": {} 00:30:59.964 }' 00:30:59.964 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:59.964 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:59.964 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:59.964 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:59.964 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:59.964 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:59.964 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:59.964 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:00.223 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:00.223 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:00.223 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:00.223 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:00.223 07:41:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:00.482 [2024-07-12 07:41:34.226973] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:00.482 [2024-07-12 07:41:34.227151] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:00.482 [2024-07-12 07:41:34.227302] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:00.482 [2024-07-12 07:41:34.227646] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:00.482 [2024-07-12 07:41:34.227737] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:31:00.482 07:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 160400 00:31:00.482 07:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 160400 ']' 00:31:00.482 07:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 160400 00:31:00.482 07:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:31:00.482 07:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:00.482 07:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 160400 
00:31:00.482 07:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:00.482 07:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:00.482 07:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 160400' 00:31:00.482 killing process with pid 160400 00:31:00.482 07:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 160400 00:31:00.482 [2024-07-12 07:41:34.276770] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:00.482 07:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 160400 00:31:00.482 [2024-07-12 07:41:34.306170] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:00.740 07:41:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:31:00.740 00:31:00.740 real 0m25.451s 00:31:00.740 user 0m47.226s 00:31:00.740 sys 0m4.532s 00:31:00.741 07:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:00.741 07:41:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:00.741 ************************************ 00:31:00.741 END TEST raid5f_state_function_test_sb 00:31:00.741 ************************************ 00:31:00.741 07:41:34 bdev_raid -- bdev/bdev_raid.sh@888 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:31:00.741 07:41:34 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:31:00.741 07:41:34 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:00.741 07:41:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:01.000 ************************************ 00:31:01.000 START TEST raid5f_superblock_test 00:31:01.000 ************************************ 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid5f 3 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid5f 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@403 
-- # '[' raid5f '!=' raid1 ']' 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=161327 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 161327 /var/tmp/spdk-raid.sock 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 161327 ']' 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:01.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:01.000 07:41:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.000 [2024-07-12 07:41:34.690811] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:01.000 [2024-07-12 07:41:34.691161] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161327 ] 00:31:01.000 [2024-07-12 07:41:34.839271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.260 [2024-07-12 07:41:34.922462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.260 [2024-07-12 07:41:35.011607] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:01.829 07:41:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:01.829 07:41:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:31:01.829 07:41:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:31:01.829 07:41:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:01.829 07:41:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:31:01.829 07:41:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:31:01.829 07:41:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:01.829 07:41:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:01.829 07:41:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:31:01.829 07:41:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:01.829 07:41:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 
512 -b malloc1 00:31:02.088 malloc1 00:31:02.088 07:41:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:02.347 [2024-07-12 07:41:36.028359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:02.347 [2024-07-12 07:41:36.028650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:02.347 [2024-07-12 07:41:36.028738] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:31:02.347 [2024-07-12 07:41:36.028868] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:02.347 [2024-07-12 07:41:36.031931] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:02.347 [2024-07-12 07:41:36.032099] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:02.347 pt1 00:31:02.347 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:31:02.347 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:02.347 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:31:02.347 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:31:02.347 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:02.347 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:02.347 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:31:02.347 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:02.347 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:31:02.347 malloc2 00:31:02.605 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:02.605 [2024-07-12 07:41:36.464218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:02.605 [2024-07-12 07:41:36.464568] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:02.605 [2024-07-12 07:41:36.464680] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:31:02.605 [2024-07-12 07:41:36.465057] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:02.605 [2024-07-12 07:41:36.469117] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:02.605 [2024-07-12 07:41:36.469368] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:02.605 pt2 00:31:02.605 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:31:02.605 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:02.605 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:31:02.605 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:31:02.863 
07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:31:02.863 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:02.863 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:31:02.863 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:02.863 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:31:02.863 malloc3 00:31:02.863 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:03.121 [2024-07-12 07:41:36.870582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:03.121 [2024-07-12 07:41:36.870830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:03.121 [2024-07-12 07:41:36.870931] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:03.121 [2024-07-12 07:41:36.871049] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:03.121 [2024-07-12 07:41:36.873794] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:03.121 [2024-07-12 07:41:36.873946] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:03.121 pt3 00:31:03.121 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:31:03.121 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:03.121 07:41:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:31:03.379 [2024-07-12 07:41:37.050822] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:03.379 [2024-07-12 07:41:37.053409] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:03.379 [2024-07-12 07:41:37.053576] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:03.379 [2024-07-12 07:41:37.053913] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:31:03.379 [2024-07-12 07:41:37.054011] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:03.379 [2024-07-12 07:41:37.054266] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:31:03.379 [2024-07-12 07:41:37.055138] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:31:03.379 [2024-07-12 07:41:37.055242] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:31:03.379 [2024-07-12 07:41:37.055521] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:03.379 07:41:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:03.379 07:41:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:03.379 07:41:37 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:03.379 07:41:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:03.379 07:41:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:03.379 07:41:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:03.379 07:41:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:03.379 07:41:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:03.379 07:41:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:03.379 07:41:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:03.379 07:41:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:03.379 07:41:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:03.638 07:41:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:03.638 "name": "raid_bdev1", 00:31:03.638 "uuid": "3bad6adb-5eed-4c02-9b6d-f39f321a95a7", 00:31:03.638 "strip_size_kb": 64, 00:31:03.638 "state": "online", 00:31:03.638 "raid_level": "raid5f", 00:31:03.638 "superblock": true, 00:31:03.638 "num_base_bdevs": 3, 00:31:03.638 "num_base_bdevs_discovered": 3, 00:31:03.638 "num_base_bdevs_operational": 3, 00:31:03.638 "base_bdevs_list": [ 00:31:03.638 { 00:31:03.638 "name": "pt1", 00:31:03.638 "uuid": "f24a9094-bbb1-5f4b-b804-c209fd78feff", 00:31:03.638 "is_configured": true, 00:31:03.638 "data_offset": 2048, 00:31:03.638 "data_size": 63488 00:31:03.638 }, 00:31:03.638 { 00:31:03.638 "name": "pt2", 00:31:03.638 "uuid": "772f30e1-5166-59a2-907e-02b6075f0e93", 00:31:03.638 "is_configured": true, 00:31:03.638 "data_offset": 2048, 00:31:03.638 "data_size": 63488 00:31:03.638 }, 00:31:03.638 { 00:31:03.638 "name": "pt3", 00:31:03.638 "uuid": "ae8e46ae-6a4c-5219-a154-99ddbeb72beb", 00:31:03.638 "is_configured": true, 00:31:03.638 "data_offset": 2048, 00:31:03.638 "data_size": 63488 00:31:03.638 } 00:31:03.638 ] 00:31:03.638 }' 00:31:03.638 07:41:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:03.638 07:41:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.205 07:41:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:31:04.205 07:41:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:31:04.205 07:41:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:04.205 07:41:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:04.205 07:41:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:04.205 07:41:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:31:04.205 07:41:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:04.205 07:41:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:04.205 [2024-07-12 07:41:38.007814] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:31:04.205 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:04.205 "name": "raid_bdev1", 00:31:04.205 "aliases": [ 00:31:04.205 "3bad6adb-5eed-4c02-9b6d-f39f321a95a7" 00:31:04.205 ], 00:31:04.205 "product_name": "Raid Volume", 00:31:04.205 "block_size": 512, 00:31:04.205 "num_blocks": 126976, 00:31:04.205 "uuid": "3bad6adb-5eed-4c02-9b6d-f39f321a95a7", 00:31:04.205 "assigned_rate_limits": { 00:31:04.205 "rw_ios_per_sec": 0, 00:31:04.205 "rw_mbytes_per_sec": 0, 00:31:04.205 "r_mbytes_per_sec": 0, 00:31:04.205 "w_mbytes_per_sec": 0 00:31:04.205 }, 00:31:04.205 "claimed": false, 00:31:04.205 "zoned": false, 00:31:04.205 "supported_io_types": { 00:31:04.205 "read": true, 00:31:04.205 "write": true, 00:31:04.205 "unmap": false, 00:31:04.205 "write_zeroes": true, 00:31:04.205 "flush": false, 00:31:04.205 "reset": true, 00:31:04.205 "compare": false, 00:31:04.205 "compare_and_write": false, 00:31:04.205 "abort": false, 00:31:04.205 "nvme_admin": false, 00:31:04.205 "nvme_io": false 00:31:04.205 }, 00:31:04.205 "driver_specific": { 00:31:04.205 "raid": { 00:31:04.205 "uuid": "3bad6adb-5eed-4c02-9b6d-f39f321a95a7", 00:31:04.205 "strip_size_kb": 64, 00:31:04.205 "state": "online", 00:31:04.205 "raid_level": "raid5f", 00:31:04.205 "superblock": true, 00:31:04.205 "num_base_bdevs": 3, 00:31:04.205 "num_base_bdevs_discovered": 3, 00:31:04.205 "num_base_bdevs_operational": 3, 00:31:04.205 "base_bdevs_list": [ 00:31:04.205 { 00:31:04.205 "name": "pt1", 00:31:04.205 "uuid": "f24a9094-bbb1-5f4b-b804-c209fd78feff", 00:31:04.205 "is_configured": true, 00:31:04.205 "data_offset": 2048, 00:31:04.205 "data_size": 63488 00:31:04.205 }, 00:31:04.205 { 00:31:04.205 "name": "pt2", 00:31:04.205 "uuid": "772f30e1-5166-59a2-907e-02b6075f0e93", 00:31:04.205 "is_configured": true, 00:31:04.205 "data_offset": 2048, 00:31:04.205 "data_size": 63488 00:31:04.205 }, 00:31:04.205 { 00:31:04.205 "name": "pt3", 00:31:04.205 "uuid": "ae8e46ae-6a4c-5219-a154-99ddbeb72beb", 00:31:04.205 "is_configured": true, 00:31:04.205 "data_offset": 2048, 00:31:04.205 "data_size": 63488 00:31:04.205 } 00:31:04.205 ] 00:31:04.205 } 00:31:04.205 } 00:31:04.205 }' 00:31:04.205 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:04.205 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:31:04.205 pt2 00:31:04.205 pt3' 00:31:04.205 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:04.205 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:04.205 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:04.463 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:04.464 "name": "pt1", 00:31:04.464 "aliases": [ 00:31:04.464 "f24a9094-bbb1-5f4b-b804-c209fd78feff" 00:31:04.464 ], 00:31:04.464 "product_name": "passthru", 00:31:04.464 "block_size": 512, 00:31:04.464 "num_blocks": 65536, 00:31:04.464 "uuid": "f24a9094-bbb1-5f4b-b804-c209fd78feff", 00:31:04.464 "assigned_rate_limits": { 00:31:04.464 "rw_ios_per_sec": 0, 00:31:04.464 "rw_mbytes_per_sec": 0, 00:31:04.464 "r_mbytes_per_sec": 0, 00:31:04.464 "w_mbytes_per_sec": 0 00:31:04.464 }, 00:31:04.464 "claimed": true, 00:31:04.464 
"claim_type": "exclusive_write", 00:31:04.464 "zoned": false, 00:31:04.464 "supported_io_types": { 00:31:04.464 "read": true, 00:31:04.464 "write": true, 00:31:04.464 "unmap": true, 00:31:04.464 "write_zeroes": true, 00:31:04.464 "flush": true, 00:31:04.464 "reset": true, 00:31:04.464 "compare": false, 00:31:04.464 "compare_and_write": false, 00:31:04.464 "abort": true, 00:31:04.464 "nvme_admin": false, 00:31:04.464 "nvme_io": false 00:31:04.464 }, 00:31:04.464 "memory_domains": [ 00:31:04.464 { 00:31:04.464 "dma_device_id": "system", 00:31:04.464 "dma_device_type": 1 00:31:04.464 }, 00:31:04.464 { 00:31:04.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:04.464 "dma_device_type": 2 00:31:04.464 } 00:31:04.464 ], 00:31:04.464 "driver_specific": { 00:31:04.464 "passthru": { 00:31:04.464 "name": "pt1", 00:31:04.464 "base_bdev_name": "malloc1" 00:31:04.464 } 00:31:04.464 } 00:31:04.464 }' 00:31:04.464 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:04.722 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:04.722 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:04.722 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:04.722 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:04.722 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:04.722 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:04.722 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:04.722 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:04.722 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:04.981 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:04.981 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:04.981 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:04.981 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:04.981 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:05.240 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:05.240 "name": "pt2", 00:31:05.240 "aliases": [ 00:31:05.240 "772f30e1-5166-59a2-907e-02b6075f0e93" 00:31:05.240 ], 00:31:05.240 "product_name": "passthru", 00:31:05.240 "block_size": 512, 00:31:05.240 "num_blocks": 65536, 00:31:05.240 "uuid": "772f30e1-5166-59a2-907e-02b6075f0e93", 00:31:05.240 "assigned_rate_limits": { 00:31:05.240 "rw_ios_per_sec": 0, 00:31:05.240 "rw_mbytes_per_sec": 0, 00:31:05.240 "r_mbytes_per_sec": 0, 00:31:05.240 "w_mbytes_per_sec": 0 00:31:05.240 }, 00:31:05.240 "claimed": true, 00:31:05.240 "claim_type": "exclusive_write", 00:31:05.240 "zoned": false, 00:31:05.240 "supported_io_types": { 00:31:05.240 "read": true, 00:31:05.240 "write": true, 00:31:05.240 "unmap": true, 00:31:05.240 "write_zeroes": true, 00:31:05.240 "flush": true, 00:31:05.240 "reset": true, 00:31:05.240 "compare": false, 00:31:05.240 "compare_and_write": false, 00:31:05.240 "abort": true, 00:31:05.240 "nvme_admin": false, 00:31:05.240 
"nvme_io": false 00:31:05.240 }, 00:31:05.240 "memory_domains": [ 00:31:05.240 { 00:31:05.240 "dma_device_id": "system", 00:31:05.240 "dma_device_type": 1 00:31:05.240 }, 00:31:05.240 { 00:31:05.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:05.240 "dma_device_type": 2 00:31:05.240 } 00:31:05.240 ], 00:31:05.240 "driver_specific": { 00:31:05.240 "passthru": { 00:31:05.240 "name": "pt2", 00:31:05.240 "base_bdev_name": "malloc2" 00:31:05.240 } 00:31:05.240 } 00:31:05.240 }' 00:31:05.241 07:41:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:05.241 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:05.241 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:05.241 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:05.241 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:05.241 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:05.241 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:05.508 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:05.508 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:05.508 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:05.508 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:05.508 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:05.508 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:05.508 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:31:05.508 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:05.838 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:05.838 "name": "pt3", 00:31:05.838 "aliases": [ 00:31:05.838 "ae8e46ae-6a4c-5219-a154-99ddbeb72beb" 00:31:05.838 ], 00:31:05.838 "product_name": "passthru", 00:31:05.838 "block_size": 512, 00:31:05.838 "num_blocks": 65536, 00:31:05.838 "uuid": "ae8e46ae-6a4c-5219-a154-99ddbeb72beb", 00:31:05.838 "assigned_rate_limits": { 00:31:05.838 "rw_ios_per_sec": 0, 00:31:05.838 "rw_mbytes_per_sec": 0, 00:31:05.838 "r_mbytes_per_sec": 0, 00:31:05.838 "w_mbytes_per_sec": 0 00:31:05.838 }, 00:31:05.838 "claimed": true, 00:31:05.838 "claim_type": "exclusive_write", 00:31:05.838 "zoned": false, 00:31:05.838 "supported_io_types": { 00:31:05.838 "read": true, 00:31:05.838 "write": true, 00:31:05.838 "unmap": true, 00:31:05.838 "write_zeroes": true, 00:31:05.838 "flush": true, 00:31:05.838 "reset": true, 00:31:05.838 "compare": false, 00:31:05.838 "compare_and_write": false, 00:31:05.838 "abort": true, 00:31:05.838 "nvme_admin": false, 00:31:05.838 "nvme_io": false 00:31:05.838 }, 00:31:05.838 "memory_domains": [ 00:31:05.838 { 00:31:05.838 "dma_device_id": "system", 00:31:05.838 "dma_device_type": 1 00:31:05.838 }, 00:31:05.838 { 00:31:05.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:05.838 "dma_device_type": 2 00:31:05.838 } 00:31:05.838 ], 00:31:05.838 "driver_specific": { 00:31:05.838 "passthru": { 00:31:05.838 "name": "pt3", 00:31:05.838 
"base_bdev_name": "malloc3" 00:31:05.838 } 00:31:05.838 } 00:31:05.838 }' 00:31:05.838 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:05.838 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:05.838 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:05.838 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:05.838 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:05.838 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:05.838 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:06.098 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:06.098 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:06.098 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:06.098 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:06.098 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:06.098 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:06.098 07:41:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:31:06.357 [2024-07-12 07:41:40.116098] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:06.357 07:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=3bad6adb-5eed-4c02-9b6d-f39f321a95a7 00:31:06.357 07:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 3bad6adb-5eed-4c02-9b6d-f39f321a95a7 ']' 00:31:06.357 07:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:06.615 [2024-07-12 07:41:40.340052] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:06.615 [2024-07-12 07:41:40.340190] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:06.615 [2024-07-12 07:41:40.340356] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:06.615 [2024-07-12 07:41:40.340524] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:06.615 [2024-07-12 07:41:40.340620] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:31:06.615 07:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:06.615 07:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:31:06.874 07:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:31:06.874 07:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:31:06.874 07:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:31:06.874 07:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:07.133 07:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:31:07.133 07:41:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:07.390 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:31:07.390 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:31:07.390 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:31:07.390 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:07.648 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:31:07.648 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:31:07.648 07:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:31:07.648 07:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:31:07.648 07:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:07.648 07:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:07.648 07:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:07.648 07:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:07.648 07:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:07.648 07:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:07.648 07:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:07.648 07:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:07.648 07:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:31:07.907 [2024-07-12 07:41:41.539151] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:07.907 [2024-07-12 07:41:41.541190] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:07.907 [2024-07-12 07:41:41.541376] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:31:07.907 [2024-07-12 07:41:41.541462] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:31:07.907 [2024-07-12 
07:41:41.541630] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:31:07.907 [2024-07-12 07:41:41.541695] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:31:07.907 [2024-07-12 07:41:41.541826] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:07.907 [2024-07-12 07:41:41.541863] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:31:07.907 request: 00:31:07.907 { 00:31:07.907 "name": "raid_bdev1", 00:31:07.907 "raid_level": "raid5f", 00:31:07.907 "base_bdevs": [ 00:31:07.907 "malloc1", 00:31:07.907 "malloc2", 00:31:07.907 "malloc3" 00:31:07.907 ], 00:31:07.907 "superblock": false, 00:31:07.907 "strip_size_kb": 64, 00:31:07.907 "method": "bdev_raid_create", 00:31:07.907 "req_id": 1 00:31:07.907 } 00:31:07.907 Got JSON-RPC error response 00:31:07.907 response: 00:31:07.907 { 00:31:07.907 "code": -17, 00:31:07.907 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:07.907 } 00:31:07.907 07:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:31:07.907 07:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:07.907 07:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:07.907 07:41:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:07.907 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:07.907 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:31:08.165 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:31:08.165 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:31:08.165 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:08.165 [2024-07-12 07:41:41.971156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:08.165 [2024-07-12 07:41:41.971412] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:08.165 [2024-07-12 07:41:41.971491] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:31:08.165 [2024-07-12 07:41:41.971603] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:08.165 [2024-07-12 07:41:41.973870] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:08.165 [2024-07-12 07:41:41.974057] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:08.165 [2024-07-12 07:41:41.974250] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:08.165 [2024-07-12 07:41:41.974403] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:08.165 pt1 00:31:08.165 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:31:08.165 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:08.165 07:41:41 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:08.166 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:08.166 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:08.166 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:08.166 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:08.166 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:08.166 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:08.166 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:08.166 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:08.166 07:41:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:08.425 07:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:08.425 "name": "raid_bdev1", 00:31:08.425 "uuid": "3bad6adb-5eed-4c02-9b6d-f39f321a95a7", 00:31:08.425 "strip_size_kb": 64, 00:31:08.425 "state": "configuring", 00:31:08.425 "raid_level": "raid5f", 00:31:08.425 "superblock": true, 00:31:08.425 "num_base_bdevs": 3, 00:31:08.425 "num_base_bdevs_discovered": 1, 00:31:08.425 "num_base_bdevs_operational": 3, 00:31:08.425 "base_bdevs_list": [ 00:31:08.425 { 00:31:08.425 "name": "pt1", 00:31:08.425 "uuid": "f24a9094-bbb1-5f4b-b804-c209fd78feff", 00:31:08.425 "is_configured": true, 00:31:08.425 "data_offset": 2048, 00:31:08.425 "data_size": 63488 00:31:08.425 }, 00:31:08.425 { 00:31:08.425 "name": null, 00:31:08.425 "uuid": "772f30e1-5166-59a2-907e-02b6075f0e93", 00:31:08.425 "is_configured": false, 00:31:08.425 "data_offset": 2048, 00:31:08.425 "data_size": 63488 00:31:08.425 }, 00:31:08.425 { 00:31:08.425 "name": null, 00:31:08.425 "uuid": "ae8e46ae-6a4c-5219-a154-99ddbeb72beb", 00:31:08.425 "is_configured": false, 00:31:08.425 "data_offset": 2048, 00:31:08.425 "data_size": 63488 00:31:08.425 } 00:31:08.425 ] 00:31:08.425 }' 00:31:08.425 07:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:08.425 07:41:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:08.994 07:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:31:08.994 07:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:09.254 [2024-07-12 07:41:42.923097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:09.254 [2024-07-12 07:41:42.923395] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:09.254 [2024-07-12 07:41:42.923491] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:31:09.254 [2024-07-12 07:41:42.923604] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:09.254 [2024-07-12 07:41:42.924034] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:09.254 [2024-07-12 07:41:42.924182] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:09.254 [2024-07-12 07:41:42.924365] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:09.254 [2024-07-12 07:41:42.924488] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:09.254 pt2 00:31:09.254 07:41:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:09.512 [2024-07-12 07:41:43.143130] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:31:09.512 07:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:31:09.512 07:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:09.512 07:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:09.512 07:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:09.512 07:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:09.512 07:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:09.512 07:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:09.512 07:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:09.512 07:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:09.512 07:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:09.512 07:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:09.512 07:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:09.771 07:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:09.771 "name": "raid_bdev1", 00:31:09.771 "uuid": "3bad6adb-5eed-4c02-9b6d-f39f321a95a7", 00:31:09.771 "strip_size_kb": 64, 00:31:09.771 "state": "configuring", 00:31:09.771 "raid_level": "raid5f", 00:31:09.771 "superblock": true, 00:31:09.771 "num_base_bdevs": 3, 00:31:09.771 "num_base_bdevs_discovered": 1, 00:31:09.771 "num_base_bdevs_operational": 3, 00:31:09.771 "base_bdevs_list": [ 00:31:09.771 { 00:31:09.771 "name": "pt1", 00:31:09.771 "uuid": "f24a9094-bbb1-5f4b-b804-c209fd78feff", 00:31:09.771 "is_configured": true, 00:31:09.771 "data_offset": 2048, 00:31:09.771 "data_size": 63488 00:31:09.771 }, 00:31:09.771 { 00:31:09.771 "name": null, 00:31:09.771 "uuid": "772f30e1-5166-59a2-907e-02b6075f0e93", 00:31:09.771 "is_configured": false, 00:31:09.771 "data_offset": 2048, 00:31:09.771 "data_size": 63488 00:31:09.771 }, 00:31:09.771 { 00:31:09.771 "name": null, 00:31:09.771 "uuid": "ae8e46ae-6a4c-5219-a154-99ddbeb72beb", 00:31:09.771 "is_configured": false, 00:31:09.771 "data_offset": 2048, 00:31:09.771 "data_size": 63488 00:31:09.771 } 00:31:09.771 ] 00:31:09.771 }' 00:31:09.771 07:41:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:09.771 07:41:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.338 07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 
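The loop entered here rebuilds the passthru base bdevs that back the raid set. Distilled from the trace, the sequence amounts to the following sketch (the malloc base bdevs already exist at this point, and the UUIDs are the fixed values the test passes in):

  # recreate pt2 on top of the existing malloc2 bdev
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # recreate pt3 on top of malloc3
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003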
00:31:10.338 07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:31:10.338 07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:10.596 [2024-07-12 07:41:44.275318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:10.597 [2024-07-12 07:41:44.275636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:10.597 [2024-07-12 07:41:44.275706] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:31:10.597 [2024-07-12 07:41:44.275827] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:10.597 [2024-07-12 07:41:44.276271] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:10.597 [2024-07-12 07:41:44.276428] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:10.597 [2024-07-12 07:41:44.276610] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:10.597 [2024-07-12 07:41:44.276664] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:10.597 pt2 00:31:10.597 07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:31:10.597 07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:31:10.597 07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:10.597 [2024-07-12 07:41:44.443327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:10.597 [2024-07-12 07:41:44.443568] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:10.597 [2024-07-12 07:41:44.443632] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:31:10.597 [2024-07-12 07:41:44.443735] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:10.597 [2024-07-12 07:41:44.444161] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:10.597 [2024-07-12 07:41:44.444348] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:10.597 [2024-07-12 07:41:44.444535] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:10.597 [2024-07-12 07:41:44.444588] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:10.597 [2024-07-12 07:41:44.444817] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:31:10.597 [2024-07-12 07:41:44.444908] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:10.597 [2024-07-12 07:41:44.445056] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:31:10.597 [2024-07-12 07:41:44.445663] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:31:10.597 [2024-07-12 07:41:44.445776] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:31:10.597 [2024-07-12 07:41:44.445970] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:10.597 pt3 00:31:10.597 
07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:31:10.597 07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:31:10.597 07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:10.597 07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:10.597 07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:10.597 07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:10.597 07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:10.597 07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:10.597 07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:10.597 07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:10.597 07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:10.597 07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:10.597 07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:10.597 07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:10.856 07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:10.856 "name": "raid_bdev1", 00:31:10.856 "uuid": "3bad6adb-5eed-4c02-9b6d-f39f321a95a7", 00:31:10.856 "strip_size_kb": 64, 00:31:10.856 "state": "online", 00:31:10.856 "raid_level": "raid5f", 00:31:10.856 "superblock": true, 00:31:10.856 "num_base_bdevs": 3, 00:31:10.856 "num_base_bdevs_discovered": 3, 00:31:10.856 "num_base_bdevs_operational": 3, 00:31:10.856 "base_bdevs_list": [ 00:31:10.856 { 00:31:10.856 "name": "pt1", 00:31:10.856 "uuid": "f24a9094-bbb1-5f4b-b804-c209fd78feff", 00:31:10.856 "is_configured": true, 00:31:10.856 "data_offset": 2048, 00:31:10.856 "data_size": 63488 00:31:10.856 }, 00:31:10.856 { 00:31:10.856 "name": "pt2", 00:31:10.856 "uuid": "772f30e1-5166-59a2-907e-02b6075f0e93", 00:31:10.856 "is_configured": true, 00:31:10.856 "data_offset": 2048, 00:31:10.856 "data_size": 63488 00:31:10.856 }, 00:31:10.856 { 00:31:10.856 "name": "pt3", 00:31:10.856 "uuid": "ae8e46ae-6a4c-5219-a154-99ddbeb72beb", 00:31:10.856 "is_configured": true, 00:31:10.856 "data_offset": 2048, 00:31:10.856 "data_size": 63488 00:31:10.856 } 00:31:10.856 ] 00:31:10.856 }' 00:31:10.856 07:41:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:10.856 07:41:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.423 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:31:11.423 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:31:11.423 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:11.423 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:11.423 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # 
local base_bdev_names 00:31:11.423 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:31:11.423 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:11.423 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:11.682 [2024-07-12 07:41:45.379636] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:11.682 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:11.682 "name": "raid_bdev1", 00:31:11.682 "aliases": [ 00:31:11.682 "3bad6adb-5eed-4c02-9b6d-f39f321a95a7" 00:31:11.682 ], 00:31:11.682 "product_name": "Raid Volume", 00:31:11.682 "block_size": 512, 00:31:11.682 "num_blocks": 126976, 00:31:11.682 "uuid": "3bad6adb-5eed-4c02-9b6d-f39f321a95a7", 00:31:11.682 "assigned_rate_limits": { 00:31:11.682 "rw_ios_per_sec": 0, 00:31:11.682 "rw_mbytes_per_sec": 0, 00:31:11.682 "r_mbytes_per_sec": 0, 00:31:11.682 "w_mbytes_per_sec": 0 00:31:11.682 }, 00:31:11.682 "claimed": false, 00:31:11.682 "zoned": false, 00:31:11.682 "supported_io_types": { 00:31:11.682 "read": true, 00:31:11.682 "write": true, 00:31:11.682 "unmap": false, 00:31:11.682 "write_zeroes": true, 00:31:11.682 "flush": false, 00:31:11.682 "reset": true, 00:31:11.682 "compare": false, 00:31:11.682 "compare_and_write": false, 00:31:11.682 "abort": false, 00:31:11.682 "nvme_admin": false, 00:31:11.682 "nvme_io": false 00:31:11.682 }, 00:31:11.682 "driver_specific": { 00:31:11.682 "raid": { 00:31:11.682 "uuid": "3bad6adb-5eed-4c02-9b6d-f39f321a95a7", 00:31:11.682 "strip_size_kb": 64, 00:31:11.682 "state": "online", 00:31:11.682 "raid_level": "raid5f", 00:31:11.682 "superblock": true, 00:31:11.682 "num_base_bdevs": 3, 00:31:11.682 "num_base_bdevs_discovered": 3, 00:31:11.682 "num_base_bdevs_operational": 3, 00:31:11.682 "base_bdevs_list": [ 00:31:11.682 { 00:31:11.682 "name": "pt1", 00:31:11.682 "uuid": "f24a9094-bbb1-5f4b-b804-c209fd78feff", 00:31:11.682 "is_configured": true, 00:31:11.682 "data_offset": 2048, 00:31:11.682 "data_size": 63488 00:31:11.682 }, 00:31:11.682 { 00:31:11.682 "name": "pt2", 00:31:11.682 "uuid": "772f30e1-5166-59a2-907e-02b6075f0e93", 00:31:11.682 "is_configured": true, 00:31:11.682 "data_offset": 2048, 00:31:11.682 "data_size": 63488 00:31:11.682 }, 00:31:11.682 { 00:31:11.682 "name": "pt3", 00:31:11.682 "uuid": "ae8e46ae-6a4c-5219-a154-99ddbeb72beb", 00:31:11.682 "is_configured": true, 00:31:11.682 "data_offset": 2048, 00:31:11.682 "data_size": 63488 00:31:11.682 } 00:31:11.682 ] 00:31:11.682 } 00:31:11.682 } 00:31:11.682 }' 00:31:11.682 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:11.682 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:31:11.682 pt2 00:31:11.682 pt3' 00:31:11.682 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:11.683 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:11.683 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:11.941 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:11.942 "name": "pt1", 00:31:11.942 
"aliases": [ 00:31:11.942 "f24a9094-bbb1-5f4b-b804-c209fd78feff" 00:31:11.942 ], 00:31:11.942 "product_name": "passthru", 00:31:11.942 "block_size": 512, 00:31:11.942 "num_blocks": 65536, 00:31:11.942 "uuid": "f24a9094-bbb1-5f4b-b804-c209fd78feff", 00:31:11.942 "assigned_rate_limits": { 00:31:11.942 "rw_ios_per_sec": 0, 00:31:11.942 "rw_mbytes_per_sec": 0, 00:31:11.942 "r_mbytes_per_sec": 0, 00:31:11.942 "w_mbytes_per_sec": 0 00:31:11.942 }, 00:31:11.942 "claimed": true, 00:31:11.942 "claim_type": "exclusive_write", 00:31:11.942 "zoned": false, 00:31:11.942 "supported_io_types": { 00:31:11.942 "read": true, 00:31:11.942 "write": true, 00:31:11.942 "unmap": true, 00:31:11.942 "write_zeroes": true, 00:31:11.942 "flush": true, 00:31:11.942 "reset": true, 00:31:11.942 "compare": false, 00:31:11.942 "compare_and_write": false, 00:31:11.942 "abort": true, 00:31:11.942 "nvme_admin": false, 00:31:11.942 "nvme_io": false 00:31:11.942 }, 00:31:11.942 "memory_domains": [ 00:31:11.942 { 00:31:11.942 "dma_device_id": "system", 00:31:11.942 "dma_device_type": 1 00:31:11.942 }, 00:31:11.942 { 00:31:11.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:11.942 "dma_device_type": 2 00:31:11.942 } 00:31:11.942 ], 00:31:11.942 "driver_specific": { 00:31:11.942 "passthru": { 00:31:11.942 "name": "pt1", 00:31:11.942 "base_bdev_name": "malloc1" 00:31:11.942 } 00:31:11.942 } 00:31:11.942 }' 00:31:11.942 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:11.942 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:11.942 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:11.942 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:11.942 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:11.942 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:11.942 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:12.201 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:12.201 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:12.201 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:12.201 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:12.201 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:12.201 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:12.201 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:12.201 07:41:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:12.460 07:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:12.460 "name": "pt2", 00:31:12.460 "aliases": [ 00:31:12.460 "772f30e1-5166-59a2-907e-02b6075f0e93" 00:31:12.460 ], 00:31:12.460 "product_name": "passthru", 00:31:12.460 "block_size": 512, 00:31:12.460 "num_blocks": 65536, 00:31:12.460 "uuid": "772f30e1-5166-59a2-907e-02b6075f0e93", 00:31:12.460 "assigned_rate_limits": { 00:31:12.460 "rw_ios_per_sec": 0, 00:31:12.460 "rw_mbytes_per_sec": 0, 00:31:12.460 "r_mbytes_per_sec": 0, 00:31:12.460 
"w_mbytes_per_sec": 0 00:31:12.460 }, 00:31:12.460 "claimed": true, 00:31:12.460 "claim_type": "exclusive_write", 00:31:12.460 "zoned": false, 00:31:12.460 "supported_io_types": { 00:31:12.460 "read": true, 00:31:12.460 "write": true, 00:31:12.460 "unmap": true, 00:31:12.460 "write_zeroes": true, 00:31:12.460 "flush": true, 00:31:12.460 "reset": true, 00:31:12.460 "compare": false, 00:31:12.460 "compare_and_write": false, 00:31:12.460 "abort": true, 00:31:12.460 "nvme_admin": false, 00:31:12.460 "nvme_io": false 00:31:12.460 }, 00:31:12.460 "memory_domains": [ 00:31:12.460 { 00:31:12.460 "dma_device_id": "system", 00:31:12.460 "dma_device_type": 1 00:31:12.460 }, 00:31:12.460 { 00:31:12.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:12.460 "dma_device_type": 2 00:31:12.460 } 00:31:12.460 ], 00:31:12.460 "driver_specific": { 00:31:12.460 "passthru": { 00:31:12.460 "name": "pt2", 00:31:12.460 "base_bdev_name": "malloc2" 00:31:12.460 } 00:31:12.460 } 00:31:12.460 }' 00:31:12.460 07:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:12.460 07:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:12.720 07:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:12.720 07:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:12.720 07:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:12.720 07:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:12.720 07:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:12.720 07:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:12.720 07:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:12.720 07:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:12.720 07:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:12.979 07:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:12.979 07:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:12.979 07:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:31:12.979 07:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:13.238 07:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:13.238 "name": "pt3", 00:31:13.238 "aliases": [ 00:31:13.238 "ae8e46ae-6a4c-5219-a154-99ddbeb72beb" 00:31:13.238 ], 00:31:13.238 "product_name": "passthru", 00:31:13.238 "block_size": 512, 00:31:13.238 "num_blocks": 65536, 00:31:13.238 "uuid": "ae8e46ae-6a4c-5219-a154-99ddbeb72beb", 00:31:13.238 "assigned_rate_limits": { 00:31:13.238 "rw_ios_per_sec": 0, 00:31:13.238 "rw_mbytes_per_sec": 0, 00:31:13.238 "r_mbytes_per_sec": 0, 00:31:13.238 "w_mbytes_per_sec": 0 00:31:13.238 }, 00:31:13.238 "claimed": true, 00:31:13.238 "claim_type": "exclusive_write", 00:31:13.238 "zoned": false, 00:31:13.238 "supported_io_types": { 00:31:13.238 "read": true, 00:31:13.238 "write": true, 00:31:13.238 "unmap": true, 00:31:13.238 "write_zeroes": true, 00:31:13.238 "flush": true, 00:31:13.238 "reset": true, 00:31:13.238 "compare": false, 00:31:13.238 "compare_and_write": 
false, 00:31:13.238 "abort": true, 00:31:13.238 "nvme_admin": false, 00:31:13.238 "nvme_io": false 00:31:13.238 }, 00:31:13.238 "memory_domains": [ 00:31:13.238 { 00:31:13.238 "dma_device_id": "system", 00:31:13.238 "dma_device_type": 1 00:31:13.238 }, 00:31:13.238 { 00:31:13.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:13.238 "dma_device_type": 2 00:31:13.238 } 00:31:13.238 ], 00:31:13.238 "driver_specific": { 00:31:13.238 "passthru": { 00:31:13.238 "name": "pt3", 00:31:13.238 "base_bdev_name": "malloc3" 00:31:13.238 } 00:31:13.238 } 00:31:13.238 }' 00:31:13.238 07:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:13.238 07:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:13.238 07:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:13.238 07:41:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:13.238 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:13.238 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:13.238 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:13.238 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:13.497 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:13.497 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:13.497 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:13.497 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:13.497 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:13.497 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:31:13.756 [2024-07-12 07:41:47.498042] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:13.756 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 3bad6adb-5eed-4c02-9b6d-f39f321a95a7 '!=' 3bad6adb-5eed-4c02-9b6d-f39f321a95a7 ']' 00:31:13.756 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid5f 00:31:13.756 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:31:13.756 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:31:13.756 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:14.015 [2024-07-12 07:41:47.750151] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:31:14.015 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:14.015 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:14.015 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:14.015 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:14.015 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:14.015 07:41:47 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:14.015 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:14.015 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:14.015 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:14.015 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:14.015 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:14.015 07:41:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:14.274 07:41:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:14.274 "name": "raid_bdev1", 00:31:14.274 "uuid": "3bad6adb-5eed-4c02-9b6d-f39f321a95a7", 00:31:14.274 "strip_size_kb": 64, 00:31:14.274 "state": "online", 00:31:14.274 "raid_level": "raid5f", 00:31:14.274 "superblock": true, 00:31:14.274 "num_base_bdevs": 3, 00:31:14.274 "num_base_bdevs_discovered": 2, 00:31:14.274 "num_base_bdevs_operational": 2, 00:31:14.274 "base_bdevs_list": [ 00:31:14.274 { 00:31:14.274 "name": null, 00:31:14.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:14.274 "is_configured": false, 00:31:14.274 "data_offset": 2048, 00:31:14.274 "data_size": 63488 00:31:14.274 }, 00:31:14.274 { 00:31:14.274 "name": "pt2", 00:31:14.274 "uuid": "772f30e1-5166-59a2-907e-02b6075f0e93", 00:31:14.274 "is_configured": true, 00:31:14.274 "data_offset": 2048, 00:31:14.274 "data_size": 63488 00:31:14.274 }, 00:31:14.274 { 00:31:14.274 "name": "pt3", 00:31:14.274 "uuid": "ae8e46ae-6a4c-5219-a154-99ddbeb72beb", 00:31:14.274 "is_configured": true, 00:31:14.274 "data_offset": 2048, 00:31:14.274 "data_size": 63488 00:31:14.274 } 00:31:14.274 ] 00:31:14.274 }' 00:31:14.274 07:41:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:14.274 07:41:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.841 07:41:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:15.100 [2024-07-12 07:41:48.798305] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:15.100 [2024-07-12 07:41:48.798454] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:15.100 [2024-07-12 07:41:48.798694] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:15.100 [2024-07-12 07:41:48.798868] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:15.100 [2024-07-12 07:41:48.798971] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:31:15.100 07:41:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:15.100 07:41:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:31:15.359 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:31:15.359 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 
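The delete-and-verify step traced above condenses to two RPC calls and a jq filter; a minimal sketch (the raid_bdev variable name follows the script's own usage, and the echo stands in for the script's error handling):

  # delete the raid bdev, then confirm no raid bdev remains
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_delete raid_bdev1
  raid_bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_get_bdevs all | jq -r '.[]')
  [ -n "$raid_bdev" ] && echo "raid bdev unexpectedly still present: $raid_bdev"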
00:31:15.359 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:31:15.359 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:31:15.359 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:15.618 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:31:15.618 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:31:15.618 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:31:15.878 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:31:15.878 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:31:15.878 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:31:15.878 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:31:15.878 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:16.137 [2024-07-12 07:41:49.770431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:16.137 [2024-07-12 07:41:49.771172] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:16.137 [2024-07-12 07:41:49.771433] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:31:16.137 [2024-07-12 07:41:49.771654] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:16.137 [2024-07-12 07:41:49.774583] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:16.137 [2024-07-12 07:41:49.774869] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:16.137 [2024-07-12 07:41:49.775187] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:16.137 [2024-07-12 07:41:49.775328] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:16.137 pt2 00:31:16.137 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:31:16.137 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:16.137 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:16.137 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:16.137 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:16.137 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:16.137 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:16.137 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:16.137 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:16.137 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 
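verify_raid_bdev_state, entered just above, boils down to a single query whose fields are matched against the expected values; a minimal sketch of that check:

  # fetch the descriptor for raid_bdev1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1")'
  # the .state, .raid_level, .strip_size_kb and num_base_bdevs_* fields are then
  # compared against the expected values (here: configuring, raid5f, 64, 2)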
00:31:16.137 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:16.137 07:41:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:16.397 07:41:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:16.397 "name": "raid_bdev1", 00:31:16.397 "uuid": "3bad6adb-5eed-4c02-9b6d-f39f321a95a7", 00:31:16.397 "strip_size_kb": 64, 00:31:16.397 "state": "configuring", 00:31:16.397 "raid_level": "raid5f", 00:31:16.397 "superblock": true, 00:31:16.397 "num_base_bdevs": 3, 00:31:16.397 "num_base_bdevs_discovered": 1, 00:31:16.397 "num_base_bdevs_operational": 2, 00:31:16.397 "base_bdevs_list": [ 00:31:16.397 { 00:31:16.397 "name": null, 00:31:16.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:16.397 "is_configured": false, 00:31:16.397 "data_offset": 2048, 00:31:16.397 "data_size": 63488 00:31:16.397 }, 00:31:16.397 { 00:31:16.397 "name": "pt2", 00:31:16.397 "uuid": "772f30e1-5166-59a2-907e-02b6075f0e93", 00:31:16.397 "is_configured": true, 00:31:16.397 "data_offset": 2048, 00:31:16.397 "data_size": 63488 00:31:16.397 }, 00:31:16.397 { 00:31:16.397 "name": null, 00:31:16.397 "uuid": "ae8e46ae-6a4c-5219-a154-99ddbeb72beb", 00:31:16.397 "is_configured": false, 00:31:16.397 "data_offset": 2048, 00:31:16.397 "data_size": 63488 00:31:16.397 } 00:31:16.397 ] 00:31:16.397 }' 00:31:16.397 07:41:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:16.397 07:41:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.964 07:41:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:31:16.964 07:41:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:31:16.964 07:41:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:31:16.964 07:41:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:16.964 [2024-07-12 07:41:50.775416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:16.964 [2024-07-12 07:41:50.776043] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:16.964 [2024-07-12 07:41:50.776294] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:31:16.964 [2024-07-12 07:41:50.776516] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:16.964 [2024-07-12 07:41:50.777223] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:16.964 [2024-07-12 07:41:50.777476] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:16.964 [2024-07-12 07:41:50.777793] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:16.964 [2024-07-12 07:41:50.777922] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:16.964 [2024-07-12 07:41:50.778090] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:31:16.964 [2024-07-12 07:41:50.778177] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:16.964 [2024-07-12 07:41:50.778277] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002940 00:31:16.964 [2024-07-12 07:41:50.779022] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:31:16.964 [2024-07-12 07:41:50.779138] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:31:16.964 [2024-07-12 07:41:50.779537] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:16.964 pt3 00:31:16.964 07:41:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:16.964 07:41:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:16.964 07:41:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:16.964 07:41:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:16.964 07:41:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:16.964 07:41:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:16.964 07:41:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:16.964 07:41:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:16.964 07:41:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:16.964 07:41:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:16.964 07:41:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:16.964 07:41:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:17.224 07:41:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:17.224 "name": "raid_bdev1", 00:31:17.224 "uuid": "3bad6adb-5eed-4c02-9b6d-f39f321a95a7", 00:31:17.224 "strip_size_kb": 64, 00:31:17.224 "state": "online", 00:31:17.224 "raid_level": "raid5f", 00:31:17.224 "superblock": true, 00:31:17.224 "num_base_bdevs": 3, 00:31:17.224 "num_base_bdevs_discovered": 2, 00:31:17.224 "num_base_bdevs_operational": 2, 00:31:17.224 "base_bdevs_list": [ 00:31:17.224 { 00:31:17.224 "name": null, 00:31:17.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:17.224 "is_configured": false, 00:31:17.224 "data_offset": 2048, 00:31:17.224 "data_size": 63488 00:31:17.224 }, 00:31:17.224 { 00:31:17.224 "name": "pt2", 00:31:17.224 "uuid": "772f30e1-5166-59a2-907e-02b6075f0e93", 00:31:17.224 "is_configured": true, 00:31:17.224 "data_offset": 2048, 00:31:17.224 "data_size": 63488 00:31:17.224 }, 00:31:17.224 { 00:31:17.224 "name": "pt3", 00:31:17.224 "uuid": "ae8e46ae-6a4c-5219-a154-99ddbeb72beb", 00:31:17.224 "is_configured": true, 00:31:17.224 "data_offset": 2048, 00:31:17.224 "data_size": 63488 00:31:17.224 } 00:31:17.224 ] 00:31:17.224 }' 00:31:17.224 07:41:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:17.224 07:41:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:18.159 07:41:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:18.159 [2024-07-12 07:41:52.011668] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:31:18.159 [2024-07-12 07:41:52.011832] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:18.159 [2024-07-12 07:41:52.012057] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:18.159 [2024-07-12 07:41:52.012159] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:18.159 [2024-07-12 07:41:52.012335] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:31:18.159 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:18.159 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:31:18.418 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:31:18.418 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:31:18.418 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:31:18.418 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:31:18.418 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:31:18.676 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:18.935 [2024-07-12 07:41:52.735805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:18.935 [2024-07-12 07:41:52.736873] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:18.935 [2024-07-12 07:41:52.737157] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:31:18.935 [2024-07-12 07:41:52.737413] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:18.935 [2024-07-12 07:41:52.740454] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:18.935 [2024-07-12 07:41:52.740711] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:18.935 [2024-07-12 07:41:52.741036] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:18.935 [2024-07-12 07:41:52.741186] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:18.935 [2024-07-12 07:41:52.741562] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:31:18.935 [2024-07-12 07:41:52.741675] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:18.935 [2024-07-12 07:41:52.741728] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:31:18.935 [2024-07-12 07:41:52.741841] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:18.935 pt1 00:31:18.935 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:31:18.935 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:31:18.935 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:31:18.935 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:18.935 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:18.935 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:18.935 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:18.935 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:18.935 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:18.935 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:18.935 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:18.935 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:18.935 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:19.194 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:19.194 "name": "raid_bdev1", 00:31:19.194 "uuid": "3bad6adb-5eed-4c02-9b6d-f39f321a95a7", 00:31:19.194 "strip_size_kb": 64, 00:31:19.194 "state": "configuring", 00:31:19.194 "raid_level": "raid5f", 00:31:19.194 "superblock": true, 00:31:19.194 "num_base_bdevs": 3, 00:31:19.194 "num_base_bdevs_discovered": 1, 00:31:19.194 "num_base_bdevs_operational": 2, 00:31:19.194 "base_bdevs_list": [ 00:31:19.194 { 00:31:19.194 "name": null, 00:31:19.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:19.194 "is_configured": false, 00:31:19.194 "data_offset": 2048, 00:31:19.194 "data_size": 63488 00:31:19.194 }, 00:31:19.194 { 00:31:19.194 "name": "pt2", 00:31:19.194 "uuid": "772f30e1-5166-59a2-907e-02b6075f0e93", 00:31:19.194 "is_configured": true, 00:31:19.194 "data_offset": 2048, 00:31:19.194 "data_size": 63488 00:31:19.194 }, 00:31:19.194 { 00:31:19.194 "name": null, 00:31:19.194 "uuid": "ae8e46ae-6a4c-5219-a154-99ddbeb72beb", 00:31:19.194 "is_configured": false, 00:31:19.194 "data_offset": 2048, 00:31:19.194 "data_size": 63488 00:31:19.194 } 00:31:19.194 ] 00:31:19.194 }' 00:31:19.194 07:41:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:19.194 07:41:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:19.758 07:41:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:31:19.758 07:41:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:20.016 07:41:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:31:20.016 07:41:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:20.274 [2024-07-12 07:41:53.977315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:20.274 [2024-07-12 07:41:53.978022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:20.274 [2024-07-12 07:41:53.978297] 
vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:31:20.274 [2024-07-12 07:41:53.978536] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:20.274 [2024-07-12 07:41:53.979271] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:20.274 [2024-07-12 07:41:53.979528] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:20.274 [2024-07-12 07:41:53.979860] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:20.274 [2024-07-12 07:41:53.979990] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:20.275 [2024-07-12 07:41:53.980171] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:31:20.275 [2024-07-12 07:41:53.980306] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:20.275 [2024-07-12 07:41:53.980410] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:31:20.275 [2024-07-12 07:41:53.981152] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:31:20.275 [2024-07-12 07:41:53.981280] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:31:20.275 [2024-07-12 07:41:53.981550] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:20.275 pt3 00:31:20.275 07:41:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:20.275 07:41:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:20.275 07:41:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:20.275 07:41:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:20.275 07:41:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:20.275 07:41:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:20.275 07:41:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:20.275 07:41:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:20.275 07:41:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:20.275 07:41:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:20.275 07:41:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:20.275 07:41:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:20.533 07:41:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:20.533 "name": "raid_bdev1", 00:31:20.533 "uuid": "3bad6adb-5eed-4c02-9b6d-f39f321a95a7", 00:31:20.533 "strip_size_kb": 64, 00:31:20.533 "state": "online", 00:31:20.533 "raid_level": "raid5f", 00:31:20.533 "superblock": true, 00:31:20.533 "num_base_bdevs": 3, 00:31:20.533 "num_base_bdevs_discovered": 2, 00:31:20.533 "num_base_bdevs_operational": 2, 00:31:20.533 "base_bdevs_list": [ 00:31:20.533 { 00:31:20.533 "name": null, 00:31:20.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:20.533 "is_configured": 
false, 00:31:20.533 "data_offset": 2048, 00:31:20.533 "data_size": 63488 00:31:20.533 }, 00:31:20.533 { 00:31:20.533 "name": "pt2", 00:31:20.533 "uuid": "772f30e1-5166-59a2-907e-02b6075f0e93", 00:31:20.533 "is_configured": true, 00:31:20.533 "data_offset": 2048, 00:31:20.533 "data_size": 63488 00:31:20.533 }, 00:31:20.533 { 00:31:20.533 "name": "pt3", 00:31:20.533 "uuid": "ae8e46ae-6a4c-5219-a154-99ddbeb72beb", 00:31:20.533 "is_configured": true, 00:31:20.533 "data_offset": 2048, 00:31:20.533 "data_size": 63488 00:31:20.533 } 00:31:20.533 ] 00:31:20.533 }' 00:31:20.533 07:41:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:20.533 07:41:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:21.100 07:41:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:31:21.100 07:41:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:21.358 07:41:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:31:21.358 07:41:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:21.358 07:41:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:31:21.617 [2024-07-12 07:41:55.261809] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:21.617 07:41:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 3bad6adb-5eed-4c02-9b6d-f39f321a95a7 '!=' 3bad6adb-5eed-4c02-9b6d-f39f321a95a7 ']' 00:31:21.617 07:41:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 161327 00:31:21.617 07:41:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 161327 ']' 00:31:21.617 07:41:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # kill -0 161327 00:31:21.617 07:41:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@951 -- # uname 00:31:21.617 07:41:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:21.617 07:41:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 161327 00:31:21.617 07:41:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:21.617 07:41:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:21.617 07:41:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 161327' 00:31:21.617 killing process with pid 161327 00:31:21.617 07:41:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@965 -- # kill 161327 00:31:21.617 [2024-07-12 07:41:55.311427] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:21.617 07:41:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # wait 161327 00:31:21.617 [2024-07-12 07:41:55.311624] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:21.617 [2024-07-12 07:41:55.311700] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:21.617 [2024-07-12 07:41:55.311709] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 
name raid_bdev1, state offline 00:31:21.617 [2024-07-12 07:41:55.370675] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:22.185 07:41:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:31:22.185 00:31:22.185 real 0m21.147s 00:31:22.185 user 0m38.441s 00:31:22.185 sys 0m3.819s 00:31:22.185 07:41:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:22.185 07:41:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.185 ************************************ 00:31:22.185 END TEST raid5f_superblock_test 00:31:22.185 ************************************ 00:31:22.185 07:41:55 bdev_raid -- bdev/bdev_raid.sh@889 -- # '[' true = true ']' 00:31:22.185 07:41:55 bdev_raid -- bdev/bdev_raid.sh@890 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:31:22.185 07:41:55 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:31:22.185 07:41:55 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:22.185 07:41:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:22.185 ************************************ 00:31:22.185 START TEST raid5f_rebuild_test 00:31:22.185 ************************************ 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid5f 3 false false true 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=3 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 
-- # local create_arg 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=162039 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 162039 /var/tmp/spdk-raid.sock 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@827 -- # '[' -z 162039 ']' 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:22.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:22.185 07:41:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:22.186 07:41:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.186 [2024-07-12 07:41:55.941078] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:22.186 [2024-07-12 07:41:55.942295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162039 ] 00:31:22.186 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:22.186 Zero copy mechanism will not be used. 
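The bdevperf invocation above is the I/O generator for the rebuild test: -o 3M requests 3145728-byte I/Os (3 x 1024 x 1024), which is why the app immediately notes that the 65536-byte zero copy threshold is exceeded; -w randrw -M 50 gives a 50/50 random read/write mix, -t 60 bounds the run to 60 seconds, and -r names the RPC socket the test script drives. A minimal sketch of the setup that follows, assembled only from commands visible in this log (socket path, sizes, and bdev names are taken from this run, so treat it as illustrative rather than the canonical SPDK procedure):

    #!/usr/bin/env bash
    # Hypothetical replay of the base-bdev and raid setup this test performs.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Three 32 MiB malloc bdevs with 512-byte blocks (65536 blocks each,
    # matching the data_size fields in the JSON dumps above), each wrapped
    # in a passthru bdev as the test does.
    for i in 1 2 3; do
        $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
        $RPC bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done

    # Assemble the raid5f bdev with a 64 KiB strip, as bdev_raid.sh@611 does.
    $RPC bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1

    # Inspect the result the same way verify_raid_bdev_state does.
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'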
00:31:22.444 [2024-07-12 07:41:56.088162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.444 [2024-07-12 07:41:56.148108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:22.444 [2024-07-12 07:41:56.206658] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:23.381 07:41:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:23.381 07:41:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # return 0 00:31:23.381 07:41:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:23.381 07:41:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:23.381 BaseBdev1_malloc 00:31:23.381 07:41:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:23.639 [2024-07-12 07:41:57.319854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:23.639 [2024-07-12 07:41:57.320074] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:23.639 [2024-07-12 07:41:57.320187] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:31:23.639 [2024-07-12 07:41:57.320304] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:23.639 [2024-07-12 07:41:57.322764] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:23.639 [2024-07-12 07:41:57.322945] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:23.639 BaseBdev1 00:31:23.639 07:41:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:23.639 07:41:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:23.898 BaseBdev2_malloc 00:31:23.898 07:41:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:23.898 [2024-07-12 07:41:57.740726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:23.898 [2024-07-12 07:41:57.740912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:23.898 [2024-07-12 07:41:57.740977] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:31:23.898 [2024-07-12 07:41:57.741086] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:23.898 [2024-07-12 07:41:57.743349] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:23.898 [2024-07-12 07:41:57.743500] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:23.898 BaseBdev2 00:31:23.898 07:41:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:23.898 07:41:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:24.156 BaseBdev3_malloc 00:31:24.156 07:41:57 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:31:24.415 [2024-07-12 07:41:58.119259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:31:24.415 [2024-07-12 07:41:58.119441] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:24.415 [2024-07-12 07:41:58.119507] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:24.415 [2024-07-12 07:41:58.119615] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:24.415 [2024-07-12 07:41:58.121874] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:24.415 [2024-07-12 07:41:58.122036] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:24.415 BaseBdev3 00:31:24.415 07:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:31:24.674 spare_malloc 00:31:24.674 07:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:24.933 spare_delay 00:31:24.933 07:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:24.933 [2024-07-12 07:41:58.739909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:24.933 [2024-07-12 07:41:58.740086] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:24.933 [2024-07-12 07:41:58.740184] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:31:24.933 [2024-07-12 07:41:58.740293] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:24.933 [2024-07-12 07:41:58.742646] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:24.933 [2024-07-12 07:41:58.742814] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:24.933 spare 00:31:24.933 07:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:31:25.193 [2024-07-12 07:41:58.984035] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:25.193 [2024-07-12 07:41:58.986261] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:25.193 [2024-07-12 07:41:58.986460] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:25.193 [2024-07-12 07:41:58.986702] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:31:25.193 [2024-07-12 07:41:58.986813] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:31:25.193 [2024-07-12 07:41:58.987035] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:31:25.193 [2024-07-12 07:41:58.987775] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:31:25.193 [2024-07-12 07:41:58.987916] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:31:25.193 [2024-07-12 07:41:58.988187] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:25.193 07:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:25.193 07:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:25.193 07:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:25.193 07:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:25.193 07:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:25.193 07:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:25.193 07:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:25.193 07:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:25.193 07:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:25.193 07:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:25.193 07:41:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:25.193 07:41:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:25.452 07:41:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:25.452 "name": "raid_bdev1", 00:31:25.452 "uuid": "5a289060-efe8-4255-98e5-404930c16c72", 00:31:25.452 "strip_size_kb": 64, 00:31:25.452 "state": "online", 00:31:25.452 "raid_level": "raid5f", 00:31:25.452 "superblock": false, 00:31:25.452 "num_base_bdevs": 3, 00:31:25.452 "num_base_bdevs_discovered": 3, 00:31:25.452 "num_base_bdevs_operational": 3, 00:31:25.452 "base_bdevs_list": [ 00:31:25.452 { 00:31:25.452 "name": "BaseBdev1", 00:31:25.452 "uuid": "2ede5195-467d-587f-b3f4-ba70aee44a57", 00:31:25.452 "is_configured": true, 00:31:25.452 "data_offset": 0, 00:31:25.452 "data_size": 65536 00:31:25.452 }, 00:31:25.452 { 00:31:25.452 "name": "BaseBdev2", 00:31:25.452 "uuid": "8d4dc593-9220-5756-ad0d-c077eae969ee", 00:31:25.452 "is_configured": true, 00:31:25.452 "data_offset": 0, 00:31:25.452 "data_size": 65536 00:31:25.452 }, 00:31:25.452 { 00:31:25.452 "name": "BaseBdev3", 00:31:25.452 "uuid": "56ec6a72-e525-524f-b0dd-17ee7e8ceda4", 00:31:25.452 "is_configured": true, 00:31:25.452 "data_offset": 0, 00:31:25.452 "data_size": 65536 00:31:25.452 } 00:31:25.452 ] 00:31:25.452 }' 00:31:25.452 07:41:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:25.452 07:41:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.048 07:41:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:26.048 07:41:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:31:26.306 [2024-07-12 07:41:59.980436] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:26.307 07:41:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=131072 00:31:26.307 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:26.307 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:26.565 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:31:26.565 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:31:26.565 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:31:26.565 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:31:26.565 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:31:26.565 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:26.565 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:26.565 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:26.565 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:26.565 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:26.565 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:31:26.565 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:26.565 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:26.565 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:26.825 [2024-07-12 07:42:00.492493] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:31:26.825 /dev/nbd0 00:31:26.825 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:26.825 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:26.825 07:42:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:31:26.825 07:42:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:31:26.825 07:42:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:31:26.825 07:42:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:31:26.825 07:42:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:31:26.825 07:42:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:31:26.825 07:42:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:31:26.825 07:42:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:31:26.825 07:42:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:26.825 1+0 records in 00:31:26.825 1+0 records out 00:31:26.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211099 s, 19.4 MB/s 00:31:26.825 07:42:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:26.825 07:42:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:31:26.825 07:42:00 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:26.825 07:42:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:31:26.825 07:42:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:31:26.825 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:26.825 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:26.825 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:31:26.825 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # write_unit_size=256 00:31:26.825 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # echo 128 00:31:26.825 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:31:27.085 512+0 records in 00:31:27.085 512+0 records out 00:31:27.085 67108864 bytes (67 MB, 64 MiB) copied, 0.338493 s, 198 MB/s 00:31:27.085 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:27.085 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:27.085 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:27.085 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:27.085 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:31:27.085 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:27.085 07:42:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:27.349 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:27.349 [2024-07-12 07:42:01.101927] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:27.349 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:27.349 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:27.349 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:27.349 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:27.349 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:27.349 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:27.350 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:27.350 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:27.610 [2024-07-12 07:42:01.373613] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:27.610 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:27.610 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:27.610 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:27.610 07:42:01 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:27.610 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:27.610 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:27.610 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:27.610 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:27.610 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:27.610 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:27.610 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:27.610 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:27.868 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:27.868 "name": "raid_bdev1", 00:31:27.868 "uuid": "5a289060-efe8-4255-98e5-404930c16c72", 00:31:27.868 "strip_size_kb": 64, 00:31:27.868 "state": "online", 00:31:27.868 "raid_level": "raid5f", 00:31:27.868 "superblock": false, 00:31:27.868 "num_base_bdevs": 3, 00:31:27.868 "num_base_bdevs_discovered": 2, 00:31:27.868 "num_base_bdevs_operational": 2, 00:31:27.868 "base_bdevs_list": [ 00:31:27.868 { 00:31:27.868 "name": null, 00:31:27.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:27.868 "is_configured": false, 00:31:27.868 "data_offset": 0, 00:31:27.868 "data_size": 65536 00:31:27.868 }, 00:31:27.868 { 00:31:27.868 "name": "BaseBdev2", 00:31:27.868 "uuid": "8d4dc593-9220-5756-ad0d-c077eae969ee", 00:31:27.868 "is_configured": true, 00:31:27.868 "data_offset": 0, 00:31:27.868 "data_size": 65536 00:31:27.868 }, 00:31:27.868 { 00:31:27.868 "name": "BaseBdev3", 00:31:27.868 "uuid": "56ec6a72-e525-524f-b0dd-17ee7e8ceda4", 00:31:27.868 "is_configured": true, 00:31:27.868 "data_offset": 0, 00:31:27.868 "data_size": 65536 00:31:27.868 } 00:31:27.868 ] 00:31:27.868 }' 00:31:27.868 07:42:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:27.868 07:42:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.433 07:42:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:28.692 [2024-07-12 07:42:02.365768] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:28.692 [2024-07-12 07:42:02.369446] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027c00 00:31:28.692 [2024-07-12 07:42:02.371836] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:28.692 07:42:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:31:29.633 07:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:29.633 07:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:29.633 07:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:29.633 07:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:29.633 07:42:03 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:29.633 07:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:29.633 07:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:29.926 07:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:29.926 "name": "raid_bdev1", 00:31:29.926 "uuid": "5a289060-efe8-4255-98e5-404930c16c72", 00:31:29.926 "strip_size_kb": 64, 00:31:29.926 "state": "online", 00:31:29.926 "raid_level": "raid5f", 00:31:29.926 "superblock": false, 00:31:29.926 "num_base_bdevs": 3, 00:31:29.926 "num_base_bdevs_discovered": 3, 00:31:29.926 "num_base_bdevs_operational": 3, 00:31:29.926 "process": { 00:31:29.926 "type": "rebuild", 00:31:29.926 "target": "spare", 00:31:29.926 "progress": { 00:31:29.926 "blocks": 24576, 00:31:29.926 "percent": 18 00:31:29.926 } 00:31:29.926 }, 00:31:29.926 "base_bdevs_list": [ 00:31:29.926 { 00:31:29.926 "name": "spare", 00:31:29.926 "uuid": "3afcf5ea-a330-5f7a-baae-3a4b089132b1", 00:31:29.926 "is_configured": true, 00:31:29.926 "data_offset": 0, 00:31:29.926 "data_size": 65536 00:31:29.926 }, 00:31:29.926 { 00:31:29.926 "name": "BaseBdev2", 00:31:29.926 "uuid": "8d4dc593-9220-5756-ad0d-c077eae969ee", 00:31:29.926 "is_configured": true, 00:31:29.926 "data_offset": 0, 00:31:29.926 "data_size": 65536 00:31:29.926 }, 00:31:29.926 { 00:31:29.926 "name": "BaseBdev3", 00:31:29.926 "uuid": "56ec6a72-e525-524f-b0dd-17ee7e8ceda4", 00:31:29.926 "is_configured": true, 00:31:29.927 "data_offset": 0, 00:31:29.927 "data_size": 65536 00:31:29.927 } 00:31:29.927 ] 00:31:29.927 }' 00:31:29.927 07:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:29.927 07:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:29.927 07:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:29.927 07:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:29.927 07:42:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:30.189 [2024-07-12 07:42:03.943769] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:30.189 [2024-07-12 07:42:03.983975] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:30.189 [2024-07-12 07:42:03.984043] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:30.189 [2024-07-12 07:42:03.984058] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:30.189 [2024-07-12 07:42:03.984066] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:30.189 07:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:30.189 07:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:30.189 07:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:30.189 07:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:30.189 07:42:04 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:30.189 07:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:30.189 07:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:30.189 07:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:30.189 07:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:30.189 07:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:30.189 07:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:30.189 07:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:30.448 07:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:30.448 "name": "raid_bdev1", 00:31:30.448 "uuid": "5a289060-efe8-4255-98e5-404930c16c72", 00:31:30.448 "strip_size_kb": 64, 00:31:30.448 "state": "online", 00:31:30.448 "raid_level": "raid5f", 00:31:30.448 "superblock": false, 00:31:30.448 "num_base_bdevs": 3, 00:31:30.448 "num_base_bdevs_discovered": 2, 00:31:30.448 "num_base_bdevs_operational": 2, 00:31:30.448 "base_bdevs_list": [ 00:31:30.448 { 00:31:30.448 "name": null, 00:31:30.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:30.448 "is_configured": false, 00:31:30.448 "data_offset": 0, 00:31:30.448 "data_size": 65536 00:31:30.448 }, 00:31:30.448 { 00:31:30.448 "name": "BaseBdev2", 00:31:30.448 "uuid": "8d4dc593-9220-5756-ad0d-c077eae969ee", 00:31:30.448 "is_configured": true, 00:31:30.448 "data_offset": 0, 00:31:30.448 "data_size": 65536 00:31:30.448 }, 00:31:30.448 { 00:31:30.448 "name": "BaseBdev3", 00:31:30.448 "uuid": "56ec6a72-e525-524f-b0dd-17ee7e8ceda4", 00:31:30.448 "is_configured": true, 00:31:30.448 "data_offset": 0, 00:31:30.448 "data_size": 65536 00:31:30.448 } 00:31:30.448 ] 00:31:30.448 }' 00:31:30.448 07:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:30.448 07:42:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.016 07:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:31.016 07:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:31.016 07:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:31.016 07:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:31.016 07:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:31.016 07:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:31.016 07:42:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:31.275 07:42:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:31.275 "name": "raid_bdev1", 00:31:31.275 "uuid": "5a289060-efe8-4255-98e5-404930c16c72", 00:31:31.275 "strip_size_kb": 64, 00:31:31.275 "state": "online", 00:31:31.275 "raid_level": "raid5f", 00:31:31.275 "superblock": false, 00:31:31.275 "num_base_bdevs": 3, 00:31:31.275 "num_base_bdevs_discovered": 2, 00:31:31.275 
"num_base_bdevs_operational": 2, 00:31:31.275 "base_bdevs_list": [ 00:31:31.275 { 00:31:31.275 "name": null, 00:31:31.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:31.275 "is_configured": false, 00:31:31.275 "data_offset": 0, 00:31:31.275 "data_size": 65536 00:31:31.275 }, 00:31:31.275 { 00:31:31.275 "name": "BaseBdev2", 00:31:31.275 "uuid": "8d4dc593-9220-5756-ad0d-c077eae969ee", 00:31:31.275 "is_configured": true, 00:31:31.275 "data_offset": 0, 00:31:31.275 "data_size": 65536 00:31:31.275 }, 00:31:31.275 { 00:31:31.275 "name": "BaseBdev3", 00:31:31.275 "uuid": "56ec6a72-e525-524f-b0dd-17ee7e8ceda4", 00:31:31.275 "is_configured": true, 00:31:31.275 "data_offset": 0, 00:31:31.275 "data_size": 65536 00:31:31.275 } 00:31:31.275 ] 00:31:31.275 }' 00:31:31.275 07:42:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:31.275 07:42:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:31.275 07:42:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:31.275 07:42:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:31.275 07:42:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:31.533 [2024-07-12 07:42:05.353799] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:31.533 [2024-07-12 07:42:05.357499] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:31:31.533 [2024-07-12 07:42:05.359696] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:31.533 07:42:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:32.912 "name": "raid_bdev1", 00:31:32.912 "uuid": "5a289060-efe8-4255-98e5-404930c16c72", 00:31:32.912 "strip_size_kb": 64, 00:31:32.912 "state": "online", 00:31:32.912 "raid_level": "raid5f", 00:31:32.912 "superblock": false, 00:31:32.912 "num_base_bdevs": 3, 00:31:32.912 "num_base_bdevs_discovered": 3, 00:31:32.912 "num_base_bdevs_operational": 3, 00:31:32.912 "process": { 00:31:32.912 "type": "rebuild", 00:31:32.912 "target": "spare", 00:31:32.912 "progress": { 00:31:32.912 "blocks": 24576, 00:31:32.912 "percent": 18 00:31:32.912 } 00:31:32.912 }, 00:31:32.912 "base_bdevs_list": [ 00:31:32.912 { 00:31:32.912 "name": "spare", 00:31:32.912 "uuid": "3afcf5ea-a330-5f7a-baae-3a4b089132b1", 00:31:32.912 
"is_configured": true, 00:31:32.912 "data_offset": 0, 00:31:32.912 "data_size": 65536 00:31:32.912 }, 00:31:32.912 { 00:31:32.912 "name": "BaseBdev2", 00:31:32.912 "uuid": "8d4dc593-9220-5756-ad0d-c077eae969ee", 00:31:32.912 "is_configured": true, 00:31:32.912 "data_offset": 0, 00:31:32.912 "data_size": 65536 00:31:32.912 }, 00:31:32.912 { 00:31:32.912 "name": "BaseBdev3", 00:31:32.912 "uuid": "56ec6a72-e525-524f-b0dd-17ee7e8ceda4", 00:31:32.912 "is_configured": true, 00:31:32.912 "data_offset": 0, 00:31:32.912 "data_size": 65536 00:31:32.912 } 00:31:32.912 ] 00:31:32.912 }' 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=3 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=1037 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:32.912 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:33.172 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:33.172 "name": "raid_bdev1", 00:31:33.172 "uuid": "5a289060-efe8-4255-98e5-404930c16c72", 00:31:33.172 "strip_size_kb": 64, 00:31:33.172 "state": "online", 00:31:33.172 "raid_level": "raid5f", 00:31:33.172 "superblock": false, 00:31:33.172 "num_base_bdevs": 3, 00:31:33.172 "num_base_bdevs_discovered": 3, 00:31:33.172 "num_base_bdevs_operational": 3, 00:31:33.172 "process": { 00:31:33.172 "type": "rebuild", 00:31:33.172 "target": "spare", 00:31:33.172 "progress": { 00:31:33.172 "blocks": 30720, 00:31:33.172 "percent": 23 00:31:33.172 } 00:31:33.172 }, 00:31:33.172 "base_bdevs_list": [ 00:31:33.172 { 00:31:33.172 "name": "spare", 00:31:33.172 "uuid": "3afcf5ea-a330-5f7a-baae-3a4b089132b1", 00:31:33.172 "is_configured": true, 00:31:33.172 "data_offset": 0, 00:31:33.172 "data_size": 65536 00:31:33.172 }, 00:31:33.172 { 00:31:33.172 "name": "BaseBdev2", 00:31:33.172 "uuid": "8d4dc593-9220-5756-ad0d-c077eae969ee", 00:31:33.172 "is_configured": true, 00:31:33.172 "data_offset": 0, 00:31:33.172 "data_size": 65536 
00:31:33.172 }, 00:31:33.172 { 00:31:33.172 "name": "BaseBdev3", 00:31:33.172 "uuid": "56ec6a72-e525-524f-b0dd-17ee7e8ceda4", 00:31:33.172 "is_configured": true, 00:31:33.172 "data_offset": 0, 00:31:33.172 "data_size": 65536 00:31:33.172 } 00:31:33.172 ] 00:31:33.172 }' 00:31:33.172 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:33.172 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:33.172 07:42:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:33.172 07:42:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:33.172 07:42:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:34.551 07:42:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:34.551 07:42:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:34.551 07:42:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:34.551 07:42:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:34.551 07:42:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:34.551 07:42:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:34.551 07:42:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:34.551 07:42:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:34.551 07:42:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:34.551 "name": "raid_bdev1", 00:31:34.551 "uuid": "5a289060-efe8-4255-98e5-404930c16c72", 00:31:34.551 "strip_size_kb": 64, 00:31:34.551 "state": "online", 00:31:34.551 "raid_level": "raid5f", 00:31:34.551 "superblock": false, 00:31:34.551 "num_base_bdevs": 3, 00:31:34.551 "num_base_bdevs_discovered": 3, 00:31:34.551 "num_base_bdevs_operational": 3, 00:31:34.551 "process": { 00:31:34.551 "type": "rebuild", 00:31:34.551 "target": "spare", 00:31:34.551 "progress": { 00:31:34.551 "blocks": 57344, 00:31:34.551 "percent": 43 00:31:34.551 } 00:31:34.551 }, 00:31:34.551 "base_bdevs_list": [ 00:31:34.551 { 00:31:34.551 "name": "spare", 00:31:34.551 "uuid": "3afcf5ea-a330-5f7a-baae-3a4b089132b1", 00:31:34.551 "is_configured": true, 00:31:34.551 "data_offset": 0, 00:31:34.551 "data_size": 65536 00:31:34.551 }, 00:31:34.551 { 00:31:34.551 "name": "BaseBdev2", 00:31:34.551 "uuid": "8d4dc593-9220-5756-ad0d-c077eae969ee", 00:31:34.551 "is_configured": true, 00:31:34.551 "data_offset": 0, 00:31:34.551 "data_size": 65536 00:31:34.551 }, 00:31:34.551 { 00:31:34.551 "name": "BaseBdev3", 00:31:34.551 "uuid": "56ec6a72-e525-524f-b0dd-17ee7e8ceda4", 00:31:34.551 "is_configured": true, 00:31:34.551 "data_offset": 0, 00:31:34.551 "data_size": 65536 00:31:34.551 } 00:31:34.551 ] 00:31:34.551 }' 00:31:34.551 07:42:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:34.551 07:42:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:34.551 07:42:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:34.551 07:42:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:34.551 07:42:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:35.488 07:42:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:35.488 07:42:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:35.488 07:42:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:35.488 07:42:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:35.488 07:42:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:35.488 07:42:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:35.488 07:42:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:35.488 07:42:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:35.747 07:42:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:35.747 "name": "raid_bdev1", 00:31:35.747 "uuid": "5a289060-efe8-4255-98e5-404930c16c72", 00:31:35.747 "strip_size_kb": 64, 00:31:35.747 "state": "online", 00:31:35.747 "raid_level": "raid5f", 00:31:35.747 "superblock": false, 00:31:35.747 "num_base_bdevs": 3, 00:31:35.747 "num_base_bdevs_discovered": 3, 00:31:35.747 "num_base_bdevs_operational": 3, 00:31:35.747 "process": { 00:31:35.747 "type": "rebuild", 00:31:35.747 "target": "spare", 00:31:35.747 "progress": { 00:31:35.747 "blocks": 83968, 00:31:35.747 "percent": 64 00:31:35.747 } 00:31:35.747 }, 00:31:35.747 "base_bdevs_list": [ 00:31:35.747 { 00:31:35.747 "name": "spare", 00:31:35.747 "uuid": "3afcf5ea-a330-5f7a-baae-3a4b089132b1", 00:31:35.747 "is_configured": true, 00:31:35.747 "data_offset": 0, 00:31:35.747 "data_size": 65536 00:31:35.747 }, 00:31:35.747 { 00:31:35.747 "name": "BaseBdev2", 00:31:35.747 "uuid": "8d4dc593-9220-5756-ad0d-c077eae969ee", 00:31:35.747 "is_configured": true, 00:31:35.747 "data_offset": 0, 00:31:35.747 "data_size": 65536 00:31:35.747 }, 00:31:35.747 { 00:31:35.747 "name": "BaseBdev3", 00:31:35.747 "uuid": "56ec6a72-e525-524f-b0dd-17ee7e8ceda4", 00:31:35.747 "is_configured": true, 00:31:35.747 "data_offset": 0, 00:31:35.747 "data_size": 65536 00:31:35.747 } 00:31:35.747 ] 00:31:35.747 }' 00:31:35.747 07:42:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:36.006 07:42:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:36.006 07:42:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:36.006 07:42:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:36.006 07:42:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:36.944 07:42:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:36.944 07:42:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:36.944 07:42:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:36.944 07:42:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 
00:31:36.944 07:42:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:36.944 07:42:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:36.944 07:42:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:36.944 07:42:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:37.204 07:42:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:37.204 "name": "raid_bdev1", 00:31:37.204 "uuid": "5a289060-efe8-4255-98e5-404930c16c72", 00:31:37.204 "strip_size_kb": 64, 00:31:37.204 "state": "online", 00:31:37.204 "raid_level": "raid5f", 00:31:37.204 "superblock": false, 00:31:37.204 "num_base_bdevs": 3, 00:31:37.204 "num_base_bdevs_discovered": 3, 00:31:37.204 "num_base_bdevs_operational": 3, 00:31:37.204 "process": { 00:31:37.204 "type": "rebuild", 00:31:37.204 "target": "spare", 00:31:37.204 "progress": { 00:31:37.204 "blocks": 112640, 00:31:37.204 "percent": 85 00:31:37.204 } 00:31:37.204 }, 00:31:37.204 "base_bdevs_list": [ 00:31:37.204 { 00:31:37.204 "name": "spare", 00:31:37.204 "uuid": "3afcf5ea-a330-5f7a-baae-3a4b089132b1", 00:31:37.204 "is_configured": true, 00:31:37.204 "data_offset": 0, 00:31:37.204 "data_size": 65536 00:31:37.204 }, 00:31:37.204 { 00:31:37.204 "name": "BaseBdev2", 00:31:37.204 "uuid": "8d4dc593-9220-5756-ad0d-c077eae969ee", 00:31:37.204 "is_configured": true, 00:31:37.204 "data_offset": 0, 00:31:37.204 "data_size": 65536 00:31:37.204 }, 00:31:37.204 { 00:31:37.204 "name": "BaseBdev3", 00:31:37.204 "uuid": "56ec6a72-e525-524f-b0dd-17ee7e8ceda4", 00:31:37.204 "is_configured": true, 00:31:37.204 "data_offset": 0, 00:31:37.204 "data_size": 65536 00:31:37.204 } 00:31:37.204 ] 00:31:37.204 }' 00:31:37.204 07:42:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:37.204 07:42:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:37.204 07:42:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:37.204 07:42:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:37.204 07:42:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:38.139 [2024-07-12 07:42:11.806442] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:38.139 [2024-07-12 07:42:11.806499] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:38.139 [2024-07-12 07:42:11.806586] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:38.139 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:38.398 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:38.398 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:38.398 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:38.398 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:38.398 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:38.398 07:42:12 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:38.398 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:38.398 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:38.398 "name": "raid_bdev1", 00:31:38.398 "uuid": "5a289060-efe8-4255-98e5-404930c16c72", 00:31:38.398 "strip_size_kb": 64, 00:31:38.398 "state": "online", 00:31:38.398 "raid_level": "raid5f", 00:31:38.398 "superblock": false, 00:31:38.398 "num_base_bdevs": 3, 00:31:38.398 "num_base_bdevs_discovered": 3, 00:31:38.398 "num_base_bdevs_operational": 3, 00:31:38.398 "base_bdevs_list": [ 00:31:38.398 { 00:31:38.398 "name": "spare", 00:31:38.398 "uuid": "3afcf5ea-a330-5f7a-baae-3a4b089132b1", 00:31:38.398 "is_configured": true, 00:31:38.398 "data_offset": 0, 00:31:38.398 "data_size": 65536 00:31:38.398 }, 00:31:38.398 { 00:31:38.398 "name": "BaseBdev2", 00:31:38.398 "uuid": "8d4dc593-9220-5756-ad0d-c077eae969ee", 00:31:38.398 "is_configured": true, 00:31:38.398 "data_offset": 0, 00:31:38.398 "data_size": 65536 00:31:38.398 }, 00:31:38.398 { 00:31:38.398 "name": "BaseBdev3", 00:31:38.398 "uuid": "56ec6a72-e525-524f-b0dd-17ee7e8ceda4", 00:31:38.398 "is_configured": true, 00:31:38.398 "data_offset": 0, 00:31:38.398 "data_size": 65536 00:31:38.398 } 00:31:38.398 ] 00:31:38.398 }' 00:31:38.398 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:38.672 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:38.672 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:38.672 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:38.672 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:31:38.672 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:38.672 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:38.672 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:38.672 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:38.672 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:38.672 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:38.672 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:38.930 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:38.930 "name": "raid_bdev1", 00:31:38.930 "uuid": "5a289060-efe8-4255-98e5-404930c16c72", 00:31:38.930 "strip_size_kb": 64, 00:31:38.930 "state": "online", 00:31:38.930 "raid_level": "raid5f", 00:31:38.930 "superblock": false, 00:31:38.930 "num_base_bdevs": 3, 00:31:38.930 "num_base_bdevs_discovered": 3, 00:31:38.930 "num_base_bdevs_operational": 3, 00:31:38.930 "base_bdevs_list": [ 00:31:38.930 { 00:31:38.930 "name": "spare", 00:31:38.930 "uuid": "3afcf5ea-a330-5f7a-baae-3a4b089132b1", 00:31:38.930 "is_configured": true, 00:31:38.930 "data_offset": 0, 00:31:38.930 "data_size": 65536 00:31:38.930 }, 
00:31:38.930 { 00:31:38.930 "name": "BaseBdev2", 00:31:38.930 "uuid": "8d4dc593-9220-5756-ad0d-c077eae969ee", 00:31:38.930 "is_configured": true, 00:31:38.930 "data_offset": 0, 00:31:38.930 "data_size": 65536 00:31:38.930 }, 00:31:38.930 { 00:31:38.930 "name": "BaseBdev3", 00:31:38.930 "uuid": "56ec6a72-e525-524f-b0dd-17ee7e8ceda4", 00:31:38.930 "is_configured": true, 00:31:38.930 "data_offset": 0, 00:31:38.930 "data_size": 65536 00:31:38.930 } 00:31:38.930 ] 00:31:38.930 }' 00:31:38.930 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:38.930 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:38.930 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:38.930 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:38.930 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:38.930 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:38.930 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:38.930 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:38.930 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:38.930 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:38.930 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:38.930 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:38.930 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:38.930 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:38.930 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:38.930 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:39.189 07:42:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:39.189 "name": "raid_bdev1", 00:31:39.189 "uuid": "5a289060-efe8-4255-98e5-404930c16c72", 00:31:39.189 "strip_size_kb": 64, 00:31:39.189 "state": "online", 00:31:39.189 "raid_level": "raid5f", 00:31:39.189 "superblock": false, 00:31:39.189 "num_base_bdevs": 3, 00:31:39.189 "num_base_bdevs_discovered": 3, 00:31:39.189 "num_base_bdevs_operational": 3, 00:31:39.189 "base_bdevs_list": [ 00:31:39.189 { 00:31:39.189 "name": "spare", 00:31:39.189 "uuid": "3afcf5ea-a330-5f7a-baae-3a4b089132b1", 00:31:39.189 "is_configured": true, 00:31:39.189 "data_offset": 0, 00:31:39.189 "data_size": 65536 00:31:39.189 }, 00:31:39.189 { 00:31:39.189 "name": "BaseBdev2", 00:31:39.189 "uuid": "8d4dc593-9220-5756-ad0d-c077eae969ee", 00:31:39.189 "is_configured": true, 00:31:39.189 "data_offset": 0, 00:31:39.189 "data_size": 65536 00:31:39.189 }, 00:31:39.189 { 00:31:39.189 "name": "BaseBdev3", 00:31:39.189 "uuid": "56ec6a72-e525-524f-b0dd-17ee7e8ceda4", 00:31:39.189 "is_configured": true, 00:31:39.189 "data_offset": 0, 00:31:39.189 "data_size": 65536 00:31:39.189 } 00:31:39.189 ] 00:31:39.189 }' 00:31:39.189 07:42:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:39.189 07:42:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.756 07:42:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:39.756 [2024-07-12 07:42:13.597571] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:39.756 [2024-07-12 07:42:13.597600] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:39.756 [2024-07-12 07:42:13.597702] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:39.756 [2024-07-12 07:42:13.597789] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:39.756 [2024-07-12 07:42:13.597798] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:31:39.756 07:42:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:39.756 07:42:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:31:40.015 07:42:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:31:40.015 07:42:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:31:40.015 07:42:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:31:40.015 07:42:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:40.015 07:42:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:40.015 07:42:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:40.015 07:42:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:40.015 07:42:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:40.015 07:42:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:40.015 07:42:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:31:40.015 07:42:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:40.015 07:42:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:40.015 07:42:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:40.274 /dev/nbd0 00:31:40.274 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:40.274 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:40.274 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:31:40.274 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:31:40.274 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:31:40.274 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:31:40.274 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:31:40.274 07:42:14 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:31:40.274 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:31:40.274 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:31:40.274 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:40.274 1+0 records in 00:31:40.274 1+0 records out 00:31:40.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208893 s, 19.6 MB/s 00:31:40.274 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:40.274 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:31:40.274 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:40.274 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:31:40.274 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:31:40.274 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:40.274 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:40.274 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:31:40.532 /dev/nbd1 00:31:40.532 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:40.532 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:40.532 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:31:40.532 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:31:40.532 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:31:40.532 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:31:40.532 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:31:40.532 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:31:40.532 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:31:40.532 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:31:40.532 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:40.532 1+0 records in 00:31:40.532 1+0 records out 00:31:40.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313651 s, 13.1 MB/s 00:31:40.532 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:40.532 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:31:40.532 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:40.532 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:31:40.532 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 
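[Editor's note] The waitfornbd helper traced in the two blocks above polls /proc/partitions for the new NBD device and then proves it is readable with a single direct-I/O dd before returning. A minimal standalone sketch of that check follows; the function name, scratch-file path, and retry interval are illustrative assumptions, not values taken from the harness:

    # Sketch only: wait until an NBD device appears, then confirm a 4 KiB direct read works.
    waitfornbd_sketch() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            # Device is registered once it shows up as a whole word in /proc/partitions.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One direct-I/O read proves the kernel can actually service the device.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        # Non-zero output size means the read returned real data.
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ]
    }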
00:31:40.532 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:40.532 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:40.532 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:31:40.790 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:31:40.790 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:40.790 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:40.790 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:40.790 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:31:40.790 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:40.791 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:41.049 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:41.049 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:41.049 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:41.050 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:41.050 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:41.050 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:41.050 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:41.050 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:41.050 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:41.050 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:41.309 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:41.309 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:41.309 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:41.309 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:41.309 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:41.309 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:41.309 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:41.309 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:41.309 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:31:41.309 07:42:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 162039 00:31:41.309 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@946 -- # '[' -z 162039 ']' 00:31:41.309 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # kill -0 162039 00:31:41.309 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@951 -- # uname 00:31:41.309 07:42:14 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:41.309 07:42:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 162039 00:31:41.309 07:42:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:41.309 07:42:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:41.309 killing process with pid 162039 00:31:41.309 07:42:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 162039' 00:31:41.309 07:42:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@965 -- # kill 162039 00:31:41.309 Received shutdown signal, test time was about 60.000000 seconds 00:31:41.309 00:31:41.309 Latency(us) 00:31:41.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:41.309 =================================================================================================================== 00:31:41.309 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:41.309 [2024-07-12 07:42:15.016590] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:41.309 07:42:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # wait 162039 00:31:41.309 [2024-07-12 07:42:15.055357] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:31:41.568 00:31:41.568 real 0m19.437s 00:31:41.568 user 0m28.794s 00:31:41.568 sys 0m3.251s 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.568 ************************************ 00:31:41.568 END TEST raid5f_rebuild_test 00:31:41.568 ************************************ 00:31:41.568 07:42:15 bdev_raid -- bdev/bdev_raid.sh@891 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:31:41.568 07:42:15 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:31:41.568 07:42:15 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:41.568 07:42:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:41.568 ************************************ 00:31:41.568 START TEST raid5f_rebuild_test_sb 00:31:41.568 ************************************ 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid5f 3 true false true 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=3 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=162570 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 162570 /var/tmp/spdk-raid.sock 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@827 -- # '[' -z 162570 ']' 00:31:41.568 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:41.569 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:41.569 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:41.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
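[Editor's note] The waitforlisten call traced here blocks until the freshly launched bdevperf process answers on its UNIX-domain RPC socket (/var/tmp/spdk-raid.sock, per the -r flag above). A hedged approximation of that wait loop: rpc_get_methods is a standard SPDK RPC, but the retry count and interval below are assumptions, not the harness's actual values:

    # Sketch: poll the SPDK RPC socket until the target process is ready.
    RPC_SOCK=/var/tmp/spdk-raid.sock
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for (( i = 0; i < 100; i++ )); do
        if "$RPC" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
            break   # socket is up and the app is answering RPCs
        fi
        sleep 0.1
    done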
00:31:41.569 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:41.569 07:42:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:41.827 [2024-07-12 07:42:15.455828] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:41.827 [2024-07-12 07:42:15.456280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162570 ] 00:31:41.827 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:41.827 Zero copy mechanism will not be used. 00:31:41.827 [2024-07-12 07:42:15.611031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.827 [2024-07-12 07:42:15.657226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.827 [2024-07-12 07:42:15.700360] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:42.763 07:42:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:42.763 07:42:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # return 0 00:31:42.763 07:42:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:42.763 07:42:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:42.763 BaseBdev1_malloc 00:31:42.763 07:42:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:43.022 [2024-07-12 07:42:16.759241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:43.022 [2024-07-12 07:42:16.759336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:43.023 [2024-07-12 07:42:16.759368] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:31:43.023 [2024-07-12 07:42:16.759412] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:43.023 [2024-07-12 07:42:16.761806] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:43.023 [2024-07-12 07:42:16.761857] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:43.023 BaseBdev1 00:31:43.023 07:42:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:43.023 07:42:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:43.281 BaseBdev2_malloc 00:31:43.281 07:42:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:43.541 [2024-07-12 07:42:17.196078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:43.541 [2024-07-12 07:42:17.196138] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:43.541 [2024-07-12 07:42:17.196169] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:31:43.541 
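[Editor's note] As the bdev_malloc_create / bdev_passthru_create RPCs in this stretch of the log show, each base device for the superblock test is a 32 MiB, 512-byte-block malloc bdev wrapped in a passthru bdev named BaseBdevN. Condensed into a loop for readability (the loop itself is an illustration; the script issues these same calls one bdev at a time):

    # Sketch: build the three passthru-wrapped malloc base bdevs used by the test.
    RPC_SOCK=/var/tmp/spdk-raid.sock
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for n in 1 2 3; do
        "$RPC" -s "$RPC_SOCK" bdev_malloc_create 32 512 -b "BaseBdev${n}_malloc"
        "$RPC" -s "$RPC_SOCK" bdev_passthru_create -b "BaseBdev${n}_malloc" -p "BaseBdev${n}"
    done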
[2024-07-12 07:42:17.196205] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:43.541 [2024-07-12 07:42:17.198273] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:43.541 [2024-07-12 07:42:17.198315] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:43.541 BaseBdev2 00:31:43.541 07:42:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:43.541 07:42:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:43.541 BaseBdev3_malloc 00:31:43.541 07:42:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:31:43.800 [2024-07-12 07:42:17.571950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:31:43.800 [2024-07-12 07:42:17.572004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:43.800 [2024-07-12 07:42:17.572034] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:43.800 [2024-07-12 07:42:17.572068] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:43.800 [2024-07-12 07:42:17.574133] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:43.800 [2024-07-12 07:42:17.574185] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:43.800 BaseBdev3 00:31:43.800 07:42:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:31:44.059 spare_malloc 00:31:44.059 07:42:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:44.059 spare_delay 00:31:44.318 07:42:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:44.318 [2024-07-12 07:42:18.176742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:44.318 [2024-07-12 07:42:18.176813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:44.318 [2024-07-12 07:42:18.176840] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:31:44.318 [2024-07-12 07:42:18.176876] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:44.318 [2024-07-12 07:42:18.179101] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:44.318 [2024-07-12 07:42:18.179154] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:44.318 spare 00:31:44.318 07:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:31:44.577 [2024-07-12 07:42:18.352851] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:44.577 [2024-07-12 07:42:18.354746] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:44.577 [2024-07-12 07:42:18.354805] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:44.577 [2024-07-12 07:42:18.354975] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:31:44.577 [2024-07-12 07:42:18.354985] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:44.578 [2024-07-12 07:42:18.355111] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:31:44.578 [2024-07-12 07:42:18.355726] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:31:44.578 [2024-07-12 07:42:18.355746] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:31:44.578 [2024-07-12 07:42:18.355849] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:44.578 07:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:44.578 07:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:44.578 07:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:44.578 07:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:44.578 07:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:44.578 07:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:44.578 07:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:44.578 07:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:44.578 07:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:44.578 07:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:44.578 07:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:44.578 07:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:44.837 07:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:44.837 "name": "raid_bdev1", 00:31:44.837 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:31:44.837 "strip_size_kb": 64, 00:31:44.837 "state": "online", 00:31:44.837 "raid_level": "raid5f", 00:31:44.837 "superblock": true, 00:31:44.837 "num_base_bdevs": 3, 00:31:44.837 "num_base_bdevs_discovered": 3, 00:31:44.837 "num_base_bdevs_operational": 3, 00:31:44.837 "base_bdevs_list": [ 00:31:44.837 { 00:31:44.837 "name": "BaseBdev1", 00:31:44.837 "uuid": "101b878b-9458-57f5-b0b1-8c9fc53c9f2b", 00:31:44.837 "is_configured": true, 00:31:44.837 "data_offset": 2048, 00:31:44.837 "data_size": 63488 00:31:44.837 }, 00:31:44.837 { 00:31:44.837 "name": "BaseBdev2", 00:31:44.837 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:31:44.837 "is_configured": true, 00:31:44.837 "data_offset": 2048, 00:31:44.837 "data_size": 63488 00:31:44.837 }, 00:31:44.837 { 00:31:44.837 "name": "BaseBdev3", 00:31:44.837 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:31:44.837 "is_configured": true, 00:31:44.837 "data_offset": 2048, 
00:31:44.837 "data_size": 63488 00:31:44.837 } 00:31:44.837 ] 00:31:44.837 }' 00:31:44.837 07:42:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:44.837 07:42:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:45.406 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:45.406 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:31:45.406 [2024-07-12 07:42:19.177597] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:45.406 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=126976 00:31:45.406 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.406 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:45.665 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:31:45.665 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:31:45.665 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:31:45.665 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:31:45.665 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:31:45.665 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:45.665 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:45.665 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:45.665 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:45.665 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:45.665 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:31:45.665 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:45.665 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:45.665 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:45.665 [2024-07-12 07:42:19.529715] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:31:45.925 /dev/nbd0 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:45.925 1+0 records in 00:31:45.925 1+0 records out 00:31:45.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227126 s, 18.0 MB/s 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # write_unit_size=256 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # echo 128 00:31:45.925 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:31:46.184 496+0 records in 00:31:46.184 496+0 records out 00:31:46.184 65011712 bytes (65 MB, 62 MiB) copied, 0.371187 s, 175 MB/s 00:31:46.184 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:46.184 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:46.184 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:46.184 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:46.184 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:31:46.184 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:46.184 07:42:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:46.444 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:46.444 [2024-07-12 07:42:20.244528] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:46.444 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:46.444 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:46.444 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:46.444 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:31:46.444 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:46.444 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:31:46.444 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:31:46.444 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:46.702 [2024-07-12 07:42:20.500309] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:46.702 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:46.702 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:46.702 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:46.702 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:46.702 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:46.702 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:46.702 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:46.702 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:46.702 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:46.702 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:46.702 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:46.702 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:46.962 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:46.962 "name": "raid_bdev1", 00:31:46.962 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:31:46.962 "strip_size_kb": 64, 00:31:46.962 "state": "online", 00:31:46.962 "raid_level": "raid5f", 00:31:46.962 "superblock": true, 00:31:46.962 "num_base_bdevs": 3, 00:31:46.962 "num_base_bdevs_discovered": 2, 00:31:46.962 "num_base_bdevs_operational": 2, 00:31:46.962 "base_bdevs_list": [ 00:31:46.962 { 00:31:46.962 "name": null, 00:31:46.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:46.962 "is_configured": false, 00:31:46.962 "data_offset": 2048, 00:31:46.962 "data_size": 63488 00:31:46.962 }, 00:31:46.962 { 00:31:46.962 "name": "BaseBdev2", 00:31:46.962 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:31:46.962 "is_configured": true, 00:31:46.962 "data_offset": 2048, 00:31:46.962 "data_size": 63488 00:31:46.962 }, 00:31:46.962 { 00:31:46.962 "name": "BaseBdev3", 00:31:46.962 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:31:46.962 "is_configured": true, 00:31:46.962 "data_offset": 2048, 00:31:46.962 "data_size": 63488 00:31:46.962 } 00:31:46.962 ] 00:31:46.962 }' 00:31:46.962 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:46.962 07:42:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:47.531 07:42:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:47.789 [2024-07-12 07:42:21.460468] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:47.789 [2024-07-12 07:42:21.464151] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000025500 00:31:47.789 [2024-07-12 07:42:21.466454] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:47.789 07:42:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:31:48.726 07:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:48.726 07:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:48.726 07:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:48.726 07:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:48.726 07:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:48.726 07:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:48.726 07:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:48.985 07:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:48.985 "name": "raid_bdev1", 00:31:48.985 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:31:48.985 "strip_size_kb": 64, 00:31:48.985 "state": "online", 00:31:48.985 "raid_level": "raid5f", 00:31:48.985 "superblock": true, 00:31:48.985 "num_base_bdevs": 3, 00:31:48.985 "num_base_bdevs_discovered": 3, 00:31:48.985 "num_base_bdevs_operational": 3, 00:31:48.985 "process": { 00:31:48.985 "type": "rebuild", 00:31:48.985 "target": "spare", 00:31:48.985 "progress": { 00:31:48.985 "blocks": 24576, 00:31:48.985 "percent": 19 00:31:48.985 } 00:31:48.985 }, 00:31:48.985 "base_bdevs_list": [ 00:31:48.985 { 00:31:48.985 "name": "spare", 00:31:48.985 "uuid": "a4e65d5c-e875-5db8-8dc8-91f154be3a5a", 00:31:48.985 "is_configured": true, 00:31:48.985 "data_offset": 2048, 00:31:48.985 "data_size": 63488 00:31:48.985 }, 00:31:48.985 { 00:31:48.985 "name": "BaseBdev2", 00:31:48.985 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:31:48.985 "is_configured": true, 00:31:48.985 "data_offset": 2048, 00:31:48.985 "data_size": 63488 00:31:48.985 }, 00:31:48.985 { 00:31:48.985 "name": "BaseBdev3", 00:31:48.985 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:31:48.985 "is_configured": true, 00:31:48.985 "data_offset": 2048, 00:31:48.985 "data_size": 63488 00:31:48.985 } 00:31:48.985 ] 00:31:48.985 }' 00:31:48.985 07:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:48.985 07:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:48.985 07:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:48.986 07:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:48.986 07:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:49.245 
[2024-07-12 07:42:22.955666] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:49.245 [2024-07-12 07:42:22.977885] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:49.245 [2024-07-12 07:42:22.977948] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:49.245 [2024-07-12 07:42:22.977961] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:49.245 [2024-07-12 07:42:22.977968] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:49.245 07:42:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:49.245 07:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:49.245 07:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:49.245 07:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:49.245 07:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:49.245 07:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:49.245 07:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:49.245 07:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:49.245 07:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:49.245 07:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:49.245 07:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:49.245 07:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:49.503 07:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:49.503 "name": "raid_bdev1", 00:31:49.503 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:31:49.503 "strip_size_kb": 64, 00:31:49.503 "state": "online", 00:31:49.503 "raid_level": "raid5f", 00:31:49.503 "superblock": true, 00:31:49.503 "num_base_bdevs": 3, 00:31:49.503 "num_base_bdevs_discovered": 2, 00:31:49.503 "num_base_bdevs_operational": 2, 00:31:49.503 "base_bdevs_list": [ 00:31:49.503 { 00:31:49.503 "name": null, 00:31:49.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:49.503 "is_configured": false, 00:31:49.503 "data_offset": 2048, 00:31:49.503 "data_size": 63488 00:31:49.503 }, 00:31:49.503 { 00:31:49.503 "name": "BaseBdev2", 00:31:49.503 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:31:49.503 "is_configured": true, 00:31:49.503 "data_offset": 2048, 00:31:49.503 "data_size": 63488 00:31:49.503 }, 00:31:49.503 { 00:31:49.503 "name": "BaseBdev3", 00:31:49.503 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:31:49.503 "is_configured": true, 00:31:49.503 "data_offset": 2048, 00:31:49.503 "data_size": 63488 00:31:49.503 } 00:31:49.503 ] 00:31:49.503 }' 00:31:49.503 07:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:49.503 07:42:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.068 07:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:31:50.068 07:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:50.068 07:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:50.068 07:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:50.068 07:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:50.068 07:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:50.068 07:42:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:50.327 07:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:50.327 "name": "raid_bdev1", 00:31:50.327 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:31:50.327 "strip_size_kb": 64, 00:31:50.327 "state": "online", 00:31:50.327 "raid_level": "raid5f", 00:31:50.327 "superblock": true, 00:31:50.327 "num_base_bdevs": 3, 00:31:50.327 "num_base_bdevs_discovered": 2, 00:31:50.327 "num_base_bdevs_operational": 2, 00:31:50.327 "base_bdevs_list": [ 00:31:50.327 { 00:31:50.327 "name": null, 00:31:50.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:50.327 "is_configured": false, 00:31:50.327 "data_offset": 2048, 00:31:50.327 "data_size": 63488 00:31:50.327 }, 00:31:50.327 { 00:31:50.327 "name": "BaseBdev2", 00:31:50.327 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:31:50.327 "is_configured": true, 00:31:50.327 "data_offset": 2048, 00:31:50.327 "data_size": 63488 00:31:50.327 }, 00:31:50.327 { 00:31:50.327 "name": "BaseBdev3", 00:31:50.327 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:31:50.327 "is_configured": true, 00:31:50.327 "data_offset": 2048, 00:31:50.327 "data_size": 63488 00:31:50.327 } 00:31:50.327 ] 00:31:50.327 }' 00:31:50.327 07:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:50.327 07:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:50.327 07:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:50.327 07:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:50.327 07:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:50.586 [2024-07-12 07:42:24.347767] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:50.586 [2024-07-12 07:42:24.351372] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000256a0 00:31:50.586 [2024-07-12 07:42:24.353513] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:50.586 07:42:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:51.522 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:51.522 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:51.522 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:51.522 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:31:51.522 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:51.522 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:51.522 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:51.781 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:51.781 "name": "raid_bdev1", 00:31:51.781 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:31:51.781 "strip_size_kb": 64, 00:31:51.781 "state": "online", 00:31:51.781 "raid_level": "raid5f", 00:31:51.781 "superblock": true, 00:31:51.781 "num_base_bdevs": 3, 00:31:51.781 "num_base_bdevs_discovered": 3, 00:31:51.781 "num_base_bdevs_operational": 3, 00:31:51.781 "process": { 00:31:51.781 "type": "rebuild", 00:31:51.781 "target": "spare", 00:31:51.781 "progress": { 00:31:51.781 "blocks": 24576, 00:31:51.781 "percent": 19 00:31:51.781 } 00:31:51.781 }, 00:31:51.781 "base_bdevs_list": [ 00:31:51.781 { 00:31:51.781 "name": "spare", 00:31:51.781 "uuid": "a4e65d5c-e875-5db8-8dc8-91f154be3a5a", 00:31:51.781 "is_configured": true, 00:31:51.781 "data_offset": 2048, 00:31:51.781 "data_size": 63488 00:31:51.781 }, 00:31:51.781 { 00:31:51.781 "name": "BaseBdev2", 00:31:51.781 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:31:51.781 "is_configured": true, 00:31:51.781 "data_offset": 2048, 00:31:51.781 "data_size": 63488 00:31:51.781 }, 00:31:51.781 { 00:31:51.781 "name": "BaseBdev3", 00:31:51.781 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:31:51.781 "is_configured": true, 00:31:51.781 "data_offset": 2048, 00:31:51.781 "data_size": 63488 00:31:51.781 } 00:31:51.781 ] 00:31:51.781 }' 00:31:51.781 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:51.781 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:51.781 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:52.040 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:52.040 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:31:52.040 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:31:52.040 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:31:52.040 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=3 00:31:52.040 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:31:52.040 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=1056 00:31:52.040 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:52.040 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:52.040 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:52.040 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:52.040 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 
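
The shell diagnostic captured above ("line 665: [: =: unary operator expected") is the classic single-bracket pitfall: the traced command ran as '[' = false ']', meaning the variable being tested expanded to the empty string and left test with no left operand. A hedged illustration of the failure mode and the usual fixes (the variable name is hypothetical; the real operand at line 665 of bdev_raid.sh may differ):

flag=""                 # hypothetical stand-in for the empty expansion at line 665
# [ $flag = false ]     # unquoted: becomes [ = false ]    -> "unary operator expected"
[ "$flag" = false ]     # quoted:   becomes [ "" = false ] -> well-formed, returns false
[[ $flag == false ]]    # [[ ]] does not word-split, so no quoting is needed
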
00:31:52.040 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:52.040 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:52.040 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:52.299 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:52.299 "name": "raid_bdev1", 00:31:52.299 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:31:52.299 "strip_size_kb": 64, 00:31:52.299 "state": "online", 00:31:52.299 "raid_level": "raid5f", 00:31:52.299 "superblock": true, 00:31:52.299 "num_base_bdevs": 3, 00:31:52.299 "num_base_bdevs_discovered": 3, 00:31:52.299 "num_base_bdevs_operational": 3, 00:31:52.299 "process": { 00:31:52.299 "type": "rebuild", 00:31:52.299 "target": "spare", 00:31:52.299 "progress": { 00:31:52.299 "blocks": 30720, 00:31:52.299 "percent": 24 00:31:52.299 } 00:31:52.299 }, 00:31:52.299 "base_bdevs_list": [ 00:31:52.299 { 00:31:52.299 "name": "spare", 00:31:52.299 "uuid": "a4e65d5c-e875-5db8-8dc8-91f154be3a5a", 00:31:52.299 "is_configured": true, 00:31:52.299 "data_offset": 2048, 00:31:52.299 "data_size": 63488 00:31:52.299 }, 00:31:52.299 { 00:31:52.299 "name": "BaseBdev2", 00:31:52.299 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:31:52.299 "is_configured": true, 00:31:52.299 "data_offset": 2048, 00:31:52.299 "data_size": 63488 00:31:52.299 }, 00:31:52.299 { 00:31:52.299 "name": "BaseBdev3", 00:31:52.299 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:31:52.299 "is_configured": true, 00:31:52.299 "data_offset": 2048, 00:31:52.299 "data_size": 63488 00:31:52.299 } 00:31:52.299 ] 00:31:52.299 }' 00:31:52.299 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:52.299 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:52.299 07:42:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:52.299 07:42:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:52.299 07:42:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:53.236 07:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:53.236 07:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:53.236 07:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:53.236 07:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:53.236 07:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:53.236 07:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:53.236 07:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:53.236 07:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:53.495 07:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:53.495 "name": "raid_bdev1", 00:31:53.495 "uuid": 
"e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:31:53.495 "strip_size_kb": 64, 00:31:53.495 "state": "online", 00:31:53.495 "raid_level": "raid5f", 00:31:53.495 "superblock": true, 00:31:53.495 "num_base_bdevs": 3, 00:31:53.495 "num_base_bdevs_discovered": 3, 00:31:53.495 "num_base_bdevs_operational": 3, 00:31:53.495 "process": { 00:31:53.495 "type": "rebuild", 00:31:53.495 "target": "spare", 00:31:53.495 "progress": { 00:31:53.495 "blocks": 59392, 00:31:53.495 "percent": 46 00:31:53.495 } 00:31:53.495 }, 00:31:53.495 "base_bdevs_list": [ 00:31:53.495 { 00:31:53.495 "name": "spare", 00:31:53.495 "uuid": "a4e65d5c-e875-5db8-8dc8-91f154be3a5a", 00:31:53.495 "is_configured": true, 00:31:53.495 "data_offset": 2048, 00:31:53.495 "data_size": 63488 00:31:53.495 }, 00:31:53.495 { 00:31:53.495 "name": "BaseBdev2", 00:31:53.495 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:31:53.495 "is_configured": true, 00:31:53.495 "data_offset": 2048, 00:31:53.495 "data_size": 63488 00:31:53.495 }, 00:31:53.495 { 00:31:53.495 "name": "BaseBdev3", 00:31:53.495 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:31:53.495 "is_configured": true, 00:31:53.495 "data_offset": 2048, 00:31:53.495 "data_size": 63488 00:31:53.495 } 00:31:53.495 ] 00:31:53.495 }' 00:31:53.495 07:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:53.495 07:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:53.495 07:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:53.754 07:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:53.754 07:42:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:54.759 07:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:54.759 07:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:54.759 07:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:54.759 07:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:54.759 07:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:54.759 07:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:54.759 07:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:54.759 07:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:54.759 07:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:54.759 "name": "raid_bdev1", 00:31:54.759 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:31:54.759 "strip_size_kb": 64, 00:31:54.759 "state": "online", 00:31:54.759 "raid_level": "raid5f", 00:31:54.759 "superblock": true, 00:31:54.759 "num_base_bdevs": 3, 00:31:54.759 "num_base_bdevs_discovered": 3, 00:31:54.759 "num_base_bdevs_operational": 3, 00:31:54.759 "process": { 00:31:54.759 "type": "rebuild", 00:31:54.759 "target": "spare", 00:31:54.759 "progress": { 00:31:54.759 "blocks": 86016, 00:31:54.759 "percent": 67 00:31:54.759 } 00:31:54.759 }, 00:31:54.759 "base_bdevs_list": [ 00:31:54.759 { 00:31:54.759 "name": "spare", 
00:31:54.759 "uuid": "a4e65d5c-e875-5db8-8dc8-91f154be3a5a", 00:31:54.759 "is_configured": true, 00:31:54.759 "data_offset": 2048, 00:31:54.759 "data_size": 63488 00:31:54.759 }, 00:31:54.759 { 00:31:54.759 "name": "BaseBdev2", 00:31:54.759 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:31:54.759 "is_configured": true, 00:31:54.759 "data_offset": 2048, 00:31:54.759 "data_size": 63488 00:31:54.759 }, 00:31:54.759 { 00:31:54.759 "name": "BaseBdev3", 00:31:54.759 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:31:54.759 "is_configured": true, 00:31:54.759 "data_offset": 2048, 00:31:54.759 "data_size": 63488 00:31:54.759 } 00:31:54.759 ] 00:31:54.759 }' 00:31:54.759 07:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:55.018 07:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:55.018 07:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:55.018 07:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:55.018 07:42:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:55.953 07:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:55.953 07:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:55.953 07:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:55.953 07:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:55.953 07:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:55.953 07:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:55.953 07:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:55.953 07:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:56.212 07:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:56.212 "name": "raid_bdev1", 00:31:56.212 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:31:56.212 "strip_size_kb": 64, 00:31:56.212 "state": "online", 00:31:56.212 "raid_level": "raid5f", 00:31:56.212 "superblock": true, 00:31:56.212 "num_base_bdevs": 3, 00:31:56.212 "num_base_bdevs_discovered": 3, 00:31:56.212 "num_base_bdevs_operational": 3, 00:31:56.212 "process": { 00:31:56.212 "type": "rebuild", 00:31:56.212 "target": "spare", 00:31:56.212 "progress": { 00:31:56.212 "blocks": 112640, 00:31:56.212 "percent": 88 00:31:56.212 } 00:31:56.212 }, 00:31:56.212 "base_bdevs_list": [ 00:31:56.212 { 00:31:56.213 "name": "spare", 00:31:56.213 "uuid": "a4e65d5c-e875-5db8-8dc8-91f154be3a5a", 00:31:56.213 "is_configured": true, 00:31:56.213 "data_offset": 2048, 00:31:56.213 "data_size": 63488 00:31:56.213 }, 00:31:56.213 { 00:31:56.213 "name": "BaseBdev2", 00:31:56.213 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:31:56.213 "is_configured": true, 00:31:56.213 "data_offset": 2048, 00:31:56.213 "data_size": 63488 00:31:56.213 }, 00:31:56.213 { 00:31:56.213 "name": "BaseBdev3", 00:31:56.213 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:31:56.213 "is_configured": true, 00:31:56.213 "data_offset": 2048, 
00:31:56.213 "data_size": 63488 00:31:56.213 } 00:31:56.213 ] 00:31:56.213 }' 00:31:56.213 07:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:56.213 07:42:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:56.213 07:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:56.213 07:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:56.213 07:42:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:56.780 [2024-07-12 07:42:30.598937] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:56.780 [2024-07-12 07:42:30.599017] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:56.780 [2024-07-12 07:42:30.599165] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:57.349 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:57.349 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:57.349 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:57.349 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:57.349 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:57.349 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:57.349 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:57.349 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.607 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:57.607 "name": "raid_bdev1", 00:31:57.607 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:31:57.607 "strip_size_kb": 64, 00:31:57.607 "state": "online", 00:31:57.607 "raid_level": "raid5f", 00:31:57.607 "superblock": true, 00:31:57.607 "num_base_bdevs": 3, 00:31:57.607 "num_base_bdevs_discovered": 3, 00:31:57.607 "num_base_bdevs_operational": 3, 00:31:57.607 "base_bdevs_list": [ 00:31:57.607 { 00:31:57.607 "name": "spare", 00:31:57.607 "uuid": "a4e65d5c-e875-5db8-8dc8-91f154be3a5a", 00:31:57.607 "is_configured": true, 00:31:57.607 "data_offset": 2048, 00:31:57.607 "data_size": 63488 00:31:57.607 }, 00:31:57.607 { 00:31:57.607 "name": "BaseBdev2", 00:31:57.607 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:31:57.607 "is_configured": true, 00:31:57.607 "data_offset": 2048, 00:31:57.607 "data_size": 63488 00:31:57.607 }, 00:31:57.607 { 00:31:57.607 "name": "BaseBdev3", 00:31:57.607 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:31:57.607 "is_configured": true, 00:31:57.607 "data_offset": 2048, 00:31:57.607 "data_size": 63488 00:31:57.607 } 00:31:57.607 ] 00:31:57.607 }' 00:31:57.607 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:57.607 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:57.607 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:31:57.607 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:57.607 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:31:57.607 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:57.607 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:57.607 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:57.607 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:57.607 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:57.607 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:57.607 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.864 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:57.864 "name": "raid_bdev1", 00:31:57.864 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:31:57.864 "strip_size_kb": 64, 00:31:57.864 "state": "online", 00:31:57.864 "raid_level": "raid5f", 00:31:57.864 "superblock": true, 00:31:57.864 "num_base_bdevs": 3, 00:31:57.864 "num_base_bdevs_discovered": 3, 00:31:57.864 "num_base_bdevs_operational": 3, 00:31:57.864 "base_bdevs_list": [ 00:31:57.864 { 00:31:57.864 "name": "spare", 00:31:57.864 "uuid": "a4e65d5c-e875-5db8-8dc8-91f154be3a5a", 00:31:57.864 "is_configured": true, 00:31:57.864 "data_offset": 2048, 00:31:57.864 "data_size": 63488 00:31:57.864 }, 00:31:57.864 { 00:31:57.864 "name": "BaseBdev2", 00:31:57.864 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:31:57.864 "is_configured": true, 00:31:57.864 "data_offset": 2048, 00:31:57.864 "data_size": 63488 00:31:57.864 }, 00:31:57.864 { 00:31:57.864 "name": "BaseBdev3", 00:31:57.864 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:31:57.864 "is_configured": true, 00:31:57.864 "data_offset": 2048, 00:31:57.864 "data_size": 63488 00:31:57.864 } 00:31:57.864 ] 00:31:57.864 }' 00:31:57.864 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:57.864 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:57.864 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:57.864 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:57.864 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:57.864 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:57.864 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:57.864 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:57.864 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:57.864 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:57.864 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:31:57.864 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:57.864 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:57.864 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:57.864 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:57.864 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:58.122 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:58.122 "name": "raid_bdev1", 00:31:58.122 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:31:58.122 "strip_size_kb": 64, 00:31:58.122 "state": "online", 00:31:58.122 "raid_level": "raid5f", 00:31:58.122 "superblock": true, 00:31:58.122 "num_base_bdevs": 3, 00:31:58.122 "num_base_bdevs_discovered": 3, 00:31:58.122 "num_base_bdevs_operational": 3, 00:31:58.122 "base_bdevs_list": [ 00:31:58.122 { 00:31:58.122 "name": "spare", 00:31:58.122 "uuid": "a4e65d5c-e875-5db8-8dc8-91f154be3a5a", 00:31:58.122 "is_configured": true, 00:31:58.122 "data_offset": 2048, 00:31:58.122 "data_size": 63488 00:31:58.122 }, 00:31:58.122 { 00:31:58.122 "name": "BaseBdev2", 00:31:58.122 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:31:58.122 "is_configured": true, 00:31:58.122 "data_offset": 2048, 00:31:58.122 "data_size": 63488 00:31:58.122 }, 00:31:58.122 { 00:31:58.122 "name": "BaseBdev3", 00:31:58.122 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:31:58.122 "is_configured": true, 00:31:58.122 "data_offset": 2048, 00:31:58.122 "data_size": 63488 00:31:58.122 } 00:31:58.122 ] 00:31:58.122 }' 00:31:58.122 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:58.122 07:42:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:58.689 07:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:58.947 [2024-07-12 07:42:32.620482] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:58.947 [2024-07-12 07:42:32.620511] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:58.947 [2024-07-12 07:42:32.620607] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:58.947 [2024-07-12 07:42:32.620694] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:58.947 [2024-07-12 07:42:32.620703] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:31:58.947 07:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:58.947 07:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:31:58.947 07:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:31:58.947 07:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:31:58.947 07:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:31:58.947 07:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- 
# nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:58.947 07:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:58.947 07:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:58.947 07:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:58.947 07:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:58.947 07:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:58.947 07:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:31:58.947 07:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:58.947 07:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:58.947 07:42:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:59.206 /dev/nbd0 00:31:59.206 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:59.206 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:59.207 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:31:59.207 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:31:59.207 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:31:59.207 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:31:59.207 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:31:59.207 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:31:59.207 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:31:59.207 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:31:59.207 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:59.207 1+0 records in 00:31:59.207 1+0 records out 00:31:59.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603271 s, 6.8 MB/s 00:31:59.207 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:59.207 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:31:59.207 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:59.207 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:31:59.207 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:31:59.207 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:59.207 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:59.207 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:31:59.465 /dev/nbd1 
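
The waitfornbd steps traced above gate the upcoming data comparison: the helper loops until the kernel lists the new device in /proc/partitions, then proves it answers I/O with a single 4 KiB direct-mode read. A condensed sketch of that probe (retry count, sleep, and scratch-file path are illustrative; the helper's exact timing may differ):

nbd=nbd0
for _ in $(seq 1 20); do                        # bounded wait for the device node
    grep -q -w "$nbd" /proc/partitions && break
    sleep 0.1
done
# One direct-I/O block read confirms the nbd device is actually serving data.
dd if=/dev/$nbd of=/tmp/nbdtest bs=4096 count=1 iflag=direct
[ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ]
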
00:31:59.465 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:59.465 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:59.465 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:31:59.465 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:31:59.465 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:31:59.465 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:31:59.465 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:31:59.465 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:31:59.465 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:31:59.465 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:31:59.465 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:59.465 1+0 records in 00:31:59.465 1+0 records out 00:31:59.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330547 s, 12.4 MB/s 00:31:59.465 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:59.465 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:31:59.465 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:59.724 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:31:59.724 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:31:59.724 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:59.724 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:59.724 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:59.724 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:31:59.724 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:59.724 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:59.724 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:59.724 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:31:59.724 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:59.724 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:59.982 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:59.982 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:59.982 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:59.982 07:42:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:59.982 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:59.983 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:59.983 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:31:59.983 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:31:59.983 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:59.983 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:32:00.241 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:00.241 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:00.241 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:00.241 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:00.241 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:00.241 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:00.241 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:00.241 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:00.241 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:32:00.241 07:42:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:00.241 07:42:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:00.500 [2024-07-12 07:42:34.301358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:00.500 [2024-07-12 07:42:34.301467] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:00.500 [2024-07-12 07:42:34.301513] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:32:00.500 [2024-07-12 07:42:34.301535] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:00.500 [2024-07-12 07:42:34.304320] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:00.500 [2024-07-12 07:42:34.304374] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:00.500 [2024-07-12 07:42:34.304503] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:00.500 [2024-07-12 07:42:34.304586] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:00.500 [2024-07-12 07:42:34.304774] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:00.500 [2024-07-12 07:42:34.304910] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:00.500 spare 00:32:00.500 07:42:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:00.500 07:42:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:32:00.500 07:42:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:00.500 07:42:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:00.500 07:42:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:00.500 07:42:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:00.500 07:42:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:00.500 07:42:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:00.500 07:42:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:00.500 07:42:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:00.500 07:42:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:00.500 07:42:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:00.758 [2024-07-12 07:42:34.405001] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:32:00.758 [2024-07-12 07:42:34.405027] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:00.758 [2024-07-12 07:42:34.405212] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043fc0 00:32:00.758 [2024-07-12 07:42:34.405990] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:32:00.758 [2024-07-12 07:42:34.406003] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:32:00.758 [2024-07-12 07:42:34.406206] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:00.758 07:42:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:00.758 "name": "raid_bdev1", 00:32:00.758 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:32:00.758 "strip_size_kb": 64, 00:32:00.758 "state": "online", 00:32:00.758 "raid_level": "raid5f", 00:32:00.758 "superblock": true, 00:32:00.758 "num_base_bdevs": 3, 00:32:00.758 "num_base_bdevs_discovered": 3, 00:32:00.758 "num_base_bdevs_operational": 3, 00:32:00.758 "base_bdevs_list": [ 00:32:00.758 { 00:32:00.758 "name": "spare", 00:32:00.758 "uuid": "a4e65d5c-e875-5db8-8dc8-91f154be3a5a", 00:32:00.758 "is_configured": true, 00:32:00.758 "data_offset": 2048, 00:32:00.758 "data_size": 63488 00:32:00.758 }, 00:32:00.758 { 00:32:00.758 "name": "BaseBdev2", 00:32:00.758 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:32:00.758 "is_configured": true, 00:32:00.758 "data_offset": 2048, 00:32:00.758 "data_size": 63488 00:32:00.758 }, 00:32:00.758 { 00:32:00.758 "name": "BaseBdev3", 00:32:00.758 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:32:00.758 "is_configured": true, 00:32:00.758 "data_offset": 2048, 00:32:00.758 "data_size": 63488 00:32:00.758 } 00:32:00.758 ] 00:32:00.758 }' 00:32:00.758 07:42:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:00.758 07:42:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:01.323 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:01.323 07:42:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:01.323 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:01.323 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:01.323 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:01.323 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:01.323 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:01.582 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:01.582 "name": "raid_bdev1", 00:32:01.582 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:32:01.582 "strip_size_kb": 64, 00:32:01.582 "state": "online", 00:32:01.582 "raid_level": "raid5f", 00:32:01.582 "superblock": true, 00:32:01.582 "num_base_bdevs": 3, 00:32:01.582 "num_base_bdevs_discovered": 3, 00:32:01.582 "num_base_bdevs_operational": 3, 00:32:01.582 "base_bdevs_list": [ 00:32:01.582 { 00:32:01.582 "name": "spare", 00:32:01.582 "uuid": "a4e65d5c-e875-5db8-8dc8-91f154be3a5a", 00:32:01.582 "is_configured": true, 00:32:01.582 "data_offset": 2048, 00:32:01.582 "data_size": 63488 00:32:01.582 }, 00:32:01.582 { 00:32:01.582 "name": "BaseBdev2", 00:32:01.582 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:32:01.582 "is_configured": true, 00:32:01.582 "data_offset": 2048, 00:32:01.582 "data_size": 63488 00:32:01.582 }, 00:32:01.582 { 00:32:01.582 "name": "BaseBdev3", 00:32:01.582 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:32:01.582 "is_configured": true, 00:32:01.582 "data_offset": 2048, 00:32:01.582 "data_size": 63488 00:32:01.582 } 00:32:01.582 ] 00:32:01.582 }' 00:32:01.582 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:01.582 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:01.582 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:01.582 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:01.582 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:01.582 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:01.841 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:32:01.841 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:02.100 [2024-07-12 07:42:35.750419] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:02.100 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:02.100 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:02.100 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:02.100 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid5f 00:32:02.100 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:02.100 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:02.100 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:02.100 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:02.100 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:02.100 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:02.100 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:02.100 07:42:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:02.358 07:42:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:02.358 "name": "raid_bdev1", 00:32:02.358 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:32:02.358 "strip_size_kb": 64, 00:32:02.358 "state": "online", 00:32:02.358 "raid_level": "raid5f", 00:32:02.358 "superblock": true, 00:32:02.358 "num_base_bdevs": 3, 00:32:02.358 "num_base_bdevs_discovered": 2, 00:32:02.358 "num_base_bdevs_operational": 2, 00:32:02.358 "base_bdevs_list": [ 00:32:02.358 { 00:32:02.358 "name": null, 00:32:02.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:02.358 "is_configured": false, 00:32:02.358 "data_offset": 2048, 00:32:02.358 "data_size": 63488 00:32:02.358 }, 00:32:02.358 { 00:32:02.358 "name": "BaseBdev2", 00:32:02.358 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:32:02.358 "is_configured": true, 00:32:02.358 "data_offset": 2048, 00:32:02.358 "data_size": 63488 00:32:02.358 }, 00:32:02.358 { 00:32:02.358 "name": "BaseBdev3", 00:32:02.358 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:32:02.358 "is_configured": true, 00:32:02.358 "data_offset": 2048, 00:32:02.358 "data_size": 63488 00:32:02.358 } 00:32:02.358 ] 00:32:02.358 }' 00:32:02.358 07:42:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:02.358 07:42:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:02.924 07:42:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:02.924 [2024-07-12 07:42:36.730666] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:02.924 [2024-07-12 07:42:36.730934] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:02.924 [2024-07-12 07:42:36.730949] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
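
This pass removed the spare (the array dropped to 2 of 3 base bdevs while staying online) and is now handing it back; because the spare's superblock sequence number is stale, examine re-adds it and schedules a fresh rebuild. The degraded-state assertion shown in the JSON above boils down to a few jq field checks (a sketch reconstructed from the trace, not the verbatim verify_raid_bdev_state helper):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[[ $(jq -r .state      <<< "$info") == online ]]     # degraded but still serving I/O
[[ $(jq -r .raid_level <<< "$info") == raid5f ]]
(( $(jq -r .num_base_bdevs_discovered  <<< "$info") == 2 ))
(( $(jq -r .num_base_bdevs_operational <<< "$info") == 2 ))
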
00:32:02.924 [2024-07-12 07:42:36.731041] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:02.924 [2024-07-12 07:42:36.737940] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000044160 00:32:02.924 [2024-07-12 07:42:36.740741] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:02.924 07:42:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:32:04.299 07:42:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:04.299 07:42:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:04.299 07:42:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:04.299 07:42:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:04.299 07:42:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:04.299 07:42:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:04.299 07:42:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:04.299 07:42:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:04.299 "name": "raid_bdev1", 00:32:04.299 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:32:04.299 "strip_size_kb": 64, 00:32:04.299 "state": "online", 00:32:04.299 "raid_level": "raid5f", 00:32:04.299 "superblock": true, 00:32:04.299 "num_base_bdevs": 3, 00:32:04.299 "num_base_bdevs_discovered": 3, 00:32:04.299 "num_base_bdevs_operational": 3, 00:32:04.299 "process": { 00:32:04.299 "type": "rebuild", 00:32:04.299 "target": "spare", 00:32:04.299 "progress": { 00:32:04.299 "blocks": 24576, 00:32:04.299 "percent": 19 00:32:04.299 } 00:32:04.299 }, 00:32:04.299 "base_bdevs_list": [ 00:32:04.299 { 00:32:04.299 "name": "spare", 00:32:04.299 "uuid": "a4e65d5c-e875-5db8-8dc8-91f154be3a5a", 00:32:04.299 "is_configured": true, 00:32:04.299 "data_offset": 2048, 00:32:04.299 "data_size": 63488 00:32:04.299 }, 00:32:04.299 { 00:32:04.299 "name": "BaseBdev2", 00:32:04.299 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:32:04.299 "is_configured": true, 00:32:04.299 "data_offset": 2048, 00:32:04.299 "data_size": 63488 00:32:04.299 }, 00:32:04.299 { 00:32:04.299 "name": "BaseBdev3", 00:32:04.299 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:32:04.299 "is_configured": true, 00:32:04.299 "data_offset": 2048, 00:32:04.299 "data_size": 63488 00:32:04.299 } 00:32:04.299 ] 00:32:04.299 }' 00:32:04.299 07:42:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:04.299 07:42:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:04.299 07:42:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:04.299 07:42:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:04.299 07:42:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:04.559 [2024-07-12 07:42:38.334332] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:04.559 [2024-07-12 
07:42:38.354413] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:04.559 [2024-07-12 07:42:38.354491] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:04.559 [2024-07-12 07:42:38.354508] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:04.559 [2024-07-12 07:42:38.354517] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:04.559 07:42:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:04.559 07:42:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:04.559 07:42:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:04.559 07:42:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:04.559 07:42:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:04.559 07:42:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:04.559 07:42:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:04.559 07:42:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:04.559 07:42:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:04.559 07:42:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:04.559 07:42:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:04.559 07:42:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:04.818 07:42:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:04.818 "name": "raid_bdev1", 00:32:04.818 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:32:04.818 "strip_size_kb": 64, 00:32:04.818 "state": "online", 00:32:04.818 "raid_level": "raid5f", 00:32:04.818 "superblock": true, 00:32:04.818 "num_base_bdevs": 3, 00:32:04.818 "num_base_bdevs_discovered": 2, 00:32:04.818 "num_base_bdevs_operational": 2, 00:32:04.818 "base_bdevs_list": [ 00:32:04.818 { 00:32:04.818 "name": null, 00:32:04.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:04.818 "is_configured": false, 00:32:04.818 "data_offset": 2048, 00:32:04.818 "data_size": 63488 00:32:04.818 }, 00:32:04.818 { 00:32:04.818 "name": "BaseBdev2", 00:32:04.818 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:32:04.818 "is_configured": true, 00:32:04.818 "data_offset": 2048, 00:32:04.818 "data_size": 63488 00:32:04.818 }, 00:32:04.818 { 00:32:04.818 "name": "BaseBdev3", 00:32:04.818 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:32:04.818 "is_configured": true, 00:32:04.818 "data_offset": 2048, 00:32:04.818 "data_size": 63488 00:32:04.818 } 00:32:04.818 ] 00:32:04.818 }' 00:32:04.819 07:42:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:04.819 07:42:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:05.386 07:42:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:05.644 
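
The bdev_passthru_create call just issued re-arms the delay-injecting spare for the next rebuild round: the passthru bdev named "spare" is recreated on top of spare_delay, examine spots its stale superblock, and the raid re-adds it. The delete/recreate pair, as driven over RPC in the trace (command forms taken from the log; a sketch of the cycle, not the test script itself):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_passthru_delete spare                    # tear the spare out mid-rebuild
$rpc bdev_passthru_create -b spare_delay -p spare  # recreate on the delay bdev; examine re-adds it
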
[2024-07-12 07:42:39.307484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:05.644 [2024-07-12 07:42:39.307598] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:05.644 [2024-07-12 07:42:39.307638] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:32:05.644 [2024-07-12 07:42:39.307674] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:05.644 [2024-07-12 07:42:39.308216] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:05.644 [2024-07-12 07:42:39.308260] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:05.644 [2024-07-12 07:42:39.308384] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:05.644 [2024-07-12 07:42:39.308397] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:05.644 [2024-07-12 07:42:39.308407] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:32:05.644 [2024-07-12 07:42:39.308470] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:05.644 [2024-07-12 07:42:39.315309] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000444a0 00:32:05.644 spare 00:32:05.644 [2024-07-12 07:42:39.317988] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:05.644 07:42:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:32:06.580 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:06.580 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:06.580 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:06.580 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:06.580 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:06.580 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:06.580 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:06.839 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:06.839 "name": "raid_bdev1", 00:32:06.839 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:32:06.839 "strip_size_kb": 64, 00:32:06.839 "state": "online", 00:32:06.839 "raid_level": "raid5f", 00:32:06.839 "superblock": true, 00:32:06.839 "num_base_bdevs": 3, 00:32:06.839 "num_base_bdevs_discovered": 3, 00:32:06.839 "num_base_bdevs_operational": 3, 00:32:06.839 "process": { 00:32:06.839 "type": "rebuild", 00:32:06.839 "target": "spare", 00:32:06.839 "progress": { 00:32:06.839 "blocks": 24576, 00:32:06.839 "percent": 19 00:32:06.839 } 00:32:06.839 }, 00:32:06.839 "base_bdevs_list": [ 00:32:06.839 { 00:32:06.839 "name": "spare", 00:32:06.839 "uuid": "a4e65d5c-e875-5db8-8dc8-91f154be3a5a", 00:32:06.839 "is_configured": true, 00:32:06.839 "data_offset": 2048, 00:32:06.839 "data_size": 63488 00:32:06.839 }, 00:32:06.839 { 00:32:06.839 "name": "BaseBdev2", 00:32:06.839 "uuid": 
"6ce5742f-2911-51e0-b982-9857d3a75349", 00:32:06.839 "is_configured": true, 00:32:06.839 "data_offset": 2048, 00:32:06.839 "data_size": 63488 00:32:06.839 }, 00:32:06.839 { 00:32:06.839 "name": "BaseBdev3", 00:32:06.839 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:32:06.839 "is_configured": true, 00:32:06.839 "data_offset": 2048, 00:32:06.839 "data_size": 63488 00:32:06.839 } 00:32:06.839 ] 00:32:06.839 }' 00:32:06.839 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:06.839 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:06.839 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:06.839 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:06.839 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:07.098 [2024-07-12 07:42:40.904184] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:07.098 [2024-07-12 07:42:40.932310] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:07.098 [2024-07-12 07:42:40.932384] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:07.098 [2024-07-12 07:42:40.932400] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:07.098 [2024-07-12 07:42:40.932408] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:07.098 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:07.098 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:07.098 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:07.098 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:07.098 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:07.098 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:07.098 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:07.098 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:07.098 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:07.098 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:07.098 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:07.098 07:42:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:07.356 07:42:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:07.356 "name": "raid_bdev1", 00:32:07.356 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:32:07.356 "strip_size_kb": 64, 00:32:07.356 "state": "online", 00:32:07.356 "raid_level": "raid5f", 00:32:07.356 "superblock": true, 00:32:07.356 "num_base_bdevs": 3, 00:32:07.356 "num_base_bdevs_discovered": 2, 00:32:07.356 
"num_base_bdevs_operational": 2, 00:32:07.356 "base_bdevs_list": [ 00:32:07.356 { 00:32:07.356 "name": null, 00:32:07.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:07.356 "is_configured": false, 00:32:07.356 "data_offset": 2048, 00:32:07.356 "data_size": 63488 00:32:07.356 }, 00:32:07.356 { 00:32:07.356 "name": "BaseBdev2", 00:32:07.356 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:32:07.356 "is_configured": true, 00:32:07.356 "data_offset": 2048, 00:32:07.356 "data_size": 63488 00:32:07.356 }, 00:32:07.356 { 00:32:07.356 "name": "BaseBdev3", 00:32:07.356 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:32:07.356 "is_configured": true, 00:32:07.356 "data_offset": 2048, 00:32:07.356 "data_size": 63488 00:32:07.356 } 00:32:07.356 ] 00:32:07.356 }' 00:32:07.356 07:42:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:07.356 07:42:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.292 07:42:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:08.292 07:42:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:08.292 07:42:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:08.292 07:42:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:08.292 07:42:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:08.292 07:42:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:08.292 07:42:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:08.292 07:42:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:08.292 "name": "raid_bdev1", 00:32:08.292 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:32:08.292 "strip_size_kb": 64, 00:32:08.292 "state": "online", 00:32:08.292 "raid_level": "raid5f", 00:32:08.292 "superblock": true, 00:32:08.292 "num_base_bdevs": 3, 00:32:08.292 "num_base_bdevs_discovered": 2, 00:32:08.292 "num_base_bdevs_operational": 2, 00:32:08.292 "base_bdevs_list": [ 00:32:08.292 { 00:32:08.292 "name": null, 00:32:08.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:08.292 "is_configured": false, 00:32:08.292 "data_offset": 2048, 00:32:08.292 "data_size": 63488 00:32:08.292 }, 00:32:08.292 { 00:32:08.292 "name": "BaseBdev2", 00:32:08.292 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:32:08.292 "is_configured": true, 00:32:08.292 "data_offset": 2048, 00:32:08.292 "data_size": 63488 00:32:08.292 }, 00:32:08.292 { 00:32:08.292 "name": "BaseBdev3", 00:32:08.292 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:32:08.292 "is_configured": true, 00:32:08.292 "data_offset": 2048, 00:32:08.292 "data_size": 63488 00:32:08.292 } 00:32:08.292 ] 00:32:08.292 }' 00:32:08.292 07:42:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:08.292 07:42:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:08.292 07:42:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:08.292 07:42:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:08.292 07:42:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:32:08.550 07:42:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:08.808 [2024-07-12 07:42:42.505494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:08.808 [2024-07-12 07:42:42.505587] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:08.808 [2024-07-12 07:42:42.505655] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:32:08.808 [2024-07-12 07:42:42.505680] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:08.808 [2024-07-12 07:42:42.506189] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:08.808 [2024-07-12 07:42:42.506225] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:08.808 [2024-07-12 07:42:42.506321] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:08.808 [2024-07-12 07:42:42.506334] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:08.808 [2024-07-12 07:42:42.506343] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:08.808 BaseBdev1 00:32:08.808 07:42:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:32:09.743 07:42:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:09.743 07:42:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:09.743 07:42:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:09.743 07:42:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:09.743 07:42:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:09.743 07:42:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:09.743 07:42:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:09.743 07:42:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:09.743 07:42:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:09.743 07:42:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:09.743 07:42:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:09.743 07:42:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:10.001 07:42:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:10.001 "name": "raid_bdev1", 00:32:10.001 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:32:10.001 "strip_size_kb": 64, 00:32:10.001 "state": "online", 00:32:10.001 "raid_level": "raid5f", 00:32:10.001 "superblock": true, 00:32:10.001 "num_base_bdevs": 3, 00:32:10.001 "num_base_bdevs_discovered": 2, 00:32:10.001 
"num_base_bdevs_operational": 2, 00:32:10.001 "base_bdevs_list": [ 00:32:10.001 { 00:32:10.001 "name": null, 00:32:10.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:10.001 "is_configured": false, 00:32:10.001 "data_offset": 2048, 00:32:10.001 "data_size": 63488 00:32:10.001 }, 00:32:10.001 { 00:32:10.001 "name": "BaseBdev2", 00:32:10.001 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:32:10.001 "is_configured": true, 00:32:10.001 "data_offset": 2048, 00:32:10.001 "data_size": 63488 00:32:10.001 }, 00:32:10.001 { 00:32:10.001 "name": "BaseBdev3", 00:32:10.001 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:32:10.001 "is_configured": true, 00:32:10.001 "data_offset": 2048, 00:32:10.001 "data_size": 63488 00:32:10.001 } 00:32:10.001 ] 00:32:10.001 }' 00:32:10.001 07:42:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:10.001 07:42:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.568 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:10.568 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:10.568 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:10.568 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:10.568 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:10.568 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:10.568 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:10.826 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:10.826 "name": "raid_bdev1", 00:32:10.826 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:32:10.826 "strip_size_kb": 64, 00:32:10.826 "state": "online", 00:32:10.826 "raid_level": "raid5f", 00:32:10.826 "superblock": true, 00:32:10.826 "num_base_bdevs": 3, 00:32:10.826 "num_base_bdevs_discovered": 2, 00:32:10.826 "num_base_bdevs_operational": 2, 00:32:10.826 "base_bdevs_list": [ 00:32:10.826 { 00:32:10.826 "name": null, 00:32:10.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:10.826 "is_configured": false, 00:32:10.826 "data_offset": 2048, 00:32:10.826 "data_size": 63488 00:32:10.826 }, 00:32:10.826 { 00:32:10.826 "name": "BaseBdev2", 00:32:10.826 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:32:10.826 "is_configured": true, 00:32:10.826 "data_offset": 2048, 00:32:10.826 "data_size": 63488 00:32:10.826 }, 00:32:10.826 { 00:32:10.826 "name": "BaseBdev3", 00:32:10.826 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:32:10.826 "is_configured": true, 00:32:10.826 "data_offset": 2048, 00:32:10.826 "data_size": 63488 00:32:10.826 } 00:32:10.826 ] 00:32:10.826 }' 00:32:10.826 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:10.826 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:10.826 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:11.085 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:11.085 07:42:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:11.085 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:32:11.085 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:11.085 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:11.085 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:11.085 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:11.085 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:11.085 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:11.085 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:11.085 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:11.085 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:11.085 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:11.085 [2024-07-12 07:42:44.958089] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:11.085 [2024-07-12 07:42:44.958311] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:11.085 [2024-07-12 07:42:44.958330] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:11.085 request: 00:32:11.085 { 00:32:11.085 "raid_bdev": "raid_bdev1", 00:32:11.085 "base_bdev": "BaseBdev1", 00:32:11.085 "method": "bdev_raid_add_base_bdev", 00:32:11.085 "req_id": 1 00:32:11.085 } 00:32:11.085 Got JSON-RPC error response 00:32:11.085 response: 00:32:11.085 { 00:32:11.085 "code": -22, 00:32:11.085 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:11.085 } 00:32:11.344 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:32:11.344 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:11.344 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:11.344 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:11.344 07:42:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:32:12.281 07:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:12.281 07:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:12.281 07:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
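The NOT-wrapped bdev_raid_add_base_bdev call above is a deliberate failure-path check: BaseBdev1 carries superblock sequence number 1, older than the array's 5, so the RPC must come back with JSON-RPC error -22 and the NOT helper turns that failure into a pass (es=1). A minimal stand-alone sketch of the same assertion, reusing the rpc.py path and socket from this run (the output capture and the grep on the error code are illustrative extras, not part of the harness):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Expect failure: a base bdev with a stale superblock sequence number may not rejoin.
    if out=$("$rpc" -s "$sock" bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 2>&1); then
        echo "FAIL: stale BaseBdev1 was unexpectedly re-added" >&2
        exit 1
    fi
    # The harness only asserts a non-zero exit status; matching the code is an extra check.
    grep -q '"code": -22' <<<"$out" && echo "got the expected Invalid argument (-22) response"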
00:32:12.281 07:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:12.281 07:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:12.281 07:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:12.281 07:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:12.281 07:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:12.281 07:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:12.281 07:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:12.281 07:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:12.281 07:42:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:12.540 07:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:12.540 "name": "raid_bdev1", 00:32:12.540 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:32:12.540 "strip_size_kb": 64, 00:32:12.540 "state": "online", 00:32:12.540 "raid_level": "raid5f", 00:32:12.540 "superblock": true, 00:32:12.540 "num_base_bdevs": 3, 00:32:12.540 "num_base_bdevs_discovered": 2, 00:32:12.540 "num_base_bdevs_operational": 2, 00:32:12.540 "base_bdevs_list": [ 00:32:12.540 { 00:32:12.540 "name": null, 00:32:12.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:12.540 "is_configured": false, 00:32:12.540 "data_offset": 2048, 00:32:12.540 "data_size": 63488 00:32:12.540 }, 00:32:12.540 { 00:32:12.540 "name": "BaseBdev2", 00:32:12.540 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:32:12.540 "is_configured": true, 00:32:12.540 "data_offset": 2048, 00:32:12.540 "data_size": 63488 00:32:12.540 }, 00:32:12.540 { 00:32:12.540 "name": "BaseBdev3", 00:32:12.540 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:32:12.540 "is_configured": true, 00:32:12.540 "data_offset": 2048, 00:32:12.540 "data_size": 63488 00:32:12.540 } 00:32:12.540 ] 00:32:12.540 }' 00:32:12.540 07:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:12.540 07:42:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:13.108 07:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:13.108 07:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:13.108 07:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:13.108 07:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:13.108 07:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:13.108 07:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:13.108 07:42:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:13.367 07:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:13.367 "name": "raid_bdev1", 00:32:13.367 "uuid": "e7628c61-6bf0-4346-9a22-5634f1fb8f2b", 00:32:13.367 
"strip_size_kb": 64, 00:32:13.367 "state": "online", 00:32:13.367 "raid_level": "raid5f", 00:32:13.367 "superblock": true, 00:32:13.367 "num_base_bdevs": 3, 00:32:13.367 "num_base_bdevs_discovered": 2, 00:32:13.367 "num_base_bdevs_operational": 2, 00:32:13.367 "base_bdevs_list": [ 00:32:13.367 { 00:32:13.367 "name": null, 00:32:13.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:13.367 "is_configured": false, 00:32:13.367 "data_offset": 2048, 00:32:13.367 "data_size": 63488 00:32:13.367 }, 00:32:13.367 { 00:32:13.367 "name": "BaseBdev2", 00:32:13.367 "uuid": "6ce5742f-2911-51e0-b982-9857d3a75349", 00:32:13.367 "is_configured": true, 00:32:13.367 "data_offset": 2048, 00:32:13.367 "data_size": 63488 00:32:13.367 }, 00:32:13.367 { 00:32:13.367 "name": "BaseBdev3", 00:32:13.367 "uuid": "c476552d-fa3a-5736-9359-fb4c78c42806", 00:32:13.367 "is_configured": true, 00:32:13.367 "data_offset": 2048, 00:32:13.367 "data_size": 63488 00:32:13.367 } 00:32:13.367 ] 00:32:13.367 }' 00:32:13.367 07:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:13.367 07:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:13.367 07:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:13.367 07:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:13.367 07:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 162570 00:32:13.367 07:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@946 -- # '[' -z 162570 ']' 00:32:13.367 07:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # kill -0 162570 00:32:13.367 07:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@951 -- # uname 00:32:13.367 07:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:13.367 07:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 162570 00:32:13.367 07:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:13.367 07:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:13.367 07:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 162570' 00:32:13.368 killing process with pid 162570 00:32:13.368 07:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@965 -- # kill 162570 00:32:13.368 Received shutdown signal, test time was about 60.000000 seconds 00:32:13.368 00:32:13.368 Latency(us) 00:32:13.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:13.368 =================================================================================================================== 00:32:13.368 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:13.368 [2024-07-12 07:42:47.207388] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:13.368 [2024-07-12 07:42:47.207505] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:13.368 [2024-07-12 07:42:47.207570] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:13.368 [2024-07-12 07:42:47.207582] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 
00:32:13.368 07:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # wait 162570 00:32:13.368 [2024-07-12 07:42:47.246593] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:13.627 07:42:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:32:13.627 00:32:13.627 real 0m32.119s 00:32:13.627 user 0m49.118s 00:32:13.627 sys 0m4.955s 00:32:13.627 07:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:13.627 07:42:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:13.627 ************************************ 00:32:13.627 END TEST raid5f_rebuild_test_sb 00:32:13.627 ************************************ 00:32:13.887 07:42:47 bdev_raid -- bdev/bdev_raid.sh@885 -- # for n in {3..4} 00:32:13.887 07:42:47 bdev_raid -- bdev/bdev_raid.sh@886 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:32:13.887 07:42:47 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:32:13.887 07:42:47 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:13.887 07:42:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:13.887 ************************************ 00:32:13.887 START TEST raid5f_state_function_test 00:32:13.887 ************************************ 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid5f 4 false 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 
'BaseBdev3' 'BaseBdev4') 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=163480 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 163480' 00:32:13.887 Process raid pid: 163480 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 163480 /var/tmp/spdk-raid.sock 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 163480 ']' 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:13.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:13.887 07:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:13.887 [2024-07-12 07:42:47.648966] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
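At this point the state-function test has launched a fresh bdev_svc app (raid_pid 163480) and is waiting on its RPC socket before issuing the bdev_raid_create at @250 that follows; with none of the four base bdevs registered yet, Existed_Raid comes up in the configuring state. A condensed sketch of that startup sequence — the rpc_get_methods polling loop is a crude stand-in for the real waitforlisten helper:

    app=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$app" -r "$sock" -i 0 -L bdev_raid &    # -L bdev_raid enables the *DEBUG* lines in this log
    raid_pid=$!
    until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    # All four base bdevs are still absent, so the array stays "configuring":
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid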
00:32:13.887 [2024-07-12 07:42:47.649155] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:14.146 [2024-07-12 07:42:47.789907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.146 [2024-07-12 07:42:47.840095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.146 [2024-07-12 07:42:47.885750] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:14.146 07:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:14.146 07:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:32:14.146 07:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:14.406 [2024-07-12 07:42:48.170395] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:14.406 [2024-07-12 07:42:48.170471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:14.406 [2024-07-12 07:42:48.170481] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:14.406 [2024-07-12 07:42:48.170497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:14.406 [2024-07-12 07:42:48.170504] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:14.406 [2024-07-12 07:42:48.170539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:14.406 [2024-07-12 07:42:48.170546] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:14.406 [2024-07-12 07:42:48.170566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:14.406 07:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:14.406 07:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:14.406 07:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:14.406 07:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:14.406 07:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:14.406 07:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:14.406 07:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:14.406 07:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:14.406 07:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:14.406 07:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:14.406 07:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:14.406 07:42:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:14.665 07:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:14.665 "name": "Existed_Raid", 00:32:14.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:14.665 "strip_size_kb": 64, 00:32:14.665 "state": "configuring", 00:32:14.665 "raid_level": "raid5f", 00:32:14.665 "superblock": false, 00:32:14.665 "num_base_bdevs": 4, 00:32:14.665 "num_base_bdevs_discovered": 0, 00:32:14.665 "num_base_bdevs_operational": 4, 00:32:14.665 "base_bdevs_list": [ 00:32:14.665 { 00:32:14.665 "name": "BaseBdev1", 00:32:14.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:14.665 "is_configured": false, 00:32:14.665 "data_offset": 0, 00:32:14.665 "data_size": 0 00:32:14.665 }, 00:32:14.665 { 00:32:14.665 "name": "BaseBdev2", 00:32:14.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:14.665 "is_configured": false, 00:32:14.665 "data_offset": 0, 00:32:14.665 "data_size": 0 00:32:14.665 }, 00:32:14.665 { 00:32:14.665 "name": "BaseBdev3", 00:32:14.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:14.665 "is_configured": false, 00:32:14.665 "data_offset": 0, 00:32:14.665 "data_size": 0 00:32:14.665 }, 00:32:14.665 { 00:32:14.665 "name": "BaseBdev4", 00:32:14.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:14.665 "is_configured": false, 00:32:14.665 "data_offset": 0, 00:32:14.665 "data_size": 0 00:32:14.665 } 00:32:14.665 ] 00:32:14.665 }' 00:32:14.665 07:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:14.665 07:42:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:15.233 07:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:15.491 [2024-07-12 07:42:49.270444] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:15.491 [2024-07-12 07:42:49.270471] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:32:15.491 07:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:15.750 [2024-07-12 07:42:49.534489] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:15.750 [2024-07-12 07:42:49.534531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:15.750 [2024-07-12 07:42:49.534539] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:15.750 [2024-07-12 07:42:49.534561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:15.750 [2024-07-12 07:42:49.534568] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:15.750 [2024-07-12 07:42:49.534590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:15.750 [2024-07-12 07:42:49.534596] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:15.750 [2024-07-12 07:42:49.534621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:15.750 07:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:16.009 [2024-07-12 07:42:49.727552] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:16.009 BaseBdev1 00:32:16.009 07:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:32:16.009 07:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:32:16.009 07:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:16.009 07:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:32:16.009 07:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:16.009 07:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:16.009 07:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:16.268 07:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:16.268 [ 00:32:16.268 { 00:32:16.268 "name": "BaseBdev1", 00:32:16.268 "aliases": [ 00:32:16.268 "fe93d603-0824-4111-a6be-fcdbe4ee4680" 00:32:16.268 ], 00:32:16.268 "product_name": "Malloc disk", 00:32:16.268 "block_size": 512, 00:32:16.268 "num_blocks": 65536, 00:32:16.268 "uuid": "fe93d603-0824-4111-a6be-fcdbe4ee4680", 00:32:16.268 "assigned_rate_limits": { 00:32:16.268 "rw_ios_per_sec": 0, 00:32:16.268 "rw_mbytes_per_sec": 0, 00:32:16.268 "r_mbytes_per_sec": 0, 00:32:16.268 "w_mbytes_per_sec": 0 00:32:16.268 }, 00:32:16.268 "claimed": true, 00:32:16.268 "claim_type": "exclusive_write", 00:32:16.268 "zoned": false, 00:32:16.268 "supported_io_types": { 00:32:16.268 "read": true, 00:32:16.268 "write": true, 00:32:16.268 "unmap": true, 00:32:16.268 "write_zeroes": true, 00:32:16.268 "flush": true, 00:32:16.268 "reset": true, 00:32:16.268 "compare": false, 00:32:16.268 "compare_and_write": false, 00:32:16.268 "abort": true, 00:32:16.268 "nvme_admin": false, 00:32:16.268 "nvme_io": false 00:32:16.268 }, 00:32:16.268 "memory_domains": [ 00:32:16.268 { 00:32:16.268 "dma_device_id": "system", 00:32:16.268 "dma_device_type": 1 00:32:16.268 }, 00:32:16.268 { 00:32:16.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:16.268 "dma_device_type": 2 00:32:16.268 } 00:32:16.268 ], 00:32:16.268 "driver_specific": {} 00:32:16.268 } 00:32:16.268 ] 00:32:16.268 07:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:32:16.268 07:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:16.268 07:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:16.268 07:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:16.268 07:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:16.268 07:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:16.268 07:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
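verify_raid_bdev_state, entered above after BaseBdev1 was created and claimed, reduces to one bdev_raid_get_bdevs RPC, a jq select on the bdev name, and per-field comparisons against the expected values (configuring raid5f 64 4). A condensed equivalent of that check; the four [[ ]] tests stand in for the function's individual asserts:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r '.state'         <<<"$info") == configuring ]]
    [[ $(jq -r '.raid_level'    <<<"$info") == raid5f ]]
    [[ $(jq -r '.strip_size_kb' <<<"$info") == 64 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<<"$info") == 4 ]]

With only BaseBdev1 present, num_base_bdevs_discovered reads 1 while the array stays in configuring until all four slots are filled, which is exactly what the JSON dumps in this run show.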
00:32:16.268 07:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:16.268 07:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:16.268 07:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:16.268 07:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:16.268 07:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:16.268 07:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:16.527 07:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:16.527 "name": "Existed_Raid", 00:32:16.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:16.527 "strip_size_kb": 64, 00:32:16.527 "state": "configuring", 00:32:16.527 "raid_level": "raid5f", 00:32:16.527 "superblock": false, 00:32:16.527 "num_base_bdevs": 4, 00:32:16.527 "num_base_bdevs_discovered": 1, 00:32:16.527 "num_base_bdevs_operational": 4, 00:32:16.527 "base_bdevs_list": [ 00:32:16.527 { 00:32:16.527 "name": "BaseBdev1", 00:32:16.527 "uuid": "fe93d603-0824-4111-a6be-fcdbe4ee4680", 00:32:16.527 "is_configured": true, 00:32:16.527 "data_offset": 0, 00:32:16.527 "data_size": 65536 00:32:16.527 }, 00:32:16.527 { 00:32:16.527 "name": "BaseBdev2", 00:32:16.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:16.527 "is_configured": false, 00:32:16.527 "data_offset": 0, 00:32:16.527 "data_size": 0 00:32:16.527 }, 00:32:16.527 { 00:32:16.527 "name": "BaseBdev3", 00:32:16.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:16.527 "is_configured": false, 00:32:16.527 "data_offset": 0, 00:32:16.527 "data_size": 0 00:32:16.527 }, 00:32:16.527 { 00:32:16.527 "name": "BaseBdev4", 00:32:16.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:16.527 "is_configured": false, 00:32:16.527 "data_offset": 0, 00:32:16.527 "data_size": 0 00:32:16.527 } 00:32:16.527 ] 00:32:16.527 }' 00:32:16.527 07:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:16.527 07:42:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.095 07:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:17.355 [2024-07-12 07:42:51.059810] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:17.355 [2024-07-12 07:42:51.059863] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:32:17.355 07:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:17.614 [2024-07-12 07:42:51.327903] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:17.614 [2024-07-12 07:42:51.329727] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:17.614 [2024-07-12 07:42:51.329790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:17.614 [2024-07-12 07:42:51.329799] 
bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:17.614 [2024-07-12 07:42:51.329820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:17.614 [2024-07-12 07:42:51.329827] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:17.614 [2024-07-12 07:42:51.329845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:17.614 07:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:32:17.614 07:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:17.614 07:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:17.614 07:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:17.614 07:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:17.614 07:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:17.614 07:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:17.614 07:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:17.614 07:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:17.614 07:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:17.614 07:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:17.614 07:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:17.614 07:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:17.614 07:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:17.874 07:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:17.874 "name": "Existed_Raid", 00:32:17.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:17.874 "strip_size_kb": 64, 00:32:17.874 "state": "configuring", 00:32:17.874 "raid_level": "raid5f", 00:32:17.874 "superblock": false, 00:32:17.874 "num_base_bdevs": 4, 00:32:17.874 "num_base_bdevs_discovered": 1, 00:32:17.874 "num_base_bdevs_operational": 4, 00:32:17.874 "base_bdevs_list": [ 00:32:17.874 { 00:32:17.874 "name": "BaseBdev1", 00:32:17.874 "uuid": "fe93d603-0824-4111-a6be-fcdbe4ee4680", 00:32:17.874 "is_configured": true, 00:32:17.874 "data_offset": 0, 00:32:17.874 "data_size": 65536 00:32:17.874 }, 00:32:17.874 { 00:32:17.874 "name": "BaseBdev2", 00:32:17.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:17.874 "is_configured": false, 00:32:17.874 "data_offset": 0, 00:32:17.874 "data_size": 0 00:32:17.874 }, 00:32:17.874 { 00:32:17.874 "name": "BaseBdev3", 00:32:17.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:17.874 "is_configured": false, 00:32:17.874 "data_offset": 0, 00:32:17.874 "data_size": 0 00:32:17.874 }, 00:32:17.874 { 00:32:17.874 "name": "BaseBdev4", 00:32:17.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:17.874 "is_configured": false, 00:32:17.874 "data_offset": 0, 
00:32:17.874 "data_size": 0 00:32:17.874 } 00:32:17.874 ] 00:32:17.874 }' 00:32:17.874 07:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:17.874 07:42:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:18.442 07:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:32:18.442 [2024-07-12 07:42:52.240739] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:18.442 BaseBdev2 00:32:18.442 07:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:32:18.442 07:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:32:18.442 07:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:18.442 07:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:32:18.442 07:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:18.442 07:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:18.442 07:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:18.701 07:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:18.963 [ 00:32:18.963 { 00:32:18.963 "name": "BaseBdev2", 00:32:18.963 "aliases": [ 00:32:18.963 "8da281a8-8fe8-442a-a8f0-a7d9bbbd361d" 00:32:18.963 ], 00:32:18.963 "product_name": "Malloc disk", 00:32:18.963 "block_size": 512, 00:32:18.963 "num_blocks": 65536, 00:32:18.963 "uuid": "8da281a8-8fe8-442a-a8f0-a7d9bbbd361d", 00:32:18.963 "assigned_rate_limits": { 00:32:18.963 "rw_ios_per_sec": 0, 00:32:18.963 "rw_mbytes_per_sec": 0, 00:32:18.963 "r_mbytes_per_sec": 0, 00:32:18.963 "w_mbytes_per_sec": 0 00:32:18.963 }, 00:32:18.963 "claimed": true, 00:32:18.963 "claim_type": "exclusive_write", 00:32:18.963 "zoned": false, 00:32:18.963 "supported_io_types": { 00:32:18.963 "read": true, 00:32:18.963 "write": true, 00:32:18.963 "unmap": true, 00:32:18.963 "write_zeroes": true, 00:32:18.963 "flush": true, 00:32:18.963 "reset": true, 00:32:18.963 "compare": false, 00:32:18.963 "compare_and_write": false, 00:32:18.963 "abort": true, 00:32:18.963 "nvme_admin": false, 00:32:18.963 "nvme_io": false 00:32:18.963 }, 00:32:18.963 "memory_domains": [ 00:32:18.963 { 00:32:18.963 "dma_device_id": "system", 00:32:18.963 "dma_device_type": 1 00:32:18.963 }, 00:32:18.963 { 00:32:18.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:18.963 "dma_device_type": 2 00:32:18.963 } 00:32:18.963 ], 00:32:18.963 "driver_specific": {} 00:32:18.963 } 00:32:18.963 ] 00:32:18.963 07:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:32:18.963 07:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:18.963 07:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:18.963 07:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:18.963 07:42:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:18.963 07:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:18.963 07:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:18.963 07:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:18.963 07:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:18.963 07:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:18.963 07:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:18.963 07:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:18.963 07:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:18.963 07:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:18.963 07:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:19.253 07:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:19.253 "name": "Existed_Raid", 00:32:19.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:19.253 "strip_size_kb": 64, 00:32:19.253 "state": "configuring", 00:32:19.253 "raid_level": "raid5f", 00:32:19.253 "superblock": false, 00:32:19.253 "num_base_bdevs": 4, 00:32:19.253 "num_base_bdevs_discovered": 2, 00:32:19.253 "num_base_bdevs_operational": 4, 00:32:19.253 "base_bdevs_list": [ 00:32:19.253 { 00:32:19.253 "name": "BaseBdev1", 00:32:19.253 "uuid": "fe93d603-0824-4111-a6be-fcdbe4ee4680", 00:32:19.253 "is_configured": true, 00:32:19.253 "data_offset": 0, 00:32:19.253 "data_size": 65536 00:32:19.253 }, 00:32:19.253 { 00:32:19.253 "name": "BaseBdev2", 00:32:19.253 "uuid": "8da281a8-8fe8-442a-a8f0-a7d9bbbd361d", 00:32:19.253 "is_configured": true, 00:32:19.253 "data_offset": 0, 00:32:19.253 "data_size": 65536 00:32:19.253 }, 00:32:19.253 { 00:32:19.253 "name": "BaseBdev3", 00:32:19.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:19.253 "is_configured": false, 00:32:19.253 "data_offset": 0, 00:32:19.253 "data_size": 0 00:32:19.253 }, 00:32:19.253 { 00:32:19.253 "name": "BaseBdev4", 00:32:19.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:19.253 "is_configured": false, 00:32:19.253 "data_offset": 0, 00:32:19.253 "data_size": 0 00:32:19.253 } 00:32:19.253 ] 00:32:19.253 }' 00:32:19.253 07:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:19.253 07:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:19.867 07:42:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:19.867 [2024-07-12 07:42:53.612009] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:19.867 BaseBdev3 00:32:19.867 07:42:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:32:19.867 07:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 
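waitforbdev, entered above for the freshly created BaseBdev3, is a two-step wait: flush the examine cycle so any outstanding claims settle, then poll bdev_get_bdevs with the default 2000 ms bdev_timeout until the bdev appears. The same two RPCs, lifted straight from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_wait_for_examine            # let examine-driven claims settle first
    "$rpc" -s "$sock" bdev_get_bdevs -b BaseBdev3 -t 2000   # -t: wait up to 2000 ms

The bracketed descriptor that follows in the trace is that second call's successful return.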
00:32:19.867 07:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:19.867 07:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:32:19.867 07:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:19.867 07:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:19.867 07:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:20.125 07:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:20.383 [ 00:32:20.383 { 00:32:20.383 "name": "BaseBdev3", 00:32:20.383 "aliases": [ 00:32:20.383 "d680fccd-42c7-4097-bc3e-b59a6d233643" 00:32:20.383 ], 00:32:20.383 "product_name": "Malloc disk", 00:32:20.383 "block_size": 512, 00:32:20.383 "num_blocks": 65536, 00:32:20.383 "uuid": "d680fccd-42c7-4097-bc3e-b59a6d233643", 00:32:20.383 "assigned_rate_limits": { 00:32:20.383 "rw_ios_per_sec": 0, 00:32:20.383 "rw_mbytes_per_sec": 0, 00:32:20.383 "r_mbytes_per_sec": 0, 00:32:20.383 "w_mbytes_per_sec": 0 00:32:20.383 }, 00:32:20.383 "claimed": true, 00:32:20.383 "claim_type": "exclusive_write", 00:32:20.383 "zoned": false, 00:32:20.383 "supported_io_types": { 00:32:20.383 "read": true, 00:32:20.383 "write": true, 00:32:20.383 "unmap": true, 00:32:20.383 "write_zeroes": true, 00:32:20.383 "flush": true, 00:32:20.383 "reset": true, 00:32:20.383 "compare": false, 00:32:20.383 "compare_and_write": false, 00:32:20.383 "abort": true, 00:32:20.383 "nvme_admin": false, 00:32:20.383 "nvme_io": false 00:32:20.383 }, 00:32:20.383 "memory_domains": [ 00:32:20.383 { 00:32:20.383 "dma_device_id": "system", 00:32:20.383 "dma_device_type": 1 00:32:20.383 }, 00:32:20.383 { 00:32:20.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:20.383 "dma_device_type": 2 00:32:20.383 } 00:32:20.383 ], 00:32:20.383 "driver_specific": {} 00:32:20.383 } 00:32:20.383 ] 00:32:20.383 07:42:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:32:20.383 07:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:20.383 07:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:20.383 07:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:20.383 07:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:20.383 07:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:20.383 07:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:20.383 07:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:20.383 07:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:20.383 07:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:20.384 07:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:20.384 07:42:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:20.384 07:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:20.384 07:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:20.384 07:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:20.384 07:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:20.384 "name": "Existed_Raid", 00:32:20.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:20.384 "strip_size_kb": 64, 00:32:20.384 "state": "configuring", 00:32:20.384 "raid_level": "raid5f", 00:32:20.384 "superblock": false, 00:32:20.384 "num_base_bdevs": 4, 00:32:20.384 "num_base_bdevs_discovered": 3, 00:32:20.384 "num_base_bdevs_operational": 4, 00:32:20.384 "base_bdevs_list": [ 00:32:20.384 { 00:32:20.384 "name": "BaseBdev1", 00:32:20.384 "uuid": "fe93d603-0824-4111-a6be-fcdbe4ee4680", 00:32:20.384 "is_configured": true, 00:32:20.384 "data_offset": 0, 00:32:20.384 "data_size": 65536 00:32:20.384 }, 00:32:20.384 { 00:32:20.384 "name": "BaseBdev2", 00:32:20.384 "uuid": "8da281a8-8fe8-442a-a8f0-a7d9bbbd361d", 00:32:20.384 "is_configured": true, 00:32:20.384 "data_offset": 0, 00:32:20.384 "data_size": 65536 00:32:20.384 }, 00:32:20.384 { 00:32:20.384 "name": "BaseBdev3", 00:32:20.384 "uuid": "d680fccd-42c7-4097-bc3e-b59a6d233643", 00:32:20.384 "is_configured": true, 00:32:20.384 "data_offset": 0, 00:32:20.384 "data_size": 65536 00:32:20.384 }, 00:32:20.384 { 00:32:20.384 "name": "BaseBdev4", 00:32:20.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:20.384 "is_configured": false, 00:32:20.384 "data_offset": 0, 00:32:20.384 "data_size": 0 00:32:20.384 } 00:32:20.384 ] 00:32:20.384 }' 00:32:20.384 07:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:20.384 07:42:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.950 07:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:32:21.208 [2024-07-12 07:42:54.895268] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:21.208 [2024-07-12 07:42:54.895330] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:32:21.208 [2024-07-12 07:42:54.895338] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:32:21.208 [2024-07-12 07:42:54.895456] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:32:21.208 [2024-07-12 07:42:54.896157] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:32:21.208 [2024-07-12 07:42:54.896177] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:32:21.208 [2024-07-12 07:42:54.896376] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:21.208 BaseBdev4 00:32:21.208 07:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:32:21.208 07:42:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:32:21.208 07:42:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:21.208 07:42:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:32:21.208 07:42:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:21.208 07:42:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:21.208 07:42:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:21.208 07:42:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:21.466 [ 00:32:21.466 { 00:32:21.466 "name": "BaseBdev4", 00:32:21.466 "aliases": [ 00:32:21.466 "9186e0e3-6253-4b1f-bef5-c453b2d00890" 00:32:21.466 ], 00:32:21.466 "product_name": "Malloc disk", 00:32:21.466 "block_size": 512, 00:32:21.466 "num_blocks": 65536, 00:32:21.466 "uuid": "9186e0e3-6253-4b1f-bef5-c453b2d00890", 00:32:21.466 "assigned_rate_limits": { 00:32:21.466 "rw_ios_per_sec": 0, 00:32:21.466 "rw_mbytes_per_sec": 0, 00:32:21.466 "r_mbytes_per_sec": 0, 00:32:21.466 "w_mbytes_per_sec": 0 00:32:21.466 }, 00:32:21.466 "claimed": true, 00:32:21.466 "claim_type": "exclusive_write", 00:32:21.466 "zoned": false, 00:32:21.466 "supported_io_types": { 00:32:21.466 "read": true, 00:32:21.466 "write": true, 00:32:21.466 "unmap": true, 00:32:21.466 "write_zeroes": true, 00:32:21.466 "flush": true, 00:32:21.466 "reset": true, 00:32:21.466 "compare": false, 00:32:21.466 "compare_and_write": false, 00:32:21.466 "abort": true, 00:32:21.466 "nvme_admin": false, 00:32:21.466 "nvme_io": false 00:32:21.466 }, 00:32:21.466 "memory_domains": [ 00:32:21.466 { 00:32:21.466 "dma_device_id": "system", 00:32:21.466 "dma_device_type": 1 00:32:21.466 }, 00:32:21.466 { 00:32:21.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:21.466 "dma_device_type": 2 00:32:21.466 } 00:32:21.466 ], 00:32:21.466 "driver_specific": {} 00:32:21.466 } 00:32:21.466 ] 00:32:21.466 07:42:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:32:21.466 07:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:21.466 07:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:21.466 07:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:32:21.466 07:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:21.466 07:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:21.466 07:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:21.466 07:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:21.466 07:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:21.466 07:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:21.466 07:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:21.466 07:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:21.466 07:42:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:21.466 07:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:21.466 07:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:21.724 07:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:21.724 "name": "Existed_Raid", 00:32:21.724 "uuid": "b63f2e93-8376-4f7d-9704-4bcaf01bd22c", 00:32:21.724 "strip_size_kb": 64, 00:32:21.724 "state": "online", 00:32:21.724 "raid_level": "raid5f", 00:32:21.724 "superblock": false, 00:32:21.724 "num_base_bdevs": 4, 00:32:21.724 "num_base_bdevs_discovered": 4, 00:32:21.724 "num_base_bdevs_operational": 4, 00:32:21.724 "base_bdevs_list": [ 00:32:21.724 { 00:32:21.724 "name": "BaseBdev1", 00:32:21.724 "uuid": "fe93d603-0824-4111-a6be-fcdbe4ee4680", 00:32:21.724 "is_configured": true, 00:32:21.724 "data_offset": 0, 00:32:21.724 "data_size": 65536 00:32:21.724 }, 00:32:21.724 { 00:32:21.724 "name": "BaseBdev2", 00:32:21.724 "uuid": "8da281a8-8fe8-442a-a8f0-a7d9bbbd361d", 00:32:21.724 "is_configured": true, 00:32:21.724 "data_offset": 0, 00:32:21.724 "data_size": 65536 00:32:21.724 }, 00:32:21.724 { 00:32:21.724 "name": "BaseBdev3", 00:32:21.724 "uuid": "d680fccd-42c7-4097-bc3e-b59a6d233643", 00:32:21.724 "is_configured": true, 00:32:21.724 "data_offset": 0, 00:32:21.724 "data_size": 65536 00:32:21.724 }, 00:32:21.724 { 00:32:21.724 "name": "BaseBdev4", 00:32:21.725 "uuid": "9186e0e3-6253-4b1f-bef5-c453b2d00890", 00:32:21.725 "is_configured": true, 00:32:21.725 "data_offset": 0, 00:32:21.725 "data_size": 65536 00:32:21.725 } 00:32:21.725 ] 00:32:21.725 }' 00:32:21.725 07:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:21.725 07:42:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.293 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:32:22.293 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:22.293 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:22.293 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:22.293 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:22.293 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:32:22.293 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:22.294 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:22.555 [2024-07-12 07:42:56.267681] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:22.555 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:22.555 "name": "Existed_Raid", 00:32:22.555 "aliases": [ 00:32:22.555 "b63f2e93-8376-4f7d-9704-4bcaf01bd22c" 00:32:22.555 ], 00:32:22.555 "product_name": "Raid Volume", 00:32:22.555 "block_size": 512, 00:32:22.555 "num_blocks": 196608, 00:32:22.555 "uuid": "b63f2e93-8376-4f7d-9704-4bcaf01bd22c", 00:32:22.555 
"assigned_rate_limits": { 00:32:22.555 "rw_ios_per_sec": 0, 00:32:22.555 "rw_mbytes_per_sec": 0, 00:32:22.555 "r_mbytes_per_sec": 0, 00:32:22.555 "w_mbytes_per_sec": 0 00:32:22.555 }, 00:32:22.555 "claimed": false, 00:32:22.555 "zoned": false, 00:32:22.555 "supported_io_types": { 00:32:22.555 "read": true, 00:32:22.555 "write": true, 00:32:22.555 "unmap": false, 00:32:22.555 "write_zeroes": true, 00:32:22.555 "flush": false, 00:32:22.555 "reset": true, 00:32:22.555 "compare": false, 00:32:22.555 "compare_and_write": false, 00:32:22.555 "abort": false, 00:32:22.555 "nvme_admin": false, 00:32:22.555 "nvme_io": false 00:32:22.555 }, 00:32:22.555 "driver_specific": { 00:32:22.555 "raid": { 00:32:22.555 "uuid": "b63f2e93-8376-4f7d-9704-4bcaf01bd22c", 00:32:22.555 "strip_size_kb": 64, 00:32:22.555 "state": "online", 00:32:22.555 "raid_level": "raid5f", 00:32:22.555 "superblock": false, 00:32:22.555 "num_base_bdevs": 4, 00:32:22.556 "num_base_bdevs_discovered": 4, 00:32:22.556 "num_base_bdevs_operational": 4, 00:32:22.556 "base_bdevs_list": [ 00:32:22.556 { 00:32:22.556 "name": "BaseBdev1", 00:32:22.556 "uuid": "fe93d603-0824-4111-a6be-fcdbe4ee4680", 00:32:22.556 "is_configured": true, 00:32:22.556 "data_offset": 0, 00:32:22.556 "data_size": 65536 00:32:22.556 }, 00:32:22.556 { 00:32:22.556 "name": "BaseBdev2", 00:32:22.556 "uuid": "8da281a8-8fe8-442a-a8f0-a7d9bbbd361d", 00:32:22.556 "is_configured": true, 00:32:22.556 "data_offset": 0, 00:32:22.556 "data_size": 65536 00:32:22.556 }, 00:32:22.556 { 00:32:22.556 "name": "BaseBdev3", 00:32:22.556 "uuid": "d680fccd-42c7-4097-bc3e-b59a6d233643", 00:32:22.556 "is_configured": true, 00:32:22.556 "data_offset": 0, 00:32:22.556 "data_size": 65536 00:32:22.556 }, 00:32:22.556 { 00:32:22.556 "name": "BaseBdev4", 00:32:22.556 "uuid": "9186e0e3-6253-4b1f-bef5-c453b2d00890", 00:32:22.556 "is_configured": true, 00:32:22.556 "data_offset": 0, 00:32:22.556 "data_size": 65536 00:32:22.556 } 00:32:22.556 ] 00:32:22.556 } 00:32:22.556 } 00:32:22.556 }' 00:32:22.556 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:22.556 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:32:22.556 BaseBdev2 00:32:22.556 BaseBdev3 00:32:22.556 BaseBdev4' 00:32:22.556 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:22.556 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:32:22.556 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:22.815 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:22.815 "name": "BaseBdev1", 00:32:22.815 "aliases": [ 00:32:22.815 "fe93d603-0824-4111-a6be-fcdbe4ee4680" 00:32:22.815 ], 00:32:22.815 "product_name": "Malloc disk", 00:32:22.815 "block_size": 512, 00:32:22.815 "num_blocks": 65536, 00:32:22.815 "uuid": "fe93d603-0824-4111-a6be-fcdbe4ee4680", 00:32:22.815 "assigned_rate_limits": { 00:32:22.815 "rw_ios_per_sec": 0, 00:32:22.815 "rw_mbytes_per_sec": 0, 00:32:22.815 "r_mbytes_per_sec": 0, 00:32:22.815 "w_mbytes_per_sec": 0 00:32:22.815 }, 00:32:22.815 "claimed": true, 00:32:22.815 "claim_type": "exclusive_write", 00:32:22.815 "zoned": false, 00:32:22.815 "supported_io_types": { 00:32:22.815 "read": true, 
00:32:22.815 "write": true, 00:32:22.815 "unmap": true, 00:32:22.815 "write_zeroes": true, 00:32:22.815 "flush": true, 00:32:22.815 "reset": true, 00:32:22.815 "compare": false, 00:32:22.815 "compare_and_write": false, 00:32:22.815 "abort": true, 00:32:22.815 "nvme_admin": false, 00:32:22.815 "nvme_io": false 00:32:22.815 }, 00:32:22.815 "memory_domains": [ 00:32:22.815 { 00:32:22.815 "dma_device_id": "system", 00:32:22.815 "dma_device_type": 1 00:32:22.815 }, 00:32:22.815 { 00:32:22.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:22.815 "dma_device_type": 2 00:32:22.815 } 00:32:22.815 ], 00:32:22.815 "driver_specific": {} 00:32:22.815 }' 00:32:22.815 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:22.815 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:22.815 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:22.815 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:22.815 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:22.815 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:23.075 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:23.075 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:23.075 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:23.075 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:23.075 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:23.075 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:23.075 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:23.075 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:23.075 07:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:23.335 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:23.335 "name": "BaseBdev2", 00:32:23.335 "aliases": [ 00:32:23.335 "8da281a8-8fe8-442a-a8f0-a7d9bbbd361d" 00:32:23.335 ], 00:32:23.335 "product_name": "Malloc disk", 00:32:23.335 "block_size": 512, 00:32:23.335 "num_blocks": 65536, 00:32:23.335 "uuid": "8da281a8-8fe8-442a-a8f0-a7d9bbbd361d", 00:32:23.335 "assigned_rate_limits": { 00:32:23.335 "rw_ios_per_sec": 0, 00:32:23.335 "rw_mbytes_per_sec": 0, 00:32:23.335 "r_mbytes_per_sec": 0, 00:32:23.335 "w_mbytes_per_sec": 0 00:32:23.335 }, 00:32:23.335 "claimed": true, 00:32:23.335 "claim_type": "exclusive_write", 00:32:23.335 "zoned": false, 00:32:23.335 "supported_io_types": { 00:32:23.335 "read": true, 00:32:23.335 "write": true, 00:32:23.335 "unmap": true, 00:32:23.335 "write_zeroes": true, 00:32:23.335 "flush": true, 00:32:23.335 "reset": true, 00:32:23.335 "compare": false, 00:32:23.335 "compare_and_write": false, 00:32:23.335 "abort": true, 00:32:23.335 "nvme_admin": false, 00:32:23.335 "nvme_io": false 00:32:23.335 }, 00:32:23.335 "memory_domains": [ 00:32:23.335 { 00:32:23.335 "dma_device_id": "system", 00:32:23.335 "dma_device_type": 1 00:32:23.335 }, 
00:32:23.335 { 00:32:23.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:23.335 "dma_device_type": 2 00:32:23.335 } 00:32:23.335 ], 00:32:23.335 "driver_specific": {} 00:32:23.335 }' 00:32:23.335 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:23.335 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:23.594 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:23.594 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:23.594 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:23.594 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:23.594 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:23.594 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:23.594 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:23.594 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:23.854 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:23.854 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:23.854 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:23.854 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:32:23.854 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:23.854 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:23.854 "name": "BaseBdev3", 00:32:23.854 "aliases": [ 00:32:23.854 "d680fccd-42c7-4097-bc3e-b59a6d233643" 00:32:23.854 ], 00:32:23.854 "product_name": "Malloc disk", 00:32:23.854 "block_size": 512, 00:32:23.854 "num_blocks": 65536, 00:32:23.854 "uuid": "d680fccd-42c7-4097-bc3e-b59a6d233643", 00:32:23.854 "assigned_rate_limits": { 00:32:23.854 "rw_ios_per_sec": 0, 00:32:23.854 "rw_mbytes_per_sec": 0, 00:32:23.854 "r_mbytes_per_sec": 0, 00:32:23.854 "w_mbytes_per_sec": 0 00:32:23.854 }, 00:32:23.854 "claimed": true, 00:32:23.854 "claim_type": "exclusive_write", 00:32:23.854 "zoned": false, 00:32:23.854 "supported_io_types": { 00:32:23.854 "read": true, 00:32:23.854 "write": true, 00:32:23.854 "unmap": true, 00:32:23.854 "write_zeroes": true, 00:32:23.854 "flush": true, 00:32:23.854 "reset": true, 00:32:23.854 "compare": false, 00:32:23.854 "compare_and_write": false, 00:32:23.854 "abort": true, 00:32:23.854 "nvme_admin": false, 00:32:23.854 "nvme_io": false 00:32:23.854 }, 00:32:23.854 "memory_domains": [ 00:32:23.854 { 00:32:23.854 "dma_device_id": "system", 00:32:23.854 "dma_device_type": 1 00:32:23.854 }, 00:32:23.854 { 00:32:23.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:23.854 "dma_device_type": 2 00:32:23.854 } 00:32:23.854 ], 00:32:23.854 "driver_specific": {} 00:32:23.854 }' 00:32:23.854 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:24.113 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:24.113 07:42:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:24.113 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:24.113 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:24.113 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:24.113 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:24.113 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:24.113 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:24.113 07:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:24.372 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:24.372 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:24.372 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:24.372 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:32:24.372 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:24.372 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:24.372 "name": "BaseBdev4", 00:32:24.372 "aliases": [ 00:32:24.372 "9186e0e3-6253-4b1f-bef5-c453b2d00890" 00:32:24.372 ], 00:32:24.372 "product_name": "Malloc disk", 00:32:24.372 "block_size": 512, 00:32:24.372 "num_blocks": 65536, 00:32:24.372 "uuid": "9186e0e3-6253-4b1f-bef5-c453b2d00890", 00:32:24.372 "assigned_rate_limits": { 00:32:24.372 "rw_ios_per_sec": 0, 00:32:24.372 "rw_mbytes_per_sec": 0, 00:32:24.372 "r_mbytes_per_sec": 0, 00:32:24.372 "w_mbytes_per_sec": 0 00:32:24.372 }, 00:32:24.372 "claimed": true, 00:32:24.372 "claim_type": "exclusive_write", 00:32:24.372 "zoned": false, 00:32:24.372 "supported_io_types": { 00:32:24.372 "read": true, 00:32:24.373 "write": true, 00:32:24.373 "unmap": true, 00:32:24.373 "write_zeroes": true, 00:32:24.373 "flush": true, 00:32:24.373 "reset": true, 00:32:24.373 "compare": false, 00:32:24.373 "compare_and_write": false, 00:32:24.373 "abort": true, 00:32:24.373 "nvme_admin": false, 00:32:24.373 "nvme_io": false 00:32:24.373 }, 00:32:24.373 "memory_domains": [ 00:32:24.373 { 00:32:24.373 "dma_device_id": "system", 00:32:24.373 "dma_device_type": 1 00:32:24.373 }, 00:32:24.373 { 00:32:24.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:24.373 "dma_device_type": 2 00:32:24.373 } 00:32:24.373 ], 00:32:24.373 "driver_specific": {} 00:32:24.373 }' 00:32:24.373 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:24.633 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:24.633 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:24.633 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:24.633 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:24.633 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:24.633 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:32:24.633 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:24.633 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:24.633 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:24.892 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:24.892 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:24.892 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:24.892 [2024-07-12 07:42:58.752048] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:25.151 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:32:25.151 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:32:25.151 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:25.151 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:32:25.151 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:32:25.151 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:32:25.151 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:25.151 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:25.151 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:25.151 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:25.152 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:25.152 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:25.152 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:25.152 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:25.152 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:25.152 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:25.152 07:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:25.411 07:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:25.411 "name": "Existed_Raid", 00:32:25.411 "uuid": "b63f2e93-8376-4f7d-9704-4bcaf01bd22c", 00:32:25.411 "strip_size_kb": 64, 00:32:25.411 "state": "online", 00:32:25.411 "raid_level": "raid5f", 00:32:25.411 "superblock": false, 00:32:25.411 "num_base_bdevs": 4, 00:32:25.411 "num_base_bdevs_discovered": 3, 00:32:25.411 "num_base_bdevs_operational": 3, 00:32:25.411 "base_bdevs_list": [ 00:32:25.411 { 00:32:25.411 "name": null, 00:32:25.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.411 "is_configured": false, 00:32:25.411 "data_offset": 0, 00:32:25.411 "data_size": 65536 
00:32:25.411 }, 00:32:25.411 { 00:32:25.411 "name": "BaseBdev2", 00:32:25.411 "uuid": "8da281a8-8fe8-442a-a8f0-a7d9bbbd361d", 00:32:25.411 "is_configured": true, 00:32:25.411 "data_offset": 0, 00:32:25.411 "data_size": 65536 00:32:25.411 }, 00:32:25.411 { 00:32:25.411 "name": "BaseBdev3", 00:32:25.411 "uuid": "d680fccd-42c7-4097-bc3e-b59a6d233643", 00:32:25.411 "is_configured": true, 00:32:25.411 "data_offset": 0, 00:32:25.411 "data_size": 65536 00:32:25.411 }, 00:32:25.411 { 00:32:25.411 "name": "BaseBdev4", 00:32:25.411 "uuid": "9186e0e3-6253-4b1f-bef5-c453b2d00890", 00:32:25.411 "is_configured": true, 00:32:25.411 "data_offset": 0, 00:32:25.411 "data_size": 65536 00:32:25.411 } 00:32:25.411 ] 00:32:25.411 }' 00:32:25.411 07:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:25.411 07:42:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.978 07:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:32:25.978 07:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:25.978 07:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:25.978 07:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:25.978 07:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:25.978 07:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:25.978 07:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:32:26.237 [2024-07-12 07:42:59.988147] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:26.237 [2024-07-12 07:42:59.988385] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:26.237 [2024-07-12 07:43:00.000140] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:26.237 07:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:26.237 07:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:26.237 07:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:26.237 07:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:26.495 07:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:26.495 07:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:26.495 07:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:32:26.754 [2024-07-12 07:43:00.524281] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:26.754 07:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:26.754 07:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:26.754 07:43:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:26.754 07:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:27.013 07:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:27.013 07:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:27.013 07:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:32:27.013 [2024-07-12 07:43:00.883715] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:32:27.013 [2024-07-12 07:43:00.883894] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:32:27.272 07:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:27.272 07:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:27.272 07:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:27.272 07:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:32:27.272 07:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:32:27.272 07:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:32:27.272 07:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:32:27.272 07:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:32:27.272 07:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:27.272 07:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:32:27.531 BaseBdev2 00:32:27.531 07:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:32:27.531 07:43:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:32:27.531 07:43:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:27.531 07:43:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:32:27.531 07:43:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:27.531 07:43:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:27.531 07:43:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:27.789 07:43:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:28.048 [ 00:32:28.048 { 00:32:28.048 "name": "BaseBdev2", 00:32:28.048 "aliases": [ 00:32:28.048 "26bd7d5b-3f51-4146-b817-efc06dff6400" 00:32:28.048 ], 00:32:28.048 "product_name": "Malloc disk", 00:32:28.048 "block_size": 512, 00:32:28.048 "num_blocks": 65536, 00:32:28.048 "uuid": 
"26bd7d5b-3f51-4146-b817-efc06dff6400", 00:32:28.048 "assigned_rate_limits": { 00:32:28.048 "rw_ios_per_sec": 0, 00:32:28.048 "rw_mbytes_per_sec": 0, 00:32:28.048 "r_mbytes_per_sec": 0, 00:32:28.048 "w_mbytes_per_sec": 0 00:32:28.048 }, 00:32:28.048 "claimed": false, 00:32:28.048 "zoned": false, 00:32:28.048 "supported_io_types": { 00:32:28.048 "read": true, 00:32:28.048 "write": true, 00:32:28.048 "unmap": true, 00:32:28.048 "write_zeroes": true, 00:32:28.048 "flush": true, 00:32:28.048 "reset": true, 00:32:28.048 "compare": false, 00:32:28.048 "compare_and_write": false, 00:32:28.048 "abort": true, 00:32:28.048 "nvme_admin": false, 00:32:28.048 "nvme_io": false 00:32:28.048 }, 00:32:28.048 "memory_domains": [ 00:32:28.048 { 00:32:28.048 "dma_device_id": "system", 00:32:28.048 "dma_device_type": 1 00:32:28.048 }, 00:32:28.048 { 00:32:28.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:28.048 "dma_device_type": 2 00:32:28.048 } 00:32:28.048 ], 00:32:28.048 "driver_specific": {} 00:32:28.048 } 00:32:28.048 ] 00:32:28.048 07:43:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:32:28.048 07:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:32:28.048 07:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:28.048 07:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:28.048 BaseBdev3 00:32:28.048 07:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:32:28.048 07:43:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:32:28.048 07:43:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:28.048 07:43:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:32:28.048 07:43:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:28.048 07:43:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:28.049 07:43:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:28.307 07:43:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:28.566 [ 00:32:28.566 { 00:32:28.566 "name": "BaseBdev3", 00:32:28.566 "aliases": [ 00:32:28.566 "a3acd7cb-6d8b-4f8d-b57c-52c5992667f7" 00:32:28.566 ], 00:32:28.566 "product_name": "Malloc disk", 00:32:28.566 "block_size": 512, 00:32:28.566 "num_blocks": 65536, 00:32:28.566 "uuid": "a3acd7cb-6d8b-4f8d-b57c-52c5992667f7", 00:32:28.566 "assigned_rate_limits": { 00:32:28.566 "rw_ios_per_sec": 0, 00:32:28.566 "rw_mbytes_per_sec": 0, 00:32:28.566 "r_mbytes_per_sec": 0, 00:32:28.566 "w_mbytes_per_sec": 0 00:32:28.566 }, 00:32:28.566 "claimed": false, 00:32:28.566 "zoned": false, 00:32:28.566 "supported_io_types": { 00:32:28.566 "read": true, 00:32:28.566 "write": true, 00:32:28.566 "unmap": true, 00:32:28.566 "write_zeroes": true, 00:32:28.566 "flush": true, 00:32:28.566 "reset": true, 00:32:28.566 "compare": false, 00:32:28.566 "compare_and_write": false, 00:32:28.566 "abort": true, 00:32:28.566 
"nvme_admin": false, 00:32:28.566 "nvme_io": false 00:32:28.566 }, 00:32:28.566 "memory_domains": [ 00:32:28.566 { 00:32:28.566 "dma_device_id": "system", 00:32:28.566 "dma_device_type": 1 00:32:28.566 }, 00:32:28.566 { 00:32:28.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:28.566 "dma_device_type": 2 00:32:28.566 } 00:32:28.566 ], 00:32:28.566 "driver_specific": {} 00:32:28.566 } 00:32:28.566 ] 00:32:28.566 07:43:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:32:28.566 07:43:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:32:28.566 07:43:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:28.566 07:43:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:32:28.824 BaseBdev4 00:32:28.824 07:43:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:32:28.824 07:43:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:32:28.824 07:43:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:28.824 07:43:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:32:28.824 07:43:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:28.824 07:43:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:28.824 07:43:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:29.083 07:43:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:29.083 [ 00:32:29.083 { 00:32:29.083 "name": "BaseBdev4", 00:32:29.083 "aliases": [ 00:32:29.083 "580ef58f-9416-46cf-bcdf-1092314d9fb2" 00:32:29.083 ], 00:32:29.083 "product_name": "Malloc disk", 00:32:29.083 "block_size": 512, 00:32:29.083 "num_blocks": 65536, 00:32:29.083 "uuid": "580ef58f-9416-46cf-bcdf-1092314d9fb2", 00:32:29.083 "assigned_rate_limits": { 00:32:29.083 "rw_ios_per_sec": 0, 00:32:29.083 "rw_mbytes_per_sec": 0, 00:32:29.083 "r_mbytes_per_sec": 0, 00:32:29.083 "w_mbytes_per_sec": 0 00:32:29.083 }, 00:32:29.083 "claimed": false, 00:32:29.083 "zoned": false, 00:32:29.083 "supported_io_types": { 00:32:29.083 "read": true, 00:32:29.083 "write": true, 00:32:29.083 "unmap": true, 00:32:29.083 "write_zeroes": true, 00:32:29.083 "flush": true, 00:32:29.083 "reset": true, 00:32:29.083 "compare": false, 00:32:29.083 "compare_and_write": false, 00:32:29.083 "abort": true, 00:32:29.083 "nvme_admin": false, 00:32:29.083 "nvme_io": false 00:32:29.083 }, 00:32:29.083 "memory_domains": [ 00:32:29.083 { 00:32:29.083 "dma_device_id": "system", 00:32:29.083 "dma_device_type": 1 00:32:29.083 }, 00:32:29.083 { 00:32:29.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:29.083 "dma_device_type": 2 00:32:29.083 } 00:32:29.083 ], 00:32:29.083 "driver_specific": {} 00:32:29.083 } 00:32:29.083 ] 00:32:29.083 07:43:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:32:29.083 07:43:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:32:29.083 
07:43:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:29.083 07:43:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:29.352 [2024-07-12 07:43:03.093813] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:29.352 [2024-07-12 07:43:03.094104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:29.352 [2024-07-12 07:43:03.094287] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:29.352 [2024-07-12 07:43:03.096942] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:29.352 [2024-07-12 07:43:03.097114] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:29.352 07:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:29.352 07:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:29.352 07:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:29.352 07:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:29.352 07:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:29.352 07:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:29.352 07:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:29.352 07:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:29.352 07:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:29.352 07:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:29.352 07:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:29.352 07:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:29.616 07:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:29.616 "name": "Existed_Raid", 00:32:29.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:29.616 "strip_size_kb": 64, 00:32:29.616 "state": "configuring", 00:32:29.616 "raid_level": "raid5f", 00:32:29.616 "superblock": false, 00:32:29.616 "num_base_bdevs": 4, 00:32:29.616 "num_base_bdevs_discovered": 3, 00:32:29.617 "num_base_bdevs_operational": 4, 00:32:29.617 "base_bdevs_list": [ 00:32:29.617 { 00:32:29.617 "name": "BaseBdev1", 00:32:29.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:29.617 "is_configured": false, 00:32:29.617 "data_offset": 0, 00:32:29.617 "data_size": 0 00:32:29.617 }, 00:32:29.617 { 00:32:29.617 "name": "BaseBdev2", 00:32:29.617 "uuid": "26bd7d5b-3f51-4146-b817-efc06dff6400", 00:32:29.617 "is_configured": true, 00:32:29.617 "data_offset": 0, 00:32:29.617 "data_size": 65536 00:32:29.617 }, 00:32:29.617 { 00:32:29.617 "name": "BaseBdev3", 00:32:29.617 "uuid": "a3acd7cb-6d8b-4f8d-b57c-52c5992667f7", 
00:32:29.617 "is_configured": true, 00:32:29.617 "data_offset": 0, 00:32:29.617 "data_size": 65536 00:32:29.617 }, 00:32:29.617 { 00:32:29.617 "name": "BaseBdev4", 00:32:29.617 "uuid": "580ef58f-9416-46cf-bcdf-1092314d9fb2", 00:32:29.617 "is_configured": true, 00:32:29.617 "data_offset": 0, 00:32:29.617 "data_size": 65536 00:32:29.617 } 00:32:29.617 ] 00:32:29.617 }' 00:32:29.617 07:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:29.617 07:43:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.183 07:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:32:30.441 [2024-07-12 07:43:04.113926] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:30.441 07:43:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:30.441 07:43:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:30.441 07:43:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:30.441 07:43:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:30.441 07:43:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:30.441 07:43:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:30.441 07:43:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:30.441 07:43:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:30.441 07:43:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:30.441 07:43:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:30.441 07:43:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:30.441 07:43:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:30.441 07:43:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:30.441 "name": "Existed_Raid", 00:32:30.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.441 "strip_size_kb": 64, 00:32:30.441 "state": "configuring", 00:32:30.441 "raid_level": "raid5f", 00:32:30.441 "superblock": false, 00:32:30.441 "num_base_bdevs": 4, 00:32:30.441 "num_base_bdevs_discovered": 2, 00:32:30.441 "num_base_bdevs_operational": 4, 00:32:30.441 "base_bdevs_list": [ 00:32:30.441 { 00:32:30.441 "name": "BaseBdev1", 00:32:30.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.441 "is_configured": false, 00:32:30.441 "data_offset": 0, 00:32:30.441 "data_size": 0 00:32:30.441 }, 00:32:30.441 { 00:32:30.441 "name": null, 00:32:30.441 "uuid": "26bd7d5b-3f51-4146-b817-efc06dff6400", 00:32:30.441 "is_configured": false, 00:32:30.441 "data_offset": 0, 00:32:30.441 "data_size": 65536 00:32:30.441 }, 00:32:30.441 { 00:32:30.441 "name": "BaseBdev3", 00:32:30.441 "uuid": "a3acd7cb-6d8b-4f8d-b57c-52c5992667f7", 00:32:30.441 "is_configured": true, 00:32:30.441 "data_offset": 0, 00:32:30.441 "data_size": 65536 00:32:30.441 }, 
00:32:30.441 { 00:32:30.441 "name": "BaseBdev4", 00:32:30.441 "uuid": "580ef58f-9416-46cf-bcdf-1092314d9fb2", 00:32:30.441 "is_configured": true, 00:32:30.441 "data_offset": 0, 00:32:30.441 "data_size": 65536 00:32:30.441 } 00:32:30.441 ] 00:32:30.441 }' 00:32:30.441 07:43:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:30.441 07:43:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.377 07:43:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:31.377 07:43:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:31.377 07:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:32:31.377 07:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:31.636 [2024-07-12 07:43:05.451954] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:31.636 BaseBdev1 00:32:31.636 07:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:32:31.636 07:43:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:32:31.636 07:43:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:31.636 07:43:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:32:31.636 07:43:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:31.636 07:43:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:31.636 07:43:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:31.895 07:43:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:32.154 [ 00:32:32.154 { 00:32:32.154 "name": "BaseBdev1", 00:32:32.154 "aliases": [ 00:32:32.154 "517034c5-8f46-46ef-ab30-4530809dd4b5" 00:32:32.154 ], 00:32:32.154 "product_name": "Malloc disk", 00:32:32.154 "block_size": 512, 00:32:32.154 "num_blocks": 65536, 00:32:32.154 "uuid": "517034c5-8f46-46ef-ab30-4530809dd4b5", 00:32:32.154 "assigned_rate_limits": { 00:32:32.154 "rw_ios_per_sec": 0, 00:32:32.154 "rw_mbytes_per_sec": 0, 00:32:32.154 "r_mbytes_per_sec": 0, 00:32:32.154 "w_mbytes_per_sec": 0 00:32:32.154 }, 00:32:32.154 "claimed": true, 00:32:32.154 "claim_type": "exclusive_write", 00:32:32.154 "zoned": false, 00:32:32.154 "supported_io_types": { 00:32:32.154 "read": true, 00:32:32.154 "write": true, 00:32:32.154 "unmap": true, 00:32:32.154 "write_zeroes": true, 00:32:32.154 "flush": true, 00:32:32.154 "reset": true, 00:32:32.154 "compare": false, 00:32:32.154 "compare_and_write": false, 00:32:32.154 "abort": true, 00:32:32.154 "nvme_admin": false, 00:32:32.154 "nvme_io": false 00:32:32.154 }, 00:32:32.154 "memory_domains": [ 00:32:32.154 { 00:32:32.154 "dma_device_id": "system", 00:32:32.154 "dma_device_type": 1 00:32:32.154 }, 00:32:32.154 { 00:32:32.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:32:32.154 "dma_device_type": 2 00:32:32.154 } 00:32:32.154 ], 00:32:32.154 "driver_specific": {} 00:32:32.155 } 00:32:32.155 ] 00:32:32.155 07:43:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:32:32.155 07:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:32.155 07:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:32.155 07:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:32.155 07:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:32.155 07:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:32.155 07:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:32.155 07:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:32.155 07:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:32.155 07:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:32.155 07:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:32.155 07:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:32.155 07:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:32.414 07:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:32.414 "name": "Existed_Raid", 00:32:32.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:32.414 "strip_size_kb": 64, 00:32:32.414 "state": "configuring", 00:32:32.414 "raid_level": "raid5f", 00:32:32.414 "superblock": false, 00:32:32.414 "num_base_bdevs": 4, 00:32:32.414 "num_base_bdevs_discovered": 3, 00:32:32.414 "num_base_bdevs_operational": 4, 00:32:32.414 "base_bdevs_list": [ 00:32:32.414 { 00:32:32.414 "name": "BaseBdev1", 00:32:32.414 "uuid": "517034c5-8f46-46ef-ab30-4530809dd4b5", 00:32:32.414 "is_configured": true, 00:32:32.414 "data_offset": 0, 00:32:32.414 "data_size": 65536 00:32:32.414 }, 00:32:32.414 { 00:32:32.414 "name": null, 00:32:32.414 "uuid": "26bd7d5b-3f51-4146-b817-efc06dff6400", 00:32:32.414 "is_configured": false, 00:32:32.414 "data_offset": 0, 00:32:32.414 "data_size": 65536 00:32:32.414 }, 00:32:32.414 { 00:32:32.415 "name": "BaseBdev3", 00:32:32.415 "uuid": "a3acd7cb-6d8b-4f8d-b57c-52c5992667f7", 00:32:32.415 "is_configured": true, 00:32:32.415 "data_offset": 0, 00:32:32.415 "data_size": 65536 00:32:32.415 }, 00:32:32.415 { 00:32:32.415 "name": "BaseBdev4", 00:32:32.415 "uuid": "580ef58f-9416-46cf-bcdf-1092314d9fb2", 00:32:32.415 "is_configured": true, 00:32:32.415 "data_offset": 0, 00:32:32.415 "data_size": 65536 00:32:32.415 } 00:32:32.415 ] 00:32:32.415 }' 00:32:32.415 07:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:32.415 07:43:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:32.982 07:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:32:32.982 07:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:32.982 07:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:32:32.982 07:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:32:33.241 [2024-07-12 07:43:06.968290] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:33.241 07:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:33.241 07:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:33.241 07:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:33.241 07:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:33.241 07:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:33.241 07:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:33.241 07:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:33.241 07:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:33.241 07:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:33.241 07:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:33.241 07:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:33.241 07:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:33.499 07:43:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:33.499 "name": "Existed_Raid", 00:32:33.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:33.499 "strip_size_kb": 64, 00:32:33.499 "state": "configuring", 00:32:33.499 "raid_level": "raid5f", 00:32:33.499 "superblock": false, 00:32:33.499 "num_base_bdevs": 4, 00:32:33.499 "num_base_bdevs_discovered": 2, 00:32:33.499 "num_base_bdevs_operational": 4, 00:32:33.499 "base_bdevs_list": [ 00:32:33.499 { 00:32:33.499 "name": "BaseBdev1", 00:32:33.499 "uuid": "517034c5-8f46-46ef-ab30-4530809dd4b5", 00:32:33.499 "is_configured": true, 00:32:33.499 "data_offset": 0, 00:32:33.499 "data_size": 65536 00:32:33.499 }, 00:32:33.499 { 00:32:33.499 "name": null, 00:32:33.499 "uuid": "26bd7d5b-3f51-4146-b817-efc06dff6400", 00:32:33.499 "is_configured": false, 00:32:33.499 "data_offset": 0, 00:32:33.499 "data_size": 65536 00:32:33.499 }, 00:32:33.499 { 00:32:33.499 "name": null, 00:32:33.499 "uuid": "a3acd7cb-6d8b-4f8d-b57c-52c5992667f7", 00:32:33.499 "is_configured": false, 00:32:33.499 "data_offset": 0, 00:32:33.499 "data_size": 65536 00:32:33.499 }, 00:32:33.499 { 00:32:33.499 "name": "BaseBdev4", 00:32:33.499 "uuid": "580ef58f-9416-46cf-bcdf-1092314d9fb2", 00:32:33.499 "is_configured": true, 00:32:33.499 "data_offset": 0, 00:32:33.499 "data_size": 65536 00:32:33.499 } 00:32:33.499 ] 00:32:33.499 }' 00:32:33.499 07:43:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:33.499 07:43:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.068 07:43:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:34.068 07:43:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:34.327 07:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:32:34.327 07:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:32:34.587 [2024-07-12 07:43:08.213888] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:34.587 07:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:34.587 07:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:34.587 07:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:34.587 07:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:34.587 07:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:34.587 07:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:34.587 07:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:34.587 07:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:34.587 07:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:34.587 07:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:34.587 07:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:34.587 07:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:34.846 07:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:34.846 "name": "Existed_Raid", 00:32:34.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:34.846 "strip_size_kb": 64, 00:32:34.846 "state": "configuring", 00:32:34.846 "raid_level": "raid5f", 00:32:34.846 "superblock": false, 00:32:34.846 "num_base_bdevs": 4, 00:32:34.846 "num_base_bdevs_discovered": 3, 00:32:34.846 "num_base_bdevs_operational": 4, 00:32:34.846 "base_bdevs_list": [ 00:32:34.846 { 00:32:34.846 "name": "BaseBdev1", 00:32:34.846 "uuid": "517034c5-8f46-46ef-ab30-4530809dd4b5", 00:32:34.846 "is_configured": true, 00:32:34.846 "data_offset": 0, 00:32:34.846 "data_size": 65536 00:32:34.846 }, 00:32:34.846 { 00:32:34.846 "name": null, 00:32:34.846 "uuid": "26bd7d5b-3f51-4146-b817-efc06dff6400", 00:32:34.846 "is_configured": false, 00:32:34.846 "data_offset": 0, 00:32:34.846 "data_size": 65536 00:32:34.846 }, 00:32:34.846 { 00:32:34.846 "name": "BaseBdev3", 00:32:34.846 "uuid": "a3acd7cb-6d8b-4f8d-b57c-52c5992667f7", 00:32:34.846 "is_configured": true, 00:32:34.846 "data_offset": 0, 00:32:34.846 
"data_size": 65536 00:32:34.846 }, 00:32:34.846 { 00:32:34.846 "name": "BaseBdev4", 00:32:34.846 "uuid": "580ef58f-9416-46cf-bcdf-1092314d9fb2", 00:32:34.846 "is_configured": true, 00:32:34.846 "data_offset": 0, 00:32:34.846 "data_size": 65536 00:32:34.846 } 00:32:34.846 ] 00:32:34.846 }' 00:32:34.846 07:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:34.846 07:43:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.413 07:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:35.413 07:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:35.413 07:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:32:35.413 07:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:35.672 [2024-07-12 07:43:09.362100] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:35.672 07:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:35.672 07:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:35.672 07:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:35.672 07:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:35.672 07:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:35.672 07:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:35.672 07:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:35.672 07:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:35.672 07:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:35.672 07:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:35.672 07:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:35.672 07:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:35.930 07:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:35.930 "name": "Existed_Raid", 00:32:35.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:35.930 "strip_size_kb": 64, 00:32:35.930 "state": "configuring", 00:32:35.930 "raid_level": "raid5f", 00:32:35.930 "superblock": false, 00:32:35.930 "num_base_bdevs": 4, 00:32:35.930 "num_base_bdevs_discovered": 2, 00:32:35.930 "num_base_bdevs_operational": 4, 00:32:35.930 "base_bdevs_list": [ 00:32:35.930 { 00:32:35.930 "name": null, 00:32:35.930 "uuid": "517034c5-8f46-46ef-ab30-4530809dd4b5", 00:32:35.930 "is_configured": false, 00:32:35.930 "data_offset": 0, 00:32:35.930 "data_size": 65536 00:32:35.930 }, 00:32:35.930 { 00:32:35.930 "name": null, 00:32:35.930 "uuid": "26bd7d5b-3f51-4146-b817-efc06dff6400", 
00:32:35.930 "is_configured": false, 00:32:35.930 "data_offset": 0, 00:32:35.930 "data_size": 65536 00:32:35.930 }, 00:32:35.930 { 00:32:35.930 "name": "BaseBdev3", 00:32:35.930 "uuid": "a3acd7cb-6d8b-4f8d-b57c-52c5992667f7", 00:32:35.930 "is_configured": true, 00:32:35.930 "data_offset": 0, 00:32:35.930 "data_size": 65536 00:32:35.930 }, 00:32:35.930 { 00:32:35.930 "name": "BaseBdev4", 00:32:35.930 "uuid": "580ef58f-9416-46cf-bcdf-1092314d9fb2", 00:32:35.930 "is_configured": true, 00:32:35.930 "data_offset": 0, 00:32:35.930 "data_size": 65536 00:32:35.930 } 00:32:35.930 ] 00:32:35.930 }' 00:32:35.930 07:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:35.930 07:43:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:36.497 07:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:36.497 07:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:36.497 07:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:32:36.497 07:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:32:36.755 [2024-07-12 07:43:10.432810] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:36.755 07:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:36.755 07:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:36.755 07:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:36.755 07:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:36.755 07:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:36.755 07:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:36.755 07:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:36.755 07:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:36.755 07:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:36.755 07:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:36.755 07:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:36.755 07:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:37.013 07:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:37.013 "name": "Existed_Raid", 00:32:37.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.013 "strip_size_kb": 64, 00:32:37.013 "state": "configuring", 00:32:37.013 "raid_level": "raid5f", 00:32:37.013 "superblock": false, 00:32:37.013 "num_base_bdevs": 4, 00:32:37.013 "num_base_bdevs_discovered": 3, 00:32:37.013 "num_base_bdevs_operational": 4, 00:32:37.013 
"base_bdevs_list": [ 00:32:37.013 { 00:32:37.013 "name": null, 00:32:37.013 "uuid": "517034c5-8f46-46ef-ab30-4530809dd4b5", 00:32:37.013 "is_configured": false, 00:32:37.013 "data_offset": 0, 00:32:37.013 "data_size": 65536 00:32:37.013 }, 00:32:37.013 { 00:32:37.013 "name": "BaseBdev2", 00:32:37.013 "uuid": "26bd7d5b-3f51-4146-b817-efc06dff6400", 00:32:37.013 "is_configured": true, 00:32:37.013 "data_offset": 0, 00:32:37.013 "data_size": 65536 00:32:37.013 }, 00:32:37.013 { 00:32:37.013 "name": "BaseBdev3", 00:32:37.013 "uuid": "a3acd7cb-6d8b-4f8d-b57c-52c5992667f7", 00:32:37.013 "is_configured": true, 00:32:37.013 "data_offset": 0, 00:32:37.013 "data_size": 65536 00:32:37.013 }, 00:32:37.013 { 00:32:37.013 "name": "BaseBdev4", 00:32:37.013 "uuid": "580ef58f-9416-46cf-bcdf-1092314d9fb2", 00:32:37.013 "is_configured": true, 00:32:37.013 "data_offset": 0, 00:32:37.013 "data_size": 65536 00:32:37.013 } 00:32:37.013 ] 00:32:37.013 }' 00:32:37.013 07:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:37.013 07:43:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:37.581 07:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:37.581 07:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:37.839 07:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:32:37.839 07:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:37.839 07:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:32:38.098 07:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 517034c5-8f46-46ef-ab30-4530809dd4b5 00:32:38.098 [2024-07-12 07:43:11.959984] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:32:38.098 [2024-07-12 07:43:11.960234] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:32:38.098 [2024-07-12 07:43:11.960273] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:32:38.098 [2024-07-12 07:43:11.960427] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:32:38.098 [2024-07-12 07:43:11.961151] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:32:38.098 [2024-07-12 07:43:11.961284] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008180 00:32:38.098 [2024-07-12 07:43:11.961537] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:38.098 NewBaseBdev 00:32:38.098 07:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:32:38.098 07:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:32:38.098 07:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:38.098 07:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:32:38.098 07:43:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:38.098 07:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:38.098 07:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:38.358 07:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:32:38.618 [ 00:32:38.618 { 00:32:38.618 "name": "NewBaseBdev", 00:32:38.618 "aliases": [ 00:32:38.618 "517034c5-8f46-46ef-ab30-4530809dd4b5" 00:32:38.618 ], 00:32:38.618 "product_name": "Malloc disk", 00:32:38.618 "block_size": 512, 00:32:38.618 "num_blocks": 65536, 00:32:38.618 "uuid": "517034c5-8f46-46ef-ab30-4530809dd4b5", 00:32:38.618 "assigned_rate_limits": { 00:32:38.618 "rw_ios_per_sec": 0, 00:32:38.618 "rw_mbytes_per_sec": 0, 00:32:38.618 "r_mbytes_per_sec": 0, 00:32:38.618 "w_mbytes_per_sec": 0 00:32:38.618 }, 00:32:38.618 "claimed": true, 00:32:38.618 "claim_type": "exclusive_write", 00:32:38.618 "zoned": false, 00:32:38.618 "supported_io_types": { 00:32:38.618 "read": true, 00:32:38.618 "write": true, 00:32:38.618 "unmap": true, 00:32:38.618 "write_zeroes": true, 00:32:38.618 "flush": true, 00:32:38.618 "reset": true, 00:32:38.618 "compare": false, 00:32:38.618 "compare_and_write": false, 00:32:38.618 "abort": true, 00:32:38.618 "nvme_admin": false, 00:32:38.618 "nvme_io": false 00:32:38.618 }, 00:32:38.618 "memory_domains": [ 00:32:38.618 { 00:32:38.618 "dma_device_id": "system", 00:32:38.618 "dma_device_type": 1 00:32:38.618 }, 00:32:38.618 { 00:32:38.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:38.618 "dma_device_type": 2 00:32:38.618 } 00:32:38.618 ], 00:32:38.618 "driver_specific": {} 00:32:38.618 } 00:32:38.618 ] 00:32:38.618 07:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:32:38.618 07:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:32:38.618 07:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:38.618 07:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:38.618 07:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:38.618 07:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:38.618 07:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:38.618 07:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:38.618 07:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:38.618 07:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:38.618 07:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:38.618 07:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:38.618 07:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:32:38.878 07:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:38.878 "name": "Existed_Raid", 00:32:38.878 "uuid": "81d8218b-df69-489a-a685-6a24f0d36264", 00:32:38.878 "strip_size_kb": 64, 00:32:38.878 "state": "online", 00:32:38.878 "raid_level": "raid5f", 00:32:38.878 "superblock": false, 00:32:38.878 "num_base_bdevs": 4, 00:32:38.878 "num_base_bdevs_discovered": 4, 00:32:38.878 "num_base_bdevs_operational": 4, 00:32:38.878 "base_bdevs_list": [ 00:32:38.878 { 00:32:38.878 "name": "NewBaseBdev", 00:32:38.878 "uuid": "517034c5-8f46-46ef-ab30-4530809dd4b5", 00:32:38.878 "is_configured": true, 00:32:38.878 "data_offset": 0, 00:32:38.878 "data_size": 65536 00:32:38.878 }, 00:32:38.878 { 00:32:38.878 "name": "BaseBdev2", 00:32:38.878 "uuid": "26bd7d5b-3f51-4146-b817-efc06dff6400", 00:32:38.878 "is_configured": true, 00:32:38.878 "data_offset": 0, 00:32:38.878 "data_size": 65536 00:32:38.878 }, 00:32:38.878 { 00:32:38.878 "name": "BaseBdev3", 00:32:38.878 "uuid": "a3acd7cb-6d8b-4f8d-b57c-52c5992667f7", 00:32:38.878 "is_configured": true, 00:32:38.878 "data_offset": 0, 00:32:38.878 "data_size": 65536 00:32:38.878 }, 00:32:38.878 { 00:32:38.878 "name": "BaseBdev4", 00:32:38.878 "uuid": "580ef58f-9416-46cf-bcdf-1092314d9fb2", 00:32:38.878 "is_configured": true, 00:32:38.878 "data_offset": 0, 00:32:38.878 "data_size": 65536 00:32:38.878 } 00:32:38.878 ] 00:32:38.878 }' 00:32:38.878 07:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:38.878 07:43:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:39.446 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:32:39.446 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:39.446 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:39.446 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:39.446 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:39.446 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:32:39.446 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:39.446 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:39.446 [2024-07-12 07:43:13.324389] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:39.704 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:39.704 "name": "Existed_Raid", 00:32:39.704 "aliases": [ 00:32:39.705 "81d8218b-df69-489a-a685-6a24f0d36264" 00:32:39.705 ], 00:32:39.705 "product_name": "Raid Volume", 00:32:39.705 "block_size": 512, 00:32:39.705 "num_blocks": 196608, 00:32:39.705 "uuid": "81d8218b-df69-489a-a685-6a24f0d36264", 00:32:39.705 "assigned_rate_limits": { 00:32:39.705 "rw_ios_per_sec": 0, 00:32:39.705 "rw_mbytes_per_sec": 0, 00:32:39.705 "r_mbytes_per_sec": 0, 00:32:39.705 "w_mbytes_per_sec": 0 00:32:39.705 }, 00:32:39.705 "claimed": false, 00:32:39.705 "zoned": false, 00:32:39.705 "supported_io_types": { 00:32:39.705 "read": true, 00:32:39.705 "write": true, 00:32:39.705 "unmap": false, 00:32:39.705 "write_zeroes": 
true, 00:32:39.705 "flush": false, 00:32:39.705 "reset": true, 00:32:39.705 "compare": false, 00:32:39.705 "compare_and_write": false, 00:32:39.705 "abort": false, 00:32:39.705 "nvme_admin": false, 00:32:39.705 "nvme_io": false 00:32:39.705 }, 00:32:39.705 "driver_specific": { 00:32:39.705 "raid": { 00:32:39.705 "uuid": "81d8218b-df69-489a-a685-6a24f0d36264", 00:32:39.705 "strip_size_kb": 64, 00:32:39.705 "state": "online", 00:32:39.705 "raid_level": "raid5f", 00:32:39.705 "superblock": false, 00:32:39.705 "num_base_bdevs": 4, 00:32:39.705 "num_base_bdevs_discovered": 4, 00:32:39.705 "num_base_bdevs_operational": 4, 00:32:39.705 "base_bdevs_list": [ 00:32:39.705 { 00:32:39.705 "name": "NewBaseBdev", 00:32:39.705 "uuid": "517034c5-8f46-46ef-ab30-4530809dd4b5", 00:32:39.705 "is_configured": true, 00:32:39.705 "data_offset": 0, 00:32:39.705 "data_size": 65536 00:32:39.705 }, 00:32:39.705 { 00:32:39.705 "name": "BaseBdev2", 00:32:39.705 "uuid": "26bd7d5b-3f51-4146-b817-efc06dff6400", 00:32:39.705 "is_configured": true, 00:32:39.705 "data_offset": 0, 00:32:39.705 "data_size": 65536 00:32:39.705 }, 00:32:39.705 { 00:32:39.705 "name": "BaseBdev3", 00:32:39.705 "uuid": "a3acd7cb-6d8b-4f8d-b57c-52c5992667f7", 00:32:39.705 "is_configured": true, 00:32:39.705 "data_offset": 0, 00:32:39.705 "data_size": 65536 00:32:39.705 }, 00:32:39.705 { 00:32:39.705 "name": "BaseBdev4", 00:32:39.705 "uuid": "580ef58f-9416-46cf-bcdf-1092314d9fb2", 00:32:39.705 "is_configured": true, 00:32:39.705 "data_offset": 0, 00:32:39.705 "data_size": 65536 00:32:39.705 } 00:32:39.705 ] 00:32:39.705 } 00:32:39.705 } 00:32:39.705 }' 00:32:39.705 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:39.705 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:32:39.705 BaseBdev2 00:32:39.705 BaseBdev3 00:32:39.705 BaseBdev4' 00:32:39.705 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:39.705 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:32:39.705 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:39.964 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:39.964 "name": "NewBaseBdev", 00:32:39.964 "aliases": [ 00:32:39.964 "517034c5-8f46-46ef-ab30-4530809dd4b5" 00:32:39.964 ], 00:32:39.964 "product_name": "Malloc disk", 00:32:39.964 "block_size": 512, 00:32:39.964 "num_blocks": 65536, 00:32:39.964 "uuid": "517034c5-8f46-46ef-ab30-4530809dd4b5", 00:32:39.964 "assigned_rate_limits": { 00:32:39.964 "rw_ios_per_sec": 0, 00:32:39.964 "rw_mbytes_per_sec": 0, 00:32:39.964 "r_mbytes_per_sec": 0, 00:32:39.964 "w_mbytes_per_sec": 0 00:32:39.964 }, 00:32:39.964 "claimed": true, 00:32:39.964 "claim_type": "exclusive_write", 00:32:39.964 "zoned": false, 00:32:39.964 "supported_io_types": { 00:32:39.964 "read": true, 00:32:39.964 "write": true, 00:32:39.964 "unmap": true, 00:32:39.964 "write_zeroes": true, 00:32:39.964 "flush": true, 00:32:39.964 "reset": true, 00:32:39.964 "compare": false, 00:32:39.964 "compare_and_write": false, 00:32:39.964 "abort": true, 00:32:39.964 "nvme_admin": false, 00:32:39.964 "nvme_io": false 00:32:39.964 }, 00:32:39.964 "memory_domains": [ 00:32:39.964 { 00:32:39.964 
"dma_device_id": "system", 00:32:39.964 "dma_device_type": 1 00:32:39.964 }, 00:32:39.964 { 00:32:39.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:39.964 "dma_device_type": 2 00:32:39.964 } 00:32:39.964 ], 00:32:39.964 "driver_specific": {} 00:32:39.964 }' 00:32:39.964 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:39.964 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:39.964 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:39.964 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:39.964 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:39.964 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:39.964 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:40.223 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:40.223 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:40.223 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:40.223 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:40.223 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:40.223 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:40.223 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:40.223 07:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:40.481 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:40.481 "name": "BaseBdev2", 00:32:40.481 "aliases": [ 00:32:40.481 "26bd7d5b-3f51-4146-b817-efc06dff6400" 00:32:40.481 ], 00:32:40.481 "product_name": "Malloc disk", 00:32:40.481 "block_size": 512, 00:32:40.481 "num_blocks": 65536, 00:32:40.481 "uuid": "26bd7d5b-3f51-4146-b817-efc06dff6400", 00:32:40.481 "assigned_rate_limits": { 00:32:40.481 "rw_ios_per_sec": 0, 00:32:40.481 "rw_mbytes_per_sec": 0, 00:32:40.481 "r_mbytes_per_sec": 0, 00:32:40.481 "w_mbytes_per_sec": 0 00:32:40.481 }, 00:32:40.481 "claimed": true, 00:32:40.481 "claim_type": "exclusive_write", 00:32:40.481 "zoned": false, 00:32:40.481 "supported_io_types": { 00:32:40.481 "read": true, 00:32:40.481 "write": true, 00:32:40.481 "unmap": true, 00:32:40.481 "write_zeroes": true, 00:32:40.481 "flush": true, 00:32:40.481 "reset": true, 00:32:40.481 "compare": false, 00:32:40.481 "compare_and_write": false, 00:32:40.481 "abort": true, 00:32:40.481 "nvme_admin": false, 00:32:40.481 "nvme_io": false 00:32:40.481 }, 00:32:40.481 "memory_domains": [ 00:32:40.481 { 00:32:40.481 "dma_device_id": "system", 00:32:40.481 "dma_device_type": 1 00:32:40.481 }, 00:32:40.481 { 00:32:40.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:40.481 "dma_device_type": 2 00:32:40.481 } 00:32:40.481 ], 00:32:40.481 "driver_specific": {} 00:32:40.481 }' 00:32:40.481 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:40.481 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:32:40.481 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:40.481 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:40.481 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:40.481 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:40.481 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:40.740 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:40.740 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:40.740 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:40.740 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:40.740 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:40.740 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:40.740 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:32:40.740 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:40.999 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:40.999 "name": "BaseBdev3", 00:32:40.999 "aliases": [ 00:32:40.999 "a3acd7cb-6d8b-4f8d-b57c-52c5992667f7" 00:32:40.999 ], 00:32:40.999 "product_name": "Malloc disk", 00:32:40.999 "block_size": 512, 00:32:40.999 "num_blocks": 65536, 00:32:40.999 "uuid": "a3acd7cb-6d8b-4f8d-b57c-52c5992667f7", 00:32:40.999 "assigned_rate_limits": { 00:32:40.999 "rw_ios_per_sec": 0, 00:32:40.999 "rw_mbytes_per_sec": 0, 00:32:40.999 "r_mbytes_per_sec": 0, 00:32:40.999 "w_mbytes_per_sec": 0 00:32:40.999 }, 00:32:40.999 "claimed": true, 00:32:40.999 "claim_type": "exclusive_write", 00:32:40.999 "zoned": false, 00:32:40.999 "supported_io_types": { 00:32:40.999 "read": true, 00:32:40.999 "write": true, 00:32:40.999 "unmap": true, 00:32:40.999 "write_zeroes": true, 00:32:40.999 "flush": true, 00:32:40.999 "reset": true, 00:32:40.999 "compare": false, 00:32:40.999 "compare_and_write": false, 00:32:40.999 "abort": true, 00:32:40.999 "nvme_admin": false, 00:32:40.999 "nvme_io": false 00:32:40.999 }, 00:32:40.999 "memory_domains": [ 00:32:40.999 { 00:32:40.999 "dma_device_id": "system", 00:32:40.999 "dma_device_type": 1 00:32:40.999 }, 00:32:40.999 { 00:32:40.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:40.999 "dma_device_type": 2 00:32:40.999 } 00:32:40.999 ], 00:32:40.999 "driver_specific": {} 00:32:40.999 }' 00:32:40.999 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:40.999 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:40.999 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:40.999 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:41.258 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:41.258 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:41.258 07:43:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:41.258 07:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:41.258 07:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:41.258 07:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:41.258 07:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:41.258 07:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:41.258 07:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:41.258 07:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:41.258 07:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:32:41.516 07:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:41.516 "name": "BaseBdev4", 00:32:41.516 "aliases": [ 00:32:41.516 "580ef58f-9416-46cf-bcdf-1092314d9fb2" 00:32:41.516 ], 00:32:41.516 "product_name": "Malloc disk", 00:32:41.516 "block_size": 512, 00:32:41.516 "num_blocks": 65536, 00:32:41.516 "uuid": "580ef58f-9416-46cf-bcdf-1092314d9fb2", 00:32:41.516 "assigned_rate_limits": { 00:32:41.516 "rw_ios_per_sec": 0, 00:32:41.516 "rw_mbytes_per_sec": 0, 00:32:41.516 "r_mbytes_per_sec": 0, 00:32:41.516 "w_mbytes_per_sec": 0 00:32:41.516 }, 00:32:41.516 "claimed": true, 00:32:41.516 "claim_type": "exclusive_write", 00:32:41.516 "zoned": false, 00:32:41.516 "supported_io_types": { 00:32:41.516 "read": true, 00:32:41.516 "write": true, 00:32:41.516 "unmap": true, 00:32:41.516 "write_zeroes": true, 00:32:41.516 "flush": true, 00:32:41.516 "reset": true, 00:32:41.516 "compare": false, 00:32:41.516 "compare_and_write": false, 00:32:41.516 "abort": true, 00:32:41.516 "nvme_admin": false, 00:32:41.516 "nvme_io": false 00:32:41.516 }, 00:32:41.516 "memory_domains": [ 00:32:41.516 { 00:32:41.516 "dma_device_id": "system", 00:32:41.516 "dma_device_type": 1 00:32:41.516 }, 00:32:41.516 { 00:32:41.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:41.516 "dma_device_type": 2 00:32:41.516 } 00:32:41.516 ], 00:32:41.516 "driver_specific": {} 00:32:41.516 }' 00:32:41.516 07:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:41.775 07:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:41.775 07:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:41.775 07:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:41.775 07:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:41.775 07:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:41.775 07:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:41.775 07:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:41.775 07:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:41.775 07:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:42.033 07:43:15 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:42.033 07:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:42.033 07:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:42.293 [2024-07-12 07:43:15.964733] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:42.293 [2024-07-12 07:43:15.964898] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:42.293 [2024-07-12 07:43:15.965074] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:42.293 [2024-07-12 07:43:15.965378] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:42.293 [2024-07-12 07:43:15.965481] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name Existed_Raid, state offline 00:32:42.293 07:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 163480 00:32:42.293 07:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 163480 ']' 00:32:42.293 07:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # kill -0 163480 00:32:42.293 07:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@951 -- # uname 00:32:42.293 07:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:42.293 07:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 163480 00:32:42.293 07:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:42.293 07:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:42.293 07:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 163480' 00:32:42.293 killing process with pid 163480 00:32:42.293 07:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@965 -- # kill 163480 00:32:42.293 07:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # wait 163480 00:32:42.293 [2024-07-12 07:43:16.022320] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:42.293 [2024-07-12 07:43:16.061529] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:42.553 ************************************ 00:32:42.553 END TEST raid5f_state_function_test 00:32:42.553 ************************************ 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:32:42.553 00:32:42.553 real 0m28.721s 00:32:42.553 user 0m53.367s 00:32:42.553 sys 0m4.990s 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:42.553 07:43:16 bdev_raid -- bdev/bdev_raid.sh@887 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:32:42.553 07:43:16 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:32:42.553 07:43:16 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:42.553 07:43:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:42.553 ************************************ 
00:32:42.553 START TEST raid5f_state_function_test_sb 00:32:42.553 ************************************ 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid5f 4 true 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:32:42.553 
07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=164513 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 164513' 00:32:42.553 Process raid pid: 164513 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 164513 /var/tmp/spdk-raid.sock 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 164513 ']' 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:42.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:42.553 07:43:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.824 [2024-07-12 07:43:16.446503] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:42.824 [2024-07-12 07:43:16.446880] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:42.824 [2024-07-12 07:43:16.588366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.824 [2024-07-12 07:43:16.633634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:42.824 [2024-07-12 07:43:16.676916] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:43.098 07:43:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:43.098 07:43:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:32:43.098 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:43.098 [2024-07-12 07:43:16.878078] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:43.098 [2024-07-12 07:43:16.878251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:43.098 [2024-07-12 07:43:16.878387] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:43.098 [2024-07-12 07:43:16.878438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:43.098 [2024-07-12 07:43:16.878619] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:43.098 [2024-07-12 07:43:16.878685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:43.098 [2024-07-12 07:43:16.878712] bdev.c:8114:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:32:43.098 [2024-07-12 07:43:16.878764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:43.098 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:43.098 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:43.098 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:43.098 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:43.098 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:43.098 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:43.098 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:43.098 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:43.098 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:43.098 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:43.098 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:43.098 07:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:43.357 07:43:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:43.357 "name": "Existed_Raid", 00:32:43.357 "uuid": "6005fad5-5c4e-4da5-9c51-0dfe17203583", 00:32:43.357 "strip_size_kb": 64, 00:32:43.357 "state": "configuring", 00:32:43.357 "raid_level": "raid5f", 00:32:43.357 "superblock": true, 00:32:43.357 "num_base_bdevs": 4, 00:32:43.357 "num_base_bdevs_discovered": 0, 00:32:43.357 "num_base_bdevs_operational": 4, 00:32:43.357 "base_bdevs_list": [ 00:32:43.357 { 00:32:43.357 "name": "BaseBdev1", 00:32:43.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:43.357 "is_configured": false, 00:32:43.357 "data_offset": 0, 00:32:43.357 "data_size": 0 00:32:43.357 }, 00:32:43.357 { 00:32:43.357 "name": "BaseBdev2", 00:32:43.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:43.357 "is_configured": false, 00:32:43.357 "data_offset": 0, 00:32:43.357 "data_size": 0 00:32:43.357 }, 00:32:43.357 { 00:32:43.357 "name": "BaseBdev3", 00:32:43.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:43.357 "is_configured": false, 00:32:43.357 "data_offset": 0, 00:32:43.357 "data_size": 0 00:32:43.357 }, 00:32:43.357 { 00:32:43.357 "name": "BaseBdev4", 00:32:43.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:43.357 "is_configured": false, 00:32:43.357 "data_offset": 0, 00:32:43.357 "data_size": 0 00:32:43.357 } 00:32:43.357 ] 00:32:43.357 }' 00:32:43.357 07:43:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:43.357 07:43:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:43.924 07:43:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 
00:32:44.184 [2024-07-12 07:43:17.978079] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:44.184 [2024-07-12 07:43:17.978240] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:32:44.184 07:43:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:44.443 [2024-07-12 07:43:18.222159] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:44.443 [2024-07-12 07:43:18.222399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:44.443 [2024-07-12 07:43:18.222478] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:44.443 [2024-07-12 07:43:18.222533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:44.443 [2024-07-12 07:43:18.222560] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:44.443 [2024-07-12 07:43:18.222660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:44.443 [2024-07-12 07:43:18.222690] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:44.443 [2024-07-12 07:43:18.222748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:44.443 07:43:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:44.703 [2024-07-12 07:43:18.411126] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:44.703 BaseBdev1 00:32:44.703 07:43:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:32:44.703 07:43:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:32:44.703 07:43:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:44.703 07:43:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:32:44.703 07:43:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:44.703 07:43:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:44.703 07:43:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:44.963 07:43:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:45.221 [ 00:32:45.221 { 00:32:45.221 "name": "BaseBdev1", 00:32:45.221 "aliases": [ 00:32:45.222 "71261a2f-04bc-4d5c-b41c-d8e7a9d8eb1a" 00:32:45.222 ], 00:32:45.222 "product_name": "Malloc disk", 00:32:45.222 "block_size": 512, 00:32:45.222 "num_blocks": 65536, 00:32:45.222 "uuid": "71261a2f-04bc-4d5c-b41c-d8e7a9d8eb1a", 00:32:45.222 "assigned_rate_limits": { 00:32:45.222 "rw_ios_per_sec": 0, 00:32:45.222 "rw_mbytes_per_sec": 0, 00:32:45.222 "r_mbytes_per_sec": 0, 00:32:45.222 "w_mbytes_per_sec": 0 00:32:45.222 
}, 00:32:45.222 "claimed": true, 00:32:45.222 "claim_type": "exclusive_write", 00:32:45.222 "zoned": false, 00:32:45.222 "supported_io_types": { 00:32:45.222 "read": true, 00:32:45.222 "write": true, 00:32:45.222 "unmap": true, 00:32:45.222 "write_zeroes": true, 00:32:45.222 "flush": true, 00:32:45.222 "reset": true, 00:32:45.222 "compare": false, 00:32:45.222 "compare_and_write": false, 00:32:45.222 "abort": true, 00:32:45.222 "nvme_admin": false, 00:32:45.222 "nvme_io": false 00:32:45.222 }, 00:32:45.222 "memory_domains": [ 00:32:45.222 { 00:32:45.222 "dma_device_id": "system", 00:32:45.222 "dma_device_type": 1 00:32:45.222 }, 00:32:45.222 { 00:32:45.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:45.222 "dma_device_type": 2 00:32:45.222 } 00:32:45.222 ], 00:32:45.222 "driver_specific": {} 00:32:45.222 } 00:32:45.222 ] 00:32:45.222 07:43:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:32:45.222 07:43:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:45.222 07:43:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:45.222 07:43:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:45.222 07:43:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:45.222 07:43:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:45.222 07:43:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:45.222 07:43:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:45.222 07:43:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:45.222 07:43:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:45.222 07:43:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:45.222 07:43:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:45.222 07:43:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:45.222 07:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:45.222 "name": "Existed_Raid", 00:32:45.222 "uuid": "176d245e-95ef-4ded-aa83-d6567706ddaf", 00:32:45.222 "strip_size_kb": 64, 00:32:45.222 "state": "configuring", 00:32:45.222 "raid_level": "raid5f", 00:32:45.222 "superblock": true, 00:32:45.222 "num_base_bdevs": 4, 00:32:45.222 "num_base_bdevs_discovered": 1, 00:32:45.222 "num_base_bdevs_operational": 4, 00:32:45.222 "base_bdevs_list": [ 00:32:45.222 { 00:32:45.222 "name": "BaseBdev1", 00:32:45.222 "uuid": "71261a2f-04bc-4d5c-b41c-d8e7a9d8eb1a", 00:32:45.222 "is_configured": true, 00:32:45.222 "data_offset": 2048, 00:32:45.222 "data_size": 63488 00:32:45.222 }, 00:32:45.222 { 00:32:45.222 "name": "BaseBdev2", 00:32:45.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:45.222 "is_configured": false, 00:32:45.222 "data_offset": 0, 00:32:45.222 "data_size": 0 00:32:45.222 }, 00:32:45.222 { 00:32:45.222 "name": "BaseBdev3", 00:32:45.222 "uuid": "00000000-0000-0000-0000-000000000000", 
00:32:45.222 "is_configured": false, 00:32:45.222 "data_offset": 0, 00:32:45.222 "data_size": 0 00:32:45.222 }, 00:32:45.222 { 00:32:45.222 "name": "BaseBdev4", 00:32:45.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:45.222 "is_configured": false, 00:32:45.222 "data_offset": 0, 00:32:45.222 "data_size": 0 00:32:45.222 } 00:32:45.222 ] 00:32:45.222 }' 00:32:45.222 07:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:45.222 07:43:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.791 07:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:46.051 [2024-07-12 07:43:19.731415] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:46.051 [2024-07-12 07:43:19.731626] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:32:46.051 07:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:46.310 [2024-07-12 07:43:19.995521] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:46.310 [2024-07-12 07:43:19.997719] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:46.310 [2024-07-12 07:43:19.997899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:46.310 [2024-07-12 07:43:19.997973] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:46.310 [2024-07-12 07:43:19.998025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:46.310 [2024-07-12 07:43:19.998182] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:46.310 [2024-07-12 07:43:19.998232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:46.310 07:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:32:46.310 07:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:46.310 07:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:46.310 07:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:46.310 07:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:46.310 07:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:46.311 07:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:46.311 07:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:46.311 07:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:46.311 07:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:46.311 07:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:32:46.311 07:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:46.311 07:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:46.311 07:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:46.570 07:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:46.570 "name": "Existed_Raid", 00:32:46.570 "uuid": "1df4f331-b484-4a5d-94d3-585831ec72bd", 00:32:46.570 "strip_size_kb": 64, 00:32:46.570 "state": "configuring", 00:32:46.570 "raid_level": "raid5f", 00:32:46.570 "superblock": true, 00:32:46.570 "num_base_bdevs": 4, 00:32:46.570 "num_base_bdevs_discovered": 1, 00:32:46.570 "num_base_bdevs_operational": 4, 00:32:46.570 "base_bdevs_list": [ 00:32:46.570 { 00:32:46.570 "name": "BaseBdev1", 00:32:46.570 "uuid": "71261a2f-04bc-4d5c-b41c-d8e7a9d8eb1a", 00:32:46.570 "is_configured": true, 00:32:46.570 "data_offset": 2048, 00:32:46.570 "data_size": 63488 00:32:46.570 }, 00:32:46.570 { 00:32:46.570 "name": "BaseBdev2", 00:32:46.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:46.570 "is_configured": false, 00:32:46.570 "data_offset": 0, 00:32:46.570 "data_size": 0 00:32:46.570 }, 00:32:46.570 { 00:32:46.570 "name": "BaseBdev3", 00:32:46.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:46.570 "is_configured": false, 00:32:46.570 "data_offset": 0, 00:32:46.570 "data_size": 0 00:32:46.570 }, 00:32:46.570 { 00:32:46.570 "name": "BaseBdev4", 00:32:46.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:46.570 "is_configured": false, 00:32:46.570 "data_offset": 0, 00:32:46.570 "data_size": 0 00:32:46.570 } 00:32:46.570 ] 00:32:46.570 }' 00:32:46.570 07:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:46.570 07:43:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:47.139 07:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:32:47.139 [2024-07-12 07:43:20.948390] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:47.139 BaseBdev2 00:32:47.139 07:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:32:47.139 07:43:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:32:47.139 07:43:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:47.139 07:43:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:32:47.139 07:43:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:47.139 07:43:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:47.139 07:43:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:47.398 07:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:47.657 [ 00:32:47.657 { 
00:32:47.657 "name": "BaseBdev2", 00:32:47.657 "aliases": [ 00:32:47.657 "8f5ba05f-d37a-4791-ad8b-ccc588ea14d4" 00:32:47.657 ], 00:32:47.657 "product_name": "Malloc disk", 00:32:47.657 "block_size": 512, 00:32:47.657 "num_blocks": 65536, 00:32:47.657 "uuid": "8f5ba05f-d37a-4791-ad8b-ccc588ea14d4", 00:32:47.657 "assigned_rate_limits": { 00:32:47.657 "rw_ios_per_sec": 0, 00:32:47.657 "rw_mbytes_per_sec": 0, 00:32:47.657 "r_mbytes_per_sec": 0, 00:32:47.657 "w_mbytes_per_sec": 0 00:32:47.657 }, 00:32:47.657 "claimed": true, 00:32:47.657 "claim_type": "exclusive_write", 00:32:47.657 "zoned": false, 00:32:47.658 "supported_io_types": { 00:32:47.658 "read": true, 00:32:47.658 "write": true, 00:32:47.658 "unmap": true, 00:32:47.658 "write_zeroes": true, 00:32:47.658 "flush": true, 00:32:47.658 "reset": true, 00:32:47.658 "compare": false, 00:32:47.658 "compare_and_write": false, 00:32:47.658 "abort": true, 00:32:47.658 "nvme_admin": false, 00:32:47.658 "nvme_io": false 00:32:47.658 }, 00:32:47.658 "memory_domains": [ 00:32:47.658 { 00:32:47.658 "dma_device_id": "system", 00:32:47.658 "dma_device_type": 1 00:32:47.658 }, 00:32:47.658 { 00:32:47.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:47.658 "dma_device_type": 2 00:32:47.658 } 00:32:47.658 ], 00:32:47.658 "driver_specific": {} 00:32:47.658 } 00:32:47.658 ] 00:32:47.658 07:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:32:47.658 07:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:47.658 07:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:47.658 07:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:47.658 07:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:47.658 07:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:47.658 07:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:47.658 07:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:47.658 07:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:47.658 07:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:47.658 07:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:47.658 07:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:47.658 07:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:47.658 07:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:47.658 07:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:47.917 07:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:47.917 "name": "Existed_Raid", 00:32:47.918 "uuid": "1df4f331-b484-4a5d-94d3-585831ec72bd", 00:32:47.918 "strip_size_kb": 64, 00:32:47.918 "state": "configuring", 00:32:47.918 "raid_level": "raid5f", 00:32:47.918 "superblock": true, 00:32:47.918 
"num_base_bdevs": 4, 00:32:47.918 "num_base_bdevs_discovered": 2, 00:32:47.918 "num_base_bdevs_operational": 4, 00:32:47.918 "base_bdevs_list": [ 00:32:47.918 { 00:32:47.918 "name": "BaseBdev1", 00:32:47.918 "uuid": "71261a2f-04bc-4d5c-b41c-d8e7a9d8eb1a", 00:32:47.918 "is_configured": true, 00:32:47.918 "data_offset": 2048, 00:32:47.918 "data_size": 63488 00:32:47.918 }, 00:32:47.918 { 00:32:47.918 "name": "BaseBdev2", 00:32:47.918 "uuid": "8f5ba05f-d37a-4791-ad8b-ccc588ea14d4", 00:32:47.918 "is_configured": true, 00:32:47.918 "data_offset": 2048, 00:32:47.918 "data_size": 63488 00:32:47.918 }, 00:32:47.918 { 00:32:47.918 "name": "BaseBdev3", 00:32:47.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:47.918 "is_configured": false, 00:32:47.918 "data_offset": 0, 00:32:47.918 "data_size": 0 00:32:47.918 }, 00:32:47.918 { 00:32:47.918 "name": "BaseBdev4", 00:32:47.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:47.918 "is_configured": false, 00:32:47.918 "data_offset": 0, 00:32:47.918 "data_size": 0 00:32:47.918 } 00:32:47.918 ] 00:32:47.918 }' 00:32:47.918 07:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:47.918 07:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:48.272 07:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:48.535 [2024-07-12 07:43:22.371619] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:48.535 BaseBdev3 00:32:48.535 07:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:32:48.535 07:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:32:48.535 07:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:48.535 07:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:32:48.535 07:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:48.535 07:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:48.535 07:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:48.794 07:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:49.053 [ 00:32:49.053 { 00:32:49.053 "name": "BaseBdev3", 00:32:49.053 "aliases": [ 00:32:49.053 "e42cb870-737b-4a2a-9d56-15c5d6b336fb" 00:32:49.053 ], 00:32:49.053 "product_name": "Malloc disk", 00:32:49.053 "block_size": 512, 00:32:49.053 "num_blocks": 65536, 00:32:49.053 "uuid": "e42cb870-737b-4a2a-9d56-15c5d6b336fb", 00:32:49.053 "assigned_rate_limits": { 00:32:49.053 "rw_ios_per_sec": 0, 00:32:49.053 "rw_mbytes_per_sec": 0, 00:32:49.053 "r_mbytes_per_sec": 0, 00:32:49.053 "w_mbytes_per_sec": 0 00:32:49.053 }, 00:32:49.053 "claimed": true, 00:32:49.054 "claim_type": "exclusive_write", 00:32:49.054 "zoned": false, 00:32:49.054 "supported_io_types": { 00:32:49.054 "read": true, 00:32:49.054 "write": true, 00:32:49.054 "unmap": true, 00:32:49.054 "write_zeroes": true, 00:32:49.054 "flush": true, 
00:32:49.054 "reset": true, 00:32:49.054 "compare": false, 00:32:49.054 "compare_and_write": false, 00:32:49.054 "abort": true, 00:32:49.054 "nvme_admin": false, 00:32:49.054 "nvme_io": false 00:32:49.054 }, 00:32:49.054 "memory_domains": [ 00:32:49.054 { 00:32:49.054 "dma_device_id": "system", 00:32:49.054 "dma_device_type": 1 00:32:49.054 }, 00:32:49.054 { 00:32:49.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:49.054 "dma_device_type": 2 00:32:49.054 } 00:32:49.054 ], 00:32:49.054 "driver_specific": {} 00:32:49.054 } 00:32:49.054 ] 00:32:49.054 07:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:32:49.054 07:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:49.054 07:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:49.054 07:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:49.054 07:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:49.054 07:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:49.054 07:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:49.054 07:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:49.054 07:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:49.054 07:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:49.054 07:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:49.054 07:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:49.054 07:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:49.054 07:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:49.054 07:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:49.311 07:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:49.311 "name": "Existed_Raid", 00:32:49.311 "uuid": "1df4f331-b484-4a5d-94d3-585831ec72bd", 00:32:49.311 "strip_size_kb": 64, 00:32:49.311 "state": "configuring", 00:32:49.311 "raid_level": "raid5f", 00:32:49.311 "superblock": true, 00:32:49.311 "num_base_bdevs": 4, 00:32:49.311 "num_base_bdevs_discovered": 3, 00:32:49.311 "num_base_bdevs_operational": 4, 00:32:49.311 "base_bdevs_list": [ 00:32:49.311 { 00:32:49.311 "name": "BaseBdev1", 00:32:49.311 "uuid": "71261a2f-04bc-4d5c-b41c-d8e7a9d8eb1a", 00:32:49.311 "is_configured": true, 00:32:49.311 "data_offset": 2048, 00:32:49.311 "data_size": 63488 00:32:49.311 }, 00:32:49.311 { 00:32:49.311 "name": "BaseBdev2", 00:32:49.311 "uuid": "8f5ba05f-d37a-4791-ad8b-ccc588ea14d4", 00:32:49.311 "is_configured": true, 00:32:49.311 "data_offset": 2048, 00:32:49.311 "data_size": 63488 00:32:49.311 }, 00:32:49.311 { 00:32:49.311 "name": "BaseBdev3", 00:32:49.311 "uuid": "e42cb870-737b-4a2a-9d56-15c5d6b336fb", 00:32:49.311 "is_configured": true, 00:32:49.311 "data_offset": 2048, 
00:32:49.311 "data_size": 63488 00:32:49.311 }, 00:32:49.311 { 00:32:49.311 "name": "BaseBdev4", 00:32:49.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:49.311 "is_configured": false, 00:32:49.311 "data_offset": 0, 00:32:49.311 "data_size": 0 00:32:49.311 } 00:32:49.311 ] 00:32:49.311 }' 00:32:49.311 07:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:49.311 07:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:49.876 07:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:32:49.876 [2024-07-12 07:43:23.730720] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:49.876 [2024-07-12 07:43:23.731159] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:32:49.876 [2024-07-12 07:43:23.731313] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:32:49.876 [2024-07-12 07:43:23.731467] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:32:49.876 BaseBdev4 00:32:49.876 [2024-07-12 07:43:23.732267] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:32:49.876 [2024-07-12 07:43:23.732390] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:32:49.876 [2024-07-12 07:43:23.732617] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:49.876 07:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:32:49.876 07:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:32:49.876 07:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:49.876 07:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:32:49.876 07:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:49.876 07:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:49.877 07:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:50.149 07:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:50.408 [ 00:32:50.408 { 00:32:50.408 "name": "BaseBdev4", 00:32:50.408 "aliases": [ 00:32:50.408 "6a0dbe38-4868-482f-992a-89deb30dde88" 00:32:50.408 ], 00:32:50.408 "product_name": "Malloc disk", 00:32:50.408 "block_size": 512, 00:32:50.408 "num_blocks": 65536, 00:32:50.408 "uuid": "6a0dbe38-4868-482f-992a-89deb30dde88", 00:32:50.408 "assigned_rate_limits": { 00:32:50.408 "rw_ios_per_sec": 0, 00:32:50.408 "rw_mbytes_per_sec": 0, 00:32:50.408 "r_mbytes_per_sec": 0, 00:32:50.408 "w_mbytes_per_sec": 0 00:32:50.408 }, 00:32:50.408 "claimed": true, 00:32:50.408 "claim_type": "exclusive_write", 00:32:50.408 "zoned": false, 00:32:50.408 "supported_io_types": { 00:32:50.408 "read": true, 00:32:50.408 "write": true, 00:32:50.408 "unmap": true, 00:32:50.408 "write_zeroes": true, 00:32:50.408 "flush": true, 
00:32:50.408 "reset": true, 00:32:50.408 "compare": false, 00:32:50.408 "compare_and_write": false, 00:32:50.408 "abort": true, 00:32:50.408 "nvme_admin": false, 00:32:50.408 "nvme_io": false 00:32:50.408 }, 00:32:50.408 "memory_domains": [ 00:32:50.408 { 00:32:50.408 "dma_device_id": "system", 00:32:50.408 "dma_device_type": 1 00:32:50.408 }, 00:32:50.408 { 00:32:50.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:50.408 "dma_device_type": 2 00:32:50.408 } 00:32:50.408 ], 00:32:50.408 "driver_specific": {} 00:32:50.408 } 00:32:50.408 ] 00:32:50.408 07:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:32:50.408 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:50.408 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:50.408 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:32:50.408 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:50.408 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:50.408 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:50.408 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:50.408 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:50.408 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:50.408 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:50.408 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:50.408 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:50.408 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:50.408 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:50.667 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:50.667 "name": "Existed_Raid", 00:32:50.667 "uuid": "1df4f331-b484-4a5d-94d3-585831ec72bd", 00:32:50.667 "strip_size_kb": 64, 00:32:50.667 "state": "online", 00:32:50.667 "raid_level": "raid5f", 00:32:50.667 "superblock": true, 00:32:50.667 "num_base_bdevs": 4, 00:32:50.667 "num_base_bdevs_discovered": 4, 00:32:50.667 "num_base_bdevs_operational": 4, 00:32:50.667 "base_bdevs_list": [ 00:32:50.667 { 00:32:50.667 "name": "BaseBdev1", 00:32:50.667 "uuid": "71261a2f-04bc-4d5c-b41c-d8e7a9d8eb1a", 00:32:50.667 "is_configured": true, 00:32:50.667 "data_offset": 2048, 00:32:50.667 "data_size": 63488 00:32:50.667 }, 00:32:50.667 { 00:32:50.667 "name": "BaseBdev2", 00:32:50.667 "uuid": "8f5ba05f-d37a-4791-ad8b-ccc588ea14d4", 00:32:50.667 "is_configured": true, 00:32:50.667 "data_offset": 2048, 00:32:50.667 "data_size": 63488 00:32:50.667 }, 00:32:50.667 { 00:32:50.667 "name": "BaseBdev3", 00:32:50.667 "uuid": "e42cb870-737b-4a2a-9d56-15c5d6b336fb", 00:32:50.667 "is_configured": true, 00:32:50.667 "data_offset": 2048, 00:32:50.667 
"data_size": 63488 00:32:50.667 }, 00:32:50.667 { 00:32:50.667 "name": "BaseBdev4", 00:32:50.667 "uuid": "6a0dbe38-4868-482f-992a-89deb30dde88", 00:32:50.667 "is_configured": true, 00:32:50.667 "data_offset": 2048, 00:32:50.667 "data_size": 63488 00:32:50.667 } 00:32:50.667 ] 00:32:50.667 }' 00:32:50.667 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:50.667 07:43:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:51.234 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:32:51.234 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:51.234 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:51.234 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:51.234 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:51.234 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:32:51.234 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:51.234 07:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:51.493 [2024-07-12 07:43:25.219173] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:51.493 07:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:51.493 "name": "Existed_Raid", 00:32:51.493 "aliases": [ 00:32:51.493 "1df4f331-b484-4a5d-94d3-585831ec72bd" 00:32:51.493 ], 00:32:51.493 "product_name": "Raid Volume", 00:32:51.493 "block_size": 512, 00:32:51.493 "num_blocks": 190464, 00:32:51.493 "uuid": "1df4f331-b484-4a5d-94d3-585831ec72bd", 00:32:51.493 "assigned_rate_limits": { 00:32:51.493 "rw_ios_per_sec": 0, 00:32:51.493 "rw_mbytes_per_sec": 0, 00:32:51.493 "r_mbytes_per_sec": 0, 00:32:51.493 "w_mbytes_per_sec": 0 00:32:51.493 }, 00:32:51.493 "claimed": false, 00:32:51.493 "zoned": false, 00:32:51.493 "supported_io_types": { 00:32:51.493 "read": true, 00:32:51.493 "write": true, 00:32:51.493 "unmap": false, 00:32:51.493 "write_zeroes": true, 00:32:51.493 "flush": false, 00:32:51.493 "reset": true, 00:32:51.493 "compare": false, 00:32:51.493 "compare_and_write": false, 00:32:51.493 "abort": false, 00:32:51.493 "nvme_admin": false, 00:32:51.493 "nvme_io": false 00:32:51.493 }, 00:32:51.493 "driver_specific": { 00:32:51.493 "raid": { 00:32:51.493 "uuid": "1df4f331-b484-4a5d-94d3-585831ec72bd", 00:32:51.493 "strip_size_kb": 64, 00:32:51.493 "state": "online", 00:32:51.493 "raid_level": "raid5f", 00:32:51.493 "superblock": true, 00:32:51.493 "num_base_bdevs": 4, 00:32:51.493 "num_base_bdevs_discovered": 4, 00:32:51.493 "num_base_bdevs_operational": 4, 00:32:51.493 "base_bdevs_list": [ 00:32:51.493 { 00:32:51.493 "name": "BaseBdev1", 00:32:51.493 "uuid": "71261a2f-04bc-4d5c-b41c-d8e7a9d8eb1a", 00:32:51.493 "is_configured": true, 00:32:51.493 "data_offset": 2048, 00:32:51.493 "data_size": 63488 00:32:51.493 }, 00:32:51.493 { 00:32:51.493 "name": "BaseBdev2", 00:32:51.493 "uuid": "8f5ba05f-d37a-4791-ad8b-ccc588ea14d4", 00:32:51.493 "is_configured": true, 00:32:51.493 "data_offset": 2048, 00:32:51.493 "data_size": 63488 00:32:51.493 
}, 00:32:51.493 { 00:32:51.493 "name": "BaseBdev3", 00:32:51.493 "uuid": "e42cb870-737b-4a2a-9d56-15c5d6b336fb", 00:32:51.493 "is_configured": true, 00:32:51.493 "data_offset": 2048, 00:32:51.493 "data_size": 63488 00:32:51.493 }, 00:32:51.493 { 00:32:51.493 "name": "BaseBdev4", 00:32:51.493 "uuid": "6a0dbe38-4868-482f-992a-89deb30dde88", 00:32:51.493 "is_configured": true, 00:32:51.493 "data_offset": 2048, 00:32:51.493 "data_size": 63488 00:32:51.493 } 00:32:51.493 ] 00:32:51.493 } 00:32:51.493 } 00:32:51.493 }' 00:32:51.493 07:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:51.493 07:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:32:51.493 BaseBdev2 00:32:51.493 BaseBdev3 00:32:51.493 BaseBdev4' 00:32:51.493 07:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:51.493 07:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:32:51.493 07:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:51.752 07:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:51.752 "name": "BaseBdev1", 00:32:51.752 "aliases": [ 00:32:51.752 "71261a2f-04bc-4d5c-b41c-d8e7a9d8eb1a" 00:32:51.752 ], 00:32:51.752 "product_name": "Malloc disk", 00:32:51.752 "block_size": 512, 00:32:51.752 "num_blocks": 65536, 00:32:51.752 "uuid": "71261a2f-04bc-4d5c-b41c-d8e7a9d8eb1a", 00:32:51.752 "assigned_rate_limits": { 00:32:51.752 "rw_ios_per_sec": 0, 00:32:51.752 "rw_mbytes_per_sec": 0, 00:32:51.752 "r_mbytes_per_sec": 0, 00:32:51.752 "w_mbytes_per_sec": 0 00:32:51.752 }, 00:32:51.752 "claimed": true, 00:32:51.752 "claim_type": "exclusive_write", 00:32:51.752 "zoned": false, 00:32:51.752 "supported_io_types": { 00:32:51.752 "read": true, 00:32:51.752 "write": true, 00:32:51.752 "unmap": true, 00:32:51.752 "write_zeroes": true, 00:32:51.752 "flush": true, 00:32:51.752 "reset": true, 00:32:51.752 "compare": false, 00:32:51.752 "compare_and_write": false, 00:32:51.752 "abort": true, 00:32:51.752 "nvme_admin": false, 00:32:51.752 "nvme_io": false 00:32:51.752 }, 00:32:51.752 "memory_domains": [ 00:32:51.752 { 00:32:51.752 "dma_device_id": "system", 00:32:51.752 "dma_device_type": 1 00:32:51.752 }, 00:32:51.752 { 00:32:51.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:51.753 "dma_device_type": 2 00:32:51.753 } 00:32:51.753 ], 00:32:51.753 "driver_specific": {} 00:32:51.753 }' 00:32:51.753 07:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:51.753 07:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:51.753 07:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:51.753 07:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:52.012 07:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:52.012 07:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:52.012 07:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:52.012 07:43:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:52.012 07:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:52.012 07:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:52.012 07:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:52.012 07:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:52.012 07:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:52.012 07:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:52.012 07:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:52.580 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:52.580 "name": "BaseBdev2", 00:32:52.580 "aliases": [ 00:32:52.580 "8f5ba05f-d37a-4791-ad8b-ccc588ea14d4" 00:32:52.580 ], 00:32:52.580 "product_name": "Malloc disk", 00:32:52.580 "block_size": 512, 00:32:52.580 "num_blocks": 65536, 00:32:52.580 "uuid": "8f5ba05f-d37a-4791-ad8b-ccc588ea14d4", 00:32:52.580 "assigned_rate_limits": { 00:32:52.580 "rw_ios_per_sec": 0, 00:32:52.580 "rw_mbytes_per_sec": 0, 00:32:52.580 "r_mbytes_per_sec": 0, 00:32:52.580 "w_mbytes_per_sec": 0 00:32:52.580 }, 00:32:52.580 "claimed": true, 00:32:52.580 "claim_type": "exclusive_write", 00:32:52.580 "zoned": false, 00:32:52.580 "supported_io_types": { 00:32:52.580 "read": true, 00:32:52.580 "write": true, 00:32:52.580 "unmap": true, 00:32:52.580 "write_zeroes": true, 00:32:52.580 "flush": true, 00:32:52.580 "reset": true, 00:32:52.580 "compare": false, 00:32:52.580 "compare_and_write": false, 00:32:52.580 "abort": true, 00:32:52.580 "nvme_admin": false, 00:32:52.580 "nvme_io": false 00:32:52.580 }, 00:32:52.580 "memory_domains": [ 00:32:52.580 { 00:32:52.580 "dma_device_id": "system", 00:32:52.580 "dma_device_type": 1 00:32:52.580 }, 00:32:52.580 { 00:32:52.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:52.580 "dma_device_type": 2 00:32:52.580 } 00:32:52.580 ], 00:32:52.580 "driver_specific": {} 00:32:52.580 }' 00:32:52.580 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:52.580 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:52.580 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:52.580 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:52.580 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:52.580 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:52.580 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:52.580 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:52.580 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:52.580 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:52.839 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:52.839 07:43:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:52.839 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:52.839 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:52.839 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:32:53.097 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:53.097 "name": "BaseBdev3", 00:32:53.097 "aliases": [ 00:32:53.097 "e42cb870-737b-4a2a-9d56-15c5d6b336fb" 00:32:53.097 ], 00:32:53.097 "product_name": "Malloc disk", 00:32:53.097 "block_size": 512, 00:32:53.097 "num_blocks": 65536, 00:32:53.097 "uuid": "e42cb870-737b-4a2a-9d56-15c5d6b336fb", 00:32:53.097 "assigned_rate_limits": { 00:32:53.097 "rw_ios_per_sec": 0, 00:32:53.097 "rw_mbytes_per_sec": 0, 00:32:53.097 "r_mbytes_per_sec": 0, 00:32:53.097 "w_mbytes_per_sec": 0 00:32:53.097 }, 00:32:53.097 "claimed": true, 00:32:53.097 "claim_type": "exclusive_write", 00:32:53.097 "zoned": false, 00:32:53.097 "supported_io_types": { 00:32:53.097 "read": true, 00:32:53.097 "write": true, 00:32:53.097 "unmap": true, 00:32:53.097 "write_zeroes": true, 00:32:53.097 "flush": true, 00:32:53.097 "reset": true, 00:32:53.097 "compare": false, 00:32:53.097 "compare_and_write": false, 00:32:53.097 "abort": true, 00:32:53.097 "nvme_admin": false, 00:32:53.097 "nvme_io": false 00:32:53.097 }, 00:32:53.097 "memory_domains": [ 00:32:53.097 { 00:32:53.097 "dma_device_id": "system", 00:32:53.097 "dma_device_type": 1 00:32:53.097 }, 00:32:53.097 { 00:32:53.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:53.097 "dma_device_type": 2 00:32:53.097 } 00:32:53.097 ], 00:32:53.097 "driver_specific": {} 00:32:53.097 }' 00:32:53.097 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:53.097 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:53.097 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:53.097 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:53.097 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:53.097 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:53.097 07:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:53.356 07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:53.356 07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:53.356 07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:53.356 07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:53.356 07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:53.356 07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:53.356 07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:32:53.356 
07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:53.615 07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:53.615 "name": "BaseBdev4", 00:32:53.615 "aliases": [ 00:32:53.615 "6a0dbe38-4868-482f-992a-89deb30dde88" 00:32:53.615 ], 00:32:53.615 "product_name": "Malloc disk", 00:32:53.615 "block_size": 512, 00:32:53.615 "num_blocks": 65536, 00:32:53.615 "uuid": "6a0dbe38-4868-482f-992a-89deb30dde88", 00:32:53.615 "assigned_rate_limits": { 00:32:53.615 "rw_ios_per_sec": 0, 00:32:53.615 "rw_mbytes_per_sec": 0, 00:32:53.615 "r_mbytes_per_sec": 0, 00:32:53.615 "w_mbytes_per_sec": 0 00:32:53.615 }, 00:32:53.615 "claimed": true, 00:32:53.615 "claim_type": "exclusive_write", 00:32:53.615 "zoned": false, 00:32:53.615 "supported_io_types": { 00:32:53.615 "read": true, 00:32:53.615 "write": true, 00:32:53.615 "unmap": true, 00:32:53.615 "write_zeroes": true, 00:32:53.615 "flush": true, 00:32:53.615 "reset": true, 00:32:53.615 "compare": false, 00:32:53.615 "compare_and_write": false, 00:32:53.615 "abort": true, 00:32:53.615 "nvme_admin": false, 00:32:53.615 "nvme_io": false 00:32:53.615 }, 00:32:53.615 "memory_domains": [ 00:32:53.615 { 00:32:53.615 "dma_device_id": "system", 00:32:53.615 "dma_device_type": 1 00:32:53.615 }, 00:32:53.615 { 00:32:53.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:53.615 "dma_device_type": 2 00:32:53.615 } 00:32:53.615 ], 00:32:53.615 "driver_specific": {} 00:32:53.615 }' 00:32:53.615 07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:53.615 07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:53.874 07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:53.874 07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:53.874 07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:53.874 07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:53.874 07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:53.874 07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:53.874 07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:53.874 07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:53.874 07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:54.133 07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:54.133 07:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:54.392 [2024-07-12 07:43:28.023678] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:54.392 07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:32:54.392 07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:32:54.392 07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:54.392 07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:32:54.392 
07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:32:54.392 07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:32:54.392 07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:54.392 07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:54.392 07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:54.392 07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:54.392 07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:54.392 07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:54.392 07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:54.392 07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:54.392 07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:54.392 07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:54.392 07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:54.392 07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:54.392 "name": "Existed_Raid", 00:32:54.392 "uuid": "1df4f331-b484-4a5d-94d3-585831ec72bd", 00:32:54.392 "strip_size_kb": 64, 00:32:54.392 "state": "online", 00:32:54.392 "raid_level": "raid5f", 00:32:54.392 "superblock": true, 00:32:54.392 "num_base_bdevs": 4, 00:32:54.392 "num_base_bdevs_discovered": 3, 00:32:54.392 "num_base_bdevs_operational": 3, 00:32:54.392 "base_bdevs_list": [ 00:32:54.392 { 00:32:54.392 "name": null, 00:32:54.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:54.392 "is_configured": false, 00:32:54.392 "data_offset": 2048, 00:32:54.392 "data_size": 63488 00:32:54.392 }, 00:32:54.392 { 00:32:54.392 "name": "BaseBdev2", 00:32:54.392 "uuid": "8f5ba05f-d37a-4791-ad8b-ccc588ea14d4", 00:32:54.392 "is_configured": true, 00:32:54.392 "data_offset": 2048, 00:32:54.392 "data_size": 63488 00:32:54.392 }, 00:32:54.392 { 00:32:54.392 "name": "BaseBdev3", 00:32:54.392 "uuid": "e42cb870-737b-4a2a-9d56-15c5d6b336fb", 00:32:54.392 "is_configured": true, 00:32:54.392 "data_offset": 2048, 00:32:54.392 "data_size": 63488 00:32:54.392 }, 00:32:54.392 { 00:32:54.392 "name": "BaseBdev4", 00:32:54.392 "uuid": "6a0dbe38-4868-482f-992a-89deb30dde88", 00:32:54.392 "is_configured": true, 00:32:54.392 "data_offset": 2048, 00:32:54.392 "data_size": 63488 00:32:54.392 } 00:32:54.392 ] 00:32:54.392 }' 00:32:54.392 07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:54.392 07:43:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:55.327 07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:32:55.327 07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:55.328 07:43:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:55.328 07:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:55.328 07:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:55.328 07:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:55.328 07:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:32:55.587 [2024-07-12 07:43:29.272016] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:55.587 [2024-07-12 07:43:29.272263] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:55.587 [2024-07-12 07:43:29.283955] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:55.587 07:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:55.587 07:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:55.587 07:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:55.587 07:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:55.845 07:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:55.845 07:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:55.845 07:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:32:55.845 [2024-07-12 07:43:29.712126] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:56.104 07:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:56.104 07:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:56.104 07:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:56.104 07:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:56.363 07:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:56.363 07:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:56.363 07:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:32:56.363 [2024-07-12 07:43:30.219932] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:32:56.363 [2024-07-12 07:43:30.220150] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:32:56.621 07:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:56.621 07:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < 
num_base_bdevs )) 00:32:56.621 07:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:56.621 07:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:32:56.621 07:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:32:56.621 07:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:32:56.622 07:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:32:56.622 07:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:32:56.622 07:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:56.622 07:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:32:56.880 BaseBdev2 00:32:56.880 07:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:32:56.880 07:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:32:56.880 07:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:56.880 07:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:32:56.880 07:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:56.880 07:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:56.880 07:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:57.139 07:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:57.139 [ 00:32:57.139 { 00:32:57.139 "name": "BaseBdev2", 00:32:57.139 "aliases": [ 00:32:57.139 "532caa0d-bf57-461c-8962-0fe0daf0a49a" 00:32:57.139 ], 00:32:57.139 "product_name": "Malloc disk", 00:32:57.139 "block_size": 512, 00:32:57.139 "num_blocks": 65536, 00:32:57.139 "uuid": "532caa0d-bf57-461c-8962-0fe0daf0a49a", 00:32:57.139 "assigned_rate_limits": { 00:32:57.139 "rw_ios_per_sec": 0, 00:32:57.139 "rw_mbytes_per_sec": 0, 00:32:57.139 "r_mbytes_per_sec": 0, 00:32:57.139 "w_mbytes_per_sec": 0 00:32:57.139 }, 00:32:57.139 "claimed": false, 00:32:57.139 "zoned": false, 00:32:57.139 "supported_io_types": { 00:32:57.139 "read": true, 00:32:57.139 "write": true, 00:32:57.139 "unmap": true, 00:32:57.139 "write_zeroes": true, 00:32:57.139 "flush": true, 00:32:57.139 "reset": true, 00:32:57.139 "compare": false, 00:32:57.139 "compare_and_write": false, 00:32:57.139 "abort": true, 00:32:57.139 "nvme_admin": false, 00:32:57.139 "nvme_io": false 00:32:57.139 }, 00:32:57.139 "memory_domains": [ 00:32:57.139 { 00:32:57.139 "dma_device_id": "system", 00:32:57.139 "dma_device_type": 1 00:32:57.139 }, 00:32:57.139 { 00:32:57.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:57.139 "dma_device_type": 2 00:32:57.139 } 00:32:57.139 ], 00:32:57.139 "driver_specific": {} 00:32:57.139 } 00:32:57.139 ] 00:32:57.139 07:43:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:32:57.139 07:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:32:57.139 07:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:57.139 07:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:57.397 BaseBdev3 00:32:57.397 07:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:32:57.397 07:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:32:57.397 07:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:57.397 07:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:32:57.397 07:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:57.397 07:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:57.397 07:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:57.656 07:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:57.915 [ 00:32:57.915 { 00:32:57.915 "name": "BaseBdev3", 00:32:57.915 "aliases": [ 00:32:57.915 "56282226-8375-4bce-bfc6-4acb0527a459" 00:32:57.915 ], 00:32:57.915 "product_name": "Malloc disk", 00:32:57.915 "block_size": 512, 00:32:57.915 "num_blocks": 65536, 00:32:57.915 "uuid": "56282226-8375-4bce-bfc6-4acb0527a459", 00:32:57.915 "assigned_rate_limits": { 00:32:57.915 "rw_ios_per_sec": 0, 00:32:57.915 "rw_mbytes_per_sec": 0, 00:32:57.915 "r_mbytes_per_sec": 0, 00:32:57.915 "w_mbytes_per_sec": 0 00:32:57.915 }, 00:32:57.915 "claimed": false, 00:32:57.915 "zoned": false, 00:32:57.915 "supported_io_types": { 00:32:57.915 "read": true, 00:32:57.915 "write": true, 00:32:57.915 "unmap": true, 00:32:57.915 "write_zeroes": true, 00:32:57.915 "flush": true, 00:32:57.915 "reset": true, 00:32:57.915 "compare": false, 00:32:57.915 "compare_and_write": false, 00:32:57.915 "abort": true, 00:32:57.915 "nvme_admin": false, 00:32:57.915 "nvme_io": false 00:32:57.915 }, 00:32:57.915 "memory_domains": [ 00:32:57.915 { 00:32:57.915 "dma_device_id": "system", 00:32:57.915 "dma_device_type": 1 00:32:57.915 }, 00:32:57.915 { 00:32:57.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:57.915 "dma_device_type": 2 00:32:57.915 } 00:32:57.915 ], 00:32:57.915 "driver_specific": {} 00:32:57.915 } 00:32:57.915 ] 00:32:57.915 07:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:32:57.915 07:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:32:57.915 07:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:57.915 07:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:32:58.174 BaseBdev4 00:32:58.174 07:43:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:32:58.174 07:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:32:58.174 07:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:58.174 07:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:32:58.174 07:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:58.174 07:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:58.174 07:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:58.433 07:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:58.433 [ 00:32:58.433 { 00:32:58.433 "name": "BaseBdev4", 00:32:58.433 "aliases": [ 00:32:58.433 "c95c8cb6-e3a0-4995-b3d3-e9a6e0849bf9" 00:32:58.433 ], 00:32:58.433 "product_name": "Malloc disk", 00:32:58.433 "block_size": 512, 00:32:58.433 "num_blocks": 65536, 00:32:58.433 "uuid": "c95c8cb6-e3a0-4995-b3d3-e9a6e0849bf9", 00:32:58.433 "assigned_rate_limits": { 00:32:58.433 "rw_ios_per_sec": 0, 00:32:58.433 "rw_mbytes_per_sec": 0, 00:32:58.433 "r_mbytes_per_sec": 0, 00:32:58.433 "w_mbytes_per_sec": 0 00:32:58.433 }, 00:32:58.433 "claimed": false, 00:32:58.433 "zoned": false, 00:32:58.433 "supported_io_types": { 00:32:58.433 "read": true, 00:32:58.433 "write": true, 00:32:58.433 "unmap": true, 00:32:58.433 "write_zeroes": true, 00:32:58.433 "flush": true, 00:32:58.433 "reset": true, 00:32:58.433 "compare": false, 00:32:58.433 "compare_and_write": false, 00:32:58.433 "abort": true, 00:32:58.433 "nvme_admin": false, 00:32:58.433 "nvme_io": false 00:32:58.433 }, 00:32:58.433 "memory_domains": [ 00:32:58.433 { 00:32:58.433 "dma_device_id": "system", 00:32:58.433 "dma_device_type": 1 00:32:58.433 }, 00:32:58.433 { 00:32:58.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:58.433 "dma_device_type": 2 00:32:58.433 } 00:32:58.433 ], 00:32:58.433 "driver_specific": {} 00:32:58.433 } 00:32:58.433 ] 00:32:58.433 07:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:32:58.433 07:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:32:58.433 07:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:58.433 07:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:58.692 [2024-07-12 07:43:32.397983] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:58.692 [2024-07-12 07:43:32.398200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:58.692 [2024-07-12 07:43:32.398297] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:58.692 [2024-07-12 07:43:32.400234] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:58.692 [2024-07-12 07:43:32.400406] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:58.692 07:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:58.692 07:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:58.692 07:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:58.692 07:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:58.692 07:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:58.692 07:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:58.692 07:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:58.692 07:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:58.692 07:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:58.692 07:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:58.692 07:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:58.692 07:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:58.952 07:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:58.952 "name": "Existed_Raid", 00:32:58.952 "uuid": "6b2cd142-ee41-4dab-bc11-76892c964cc3", 00:32:58.952 "strip_size_kb": 64, 00:32:58.952 "state": "configuring", 00:32:58.952 "raid_level": "raid5f", 00:32:58.952 "superblock": true, 00:32:58.952 "num_base_bdevs": 4, 00:32:58.952 "num_base_bdevs_discovered": 3, 00:32:58.952 "num_base_bdevs_operational": 4, 00:32:58.952 "base_bdevs_list": [ 00:32:58.952 { 00:32:58.952 "name": "BaseBdev1", 00:32:58.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:58.952 "is_configured": false, 00:32:58.952 "data_offset": 0, 00:32:58.952 "data_size": 0 00:32:58.952 }, 00:32:58.952 { 00:32:58.952 "name": "BaseBdev2", 00:32:58.952 "uuid": "532caa0d-bf57-461c-8962-0fe0daf0a49a", 00:32:58.952 "is_configured": true, 00:32:58.952 "data_offset": 2048, 00:32:58.952 "data_size": 63488 00:32:58.952 }, 00:32:58.952 { 00:32:58.952 "name": "BaseBdev3", 00:32:58.952 "uuid": "56282226-8375-4bce-bfc6-4acb0527a459", 00:32:58.952 "is_configured": true, 00:32:58.952 "data_offset": 2048, 00:32:58.952 "data_size": 63488 00:32:58.952 }, 00:32:58.952 { 00:32:58.952 "name": "BaseBdev4", 00:32:58.952 "uuid": "c95c8cb6-e3a0-4995-b3d3-e9a6e0849bf9", 00:32:58.952 "is_configured": true, 00:32:58.952 "data_offset": 2048, 00:32:58.952 "data_size": 63488 00:32:58.952 } 00:32:58.952 ] 00:32:58.952 }' 00:32:58.952 07:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:58.952 07:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:59.520 07:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:32:59.520 [2024-07-12 07:43:33.278103] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:59.520 07:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:59.520 07:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:59.520 07:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:59.520 07:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:59.520 07:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:59.520 07:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:59.520 07:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:59.520 07:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:59.520 07:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:59.520 07:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:59.520 07:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:59.520 07:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:59.780 07:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:59.780 "name": "Existed_Raid", 00:32:59.780 "uuid": "6b2cd142-ee41-4dab-bc11-76892c964cc3", 00:32:59.780 "strip_size_kb": 64, 00:32:59.780 "state": "configuring", 00:32:59.780 "raid_level": "raid5f", 00:32:59.780 "superblock": true, 00:32:59.780 "num_base_bdevs": 4, 00:32:59.780 "num_base_bdevs_discovered": 2, 00:32:59.780 "num_base_bdevs_operational": 4, 00:32:59.780 "base_bdevs_list": [ 00:32:59.780 { 00:32:59.780 "name": "BaseBdev1", 00:32:59.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.780 "is_configured": false, 00:32:59.780 "data_offset": 0, 00:32:59.780 "data_size": 0 00:32:59.780 }, 00:32:59.780 { 00:32:59.780 "name": null, 00:32:59.780 "uuid": "532caa0d-bf57-461c-8962-0fe0daf0a49a", 00:32:59.780 "is_configured": false, 00:32:59.780 "data_offset": 2048, 00:32:59.780 "data_size": 63488 00:32:59.780 }, 00:32:59.780 { 00:32:59.780 "name": "BaseBdev3", 00:32:59.780 "uuid": "56282226-8375-4bce-bfc6-4acb0527a459", 00:32:59.780 "is_configured": true, 00:32:59.780 "data_offset": 2048, 00:32:59.780 "data_size": 63488 00:32:59.780 }, 00:32:59.780 { 00:32:59.780 "name": "BaseBdev4", 00:32:59.780 "uuid": "c95c8cb6-e3a0-4995-b3d3-e9a6e0849bf9", 00:32:59.780 "is_configured": true, 00:32:59.780 "data_offset": 2048, 00:32:59.780 "data_size": 63488 00:32:59.780 } 00:32:59.780 ] 00:32:59.780 }' 00:32:59.780 07:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:59.780 07:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:00.348 07:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:00.348 07:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:33:00.607 07:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:33:00.607 07:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:33:00.867 [2024-07-12 07:43:34.501073] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:00.867 BaseBdev1 00:33:00.867 07:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:33:00.867 07:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:33:00.867 07:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:00.867 07:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:33:00.867 07:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:00.867 07:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:00.867 07:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:00.867 07:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:01.127 [ 00:33:01.127 { 00:33:01.127 "name": "BaseBdev1", 00:33:01.127 "aliases": [ 00:33:01.127 "f88eced2-cb55-40f4-821d-a07987794b46" 00:33:01.127 ], 00:33:01.127 "product_name": "Malloc disk", 00:33:01.127 "block_size": 512, 00:33:01.127 "num_blocks": 65536, 00:33:01.127 "uuid": "f88eced2-cb55-40f4-821d-a07987794b46", 00:33:01.127 "assigned_rate_limits": { 00:33:01.127 "rw_ios_per_sec": 0, 00:33:01.127 "rw_mbytes_per_sec": 0, 00:33:01.127 "r_mbytes_per_sec": 0, 00:33:01.127 "w_mbytes_per_sec": 0 00:33:01.127 }, 00:33:01.127 "claimed": true, 00:33:01.127 "claim_type": "exclusive_write", 00:33:01.127 "zoned": false, 00:33:01.127 "supported_io_types": { 00:33:01.127 "read": true, 00:33:01.127 "write": true, 00:33:01.127 "unmap": true, 00:33:01.127 "write_zeroes": true, 00:33:01.127 "flush": true, 00:33:01.127 "reset": true, 00:33:01.127 "compare": false, 00:33:01.127 "compare_and_write": false, 00:33:01.127 "abort": true, 00:33:01.127 "nvme_admin": false, 00:33:01.127 "nvme_io": false 00:33:01.127 }, 00:33:01.127 "memory_domains": [ 00:33:01.127 { 00:33:01.127 "dma_device_id": "system", 00:33:01.127 "dma_device_type": 1 00:33:01.127 }, 00:33:01.127 { 00:33:01.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:01.127 "dma_device_type": 2 00:33:01.127 } 00:33:01.127 ], 00:33:01.127 "driver_specific": {} 00:33:01.127 } 00:33:01.127 ] 00:33:01.127 07:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:33:01.127 07:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:01.127 07:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:01.127 07:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:01.127 07:43:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:01.127 07:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:01.128 07:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:01.128 07:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:01.128 07:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:01.128 07:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:01.128 07:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:01.128 07:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:01.128 07:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:01.387 07:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:01.387 "name": "Existed_Raid", 00:33:01.387 "uuid": "6b2cd142-ee41-4dab-bc11-76892c964cc3", 00:33:01.387 "strip_size_kb": 64, 00:33:01.387 "state": "configuring", 00:33:01.387 "raid_level": "raid5f", 00:33:01.387 "superblock": true, 00:33:01.387 "num_base_bdevs": 4, 00:33:01.387 "num_base_bdevs_discovered": 3, 00:33:01.387 "num_base_bdevs_operational": 4, 00:33:01.387 "base_bdevs_list": [ 00:33:01.387 { 00:33:01.387 "name": "BaseBdev1", 00:33:01.387 "uuid": "f88eced2-cb55-40f4-821d-a07987794b46", 00:33:01.387 "is_configured": true, 00:33:01.387 "data_offset": 2048, 00:33:01.387 "data_size": 63488 00:33:01.387 }, 00:33:01.387 { 00:33:01.387 "name": null, 00:33:01.387 "uuid": "532caa0d-bf57-461c-8962-0fe0daf0a49a", 00:33:01.387 "is_configured": false, 00:33:01.387 "data_offset": 2048, 00:33:01.387 "data_size": 63488 00:33:01.387 }, 00:33:01.387 { 00:33:01.387 "name": "BaseBdev3", 00:33:01.387 "uuid": "56282226-8375-4bce-bfc6-4acb0527a459", 00:33:01.387 "is_configured": true, 00:33:01.387 "data_offset": 2048, 00:33:01.387 "data_size": 63488 00:33:01.387 }, 00:33:01.387 { 00:33:01.387 "name": "BaseBdev4", 00:33:01.387 "uuid": "c95c8cb6-e3a0-4995-b3d3-e9a6e0849bf9", 00:33:01.387 "is_configured": true, 00:33:01.387 "data_offset": 2048, 00:33:01.387 "data_size": 63488 00:33:01.387 } 00:33:01.387 ] 00:33:01.387 }' 00:33:01.387 07:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:01.387 07:43:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:01.956 07:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:01.956 07:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:02.215 07:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:33:02.215 07:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:33:02.475 [2024-07-12 07:43:36.135246] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:02.475 07:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 
-- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:02.475 07:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:02.475 07:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:02.475 07:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:02.475 07:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:02.475 07:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:02.475 07:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:02.475 07:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:02.475 07:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:02.475 07:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:02.475 07:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:02.475 07:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:02.734 07:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:02.734 "name": "Existed_Raid", 00:33:02.734 "uuid": "6b2cd142-ee41-4dab-bc11-76892c964cc3", 00:33:02.734 "strip_size_kb": 64, 00:33:02.734 "state": "configuring", 00:33:02.734 "raid_level": "raid5f", 00:33:02.734 "superblock": true, 00:33:02.734 "num_base_bdevs": 4, 00:33:02.734 "num_base_bdevs_discovered": 2, 00:33:02.734 "num_base_bdevs_operational": 4, 00:33:02.734 "base_bdevs_list": [ 00:33:02.734 { 00:33:02.734 "name": "BaseBdev1", 00:33:02.734 "uuid": "f88eced2-cb55-40f4-821d-a07987794b46", 00:33:02.734 "is_configured": true, 00:33:02.734 "data_offset": 2048, 00:33:02.734 "data_size": 63488 00:33:02.734 }, 00:33:02.734 { 00:33:02.734 "name": null, 00:33:02.734 "uuid": "532caa0d-bf57-461c-8962-0fe0daf0a49a", 00:33:02.734 "is_configured": false, 00:33:02.734 "data_offset": 2048, 00:33:02.734 "data_size": 63488 00:33:02.734 }, 00:33:02.734 { 00:33:02.734 "name": null, 00:33:02.734 "uuid": "56282226-8375-4bce-bfc6-4acb0527a459", 00:33:02.734 "is_configured": false, 00:33:02.734 "data_offset": 2048, 00:33:02.734 "data_size": 63488 00:33:02.734 }, 00:33:02.734 { 00:33:02.734 "name": "BaseBdev4", 00:33:02.734 "uuid": "c95c8cb6-e3a0-4995-b3d3-e9a6e0849bf9", 00:33:02.734 "is_configured": true, 00:33:02.734 "data_offset": 2048, 00:33:02.734 "data_size": 63488 00:33:02.734 } 00:33:02.734 ] 00:33:02.734 }' 00:33:02.734 07:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:02.734 07:43:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:03.303 07:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:03.303 07:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:03.561 07:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:33:03.561 
07:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:33:03.561 [2024-07-12 07:43:37.435481] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:03.820 07:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:03.820 07:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:03.820 07:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:03.820 07:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:03.820 07:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:03.820 07:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:03.820 07:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:03.820 07:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:03.820 07:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:03.820 07:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:03.820 07:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:03.820 07:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:03.820 07:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:03.820 "name": "Existed_Raid", 00:33:03.820 "uuid": "6b2cd142-ee41-4dab-bc11-76892c964cc3", 00:33:03.820 "strip_size_kb": 64, 00:33:03.820 "state": "configuring", 00:33:03.820 "raid_level": "raid5f", 00:33:03.820 "superblock": true, 00:33:03.820 "num_base_bdevs": 4, 00:33:03.820 "num_base_bdevs_discovered": 3, 00:33:03.820 "num_base_bdevs_operational": 4, 00:33:03.820 "base_bdevs_list": [ 00:33:03.820 { 00:33:03.820 "name": "BaseBdev1", 00:33:03.820 "uuid": "f88eced2-cb55-40f4-821d-a07987794b46", 00:33:03.820 "is_configured": true, 00:33:03.820 "data_offset": 2048, 00:33:03.820 "data_size": 63488 00:33:03.820 }, 00:33:03.820 { 00:33:03.820 "name": null, 00:33:03.820 "uuid": "532caa0d-bf57-461c-8962-0fe0daf0a49a", 00:33:03.820 "is_configured": false, 00:33:03.820 "data_offset": 2048, 00:33:03.820 "data_size": 63488 00:33:03.820 }, 00:33:03.820 { 00:33:03.820 "name": "BaseBdev3", 00:33:03.820 "uuid": "56282226-8375-4bce-bfc6-4acb0527a459", 00:33:03.820 "is_configured": true, 00:33:03.820 "data_offset": 2048, 00:33:03.820 "data_size": 63488 00:33:03.820 }, 00:33:03.820 { 00:33:03.820 "name": "BaseBdev4", 00:33:03.820 "uuid": "c95c8cb6-e3a0-4995-b3d3-e9a6e0849bf9", 00:33:03.820 "is_configured": true, 00:33:03.820 "data_offset": 2048, 00:33:03.820 "data_size": 63488 00:33:03.820 } 00:33:03.820 ] 00:33:03.820 }' 00:33:03.820 07:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:03.820 07:43:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:04.389 07:43:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:04.389 07:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:04.648 07:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:33:04.648 07:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:04.907 [2024-07-12 07:43:38.638960] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:04.907 07:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:04.907 07:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:04.907 07:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:04.907 07:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:04.907 07:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:04.907 07:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:04.907 07:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:04.907 07:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:04.907 07:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:04.907 07:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:04.907 07:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:04.907 07:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:05.166 07:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:05.166 "name": "Existed_Raid", 00:33:05.166 "uuid": "6b2cd142-ee41-4dab-bc11-76892c964cc3", 00:33:05.166 "strip_size_kb": 64, 00:33:05.166 "state": "configuring", 00:33:05.166 "raid_level": "raid5f", 00:33:05.166 "superblock": true, 00:33:05.166 "num_base_bdevs": 4, 00:33:05.166 "num_base_bdevs_discovered": 2, 00:33:05.166 "num_base_bdevs_operational": 4, 00:33:05.166 "base_bdevs_list": [ 00:33:05.166 { 00:33:05.166 "name": null, 00:33:05.166 "uuid": "f88eced2-cb55-40f4-821d-a07987794b46", 00:33:05.166 "is_configured": false, 00:33:05.166 "data_offset": 2048, 00:33:05.166 "data_size": 63488 00:33:05.166 }, 00:33:05.166 { 00:33:05.166 "name": null, 00:33:05.166 "uuid": "532caa0d-bf57-461c-8962-0fe0daf0a49a", 00:33:05.166 "is_configured": false, 00:33:05.166 "data_offset": 2048, 00:33:05.166 "data_size": 63488 00:33:05.166 }, 00:33:05.166 { 00:33:05.166 "name": "BaseBdev3", 00:33:05.166 "uuid": "56282226-8375-4bce-bfc6-4acb0527a459", 00:33:05.166 "is_configured": true, 00:33:05.166 "data_offset": 2048, 00:33:05.166 "data_size": 63488 00:33:05.166 }, 00:33:05.166 { 00:33:05.166 "name": "BaseBdev4", 00:33:05.166 "uuid": "c95c8cb6-e3a0-4995-b3d3-e9a6e0849bf9", 00:33:05.166 
"is_configured": true, 00:33:05.166 "data_offset": 2048, 00:33:05.166 "data_size": 63488 00:33:05.166 } 00:33:05.166 ] 00:33:05.166 }' 00:33:05.166 07:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:05.166 07:43:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:05.734 07:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:05.734 07:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:05.734 07:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:33:05.734 07:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:33:05.993 [2024-07-12 07:43:39.695076] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:05.993 07:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:05.993 07:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:05.994 07:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:05.994 07:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:05.994 07:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:05.994 07:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:05.994 07:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:05.994 07:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:05.994 07:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:05.994 07:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:05.994 07:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:05.994 07:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:06.253 07:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:06.253 "name": "Existed_Raid", 00:33:06.253 "uuid": "6b2cd142-ee41-4dab-bc11-76892c964cc3", 00:33:06.253 "strip_size_kb": 64, 00:33:06.253 "state": "configuring", 00:33:06.253 "raid_level": "raid5f", 00:33:06.253 "superblock": true, 00:33:06.253 "num_base_bdevs": 4, 00:33:06.253 "num_base_bdevs_discovered": 3, 00:33:06.253 "num_base_bdevs_operational": 4, 00:33:06.253 "base_bdevs_list": [ 00:33:06.253 { 00:33:06.253 "name": null, 00:33:06.253 "uuid": "f88eced2-cb55-40f4-821d-a07987794b46", 00:33:06.253 "is_configured": false, 00:33:06.253 "data_offset": 2048, 00:33:06.253 "data_size": 63488 00:33:06.253 }, 00:33:06.253 { 00:33:06.253 "name": "BaseBdev2", 00:33:06.253 "uuid": "532caa0d-bf57-461c-8962-0fe0daf0a49a", 00:33:06.253 "is_configured": true, 00:33:06.253 
"data_offset": 2048, 00:33:06.253 "data_size": 63488 00:33:06.253 }, 00:33:06.253 { 00:33:06.253 "name": "BaseBdev3", 00:33:06.253 "uuid": "56282226-8375-4bce-bfc6-4acb0527a459", 00:33:06.253 "is_configured": true, 00:33:06.253 "data_offset": 2048, 00:33:06.253 "data_size": 63488 00:33:06.253 }, 00:33:06.253 { 00:33:06.253 "name": "BaseBdev4", 00:33:06.253 "uuid": "c95c8cb6-e3a0-4995-b3d3-e9a6e0849bf9", 00:33:06.253 "is_configured": true, 00:33:06.253 "data_offset": 2048, 00:33:06.253 "data_size": 63488 00:33:06.253 } 00:33:06.253 ] 00:33:06.253 }' 00:33:06.253 07:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:06.253 07:43:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:06.821 07:43:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:06.821 07:43:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:07.080 07:43:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:33:07.080 07:43:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:07.080 07:43:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:33:07.340 07:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f88eced2-cb55-40f4-821d-a07987794b46 00:33:07.340 [2024-07-12 07:43:41.179592] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:33:07.340 [2024-07-12 07:43:41.179990] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:33:07.340 NewBaseBdev 00:33:07.340 [2024-07-12 07:43:41.181061] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:07.340 [2024-07-12 07:43:41.181246] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:33:07.340 [2024-07-12 07:43:41.182059] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:33:07.340 [2024-07-12 07:43:41.182176] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008180 00:33:07.340 [2024-07-12 07:43:41.182370] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:07.340 07:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:33:07.340 07:43:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:33:07.340 07:43:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:07.340 07:43:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:33:07.340 07:43:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:07.340 07:43:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:07.340 07:43:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:07.599 07:43:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:33:07.858 [ 00:33:07.858 { 00:33:07.858 "name": "NewBaseBdev", 00:33:07.858 "aliases": [ 00:33:07.858 "f88eced2-cb55-40f4-821d-a07987794b46" 00:33:07.858 ], 00:33:07.858 "product_name": "Malloc disk", 00:33:07.858 "block_size": 512, 00:33:07.858 "num_blocks": 65536, 00:33:07.858 "uuid": "f88eced2-cb55-40f4-821d-a07987794b46", 00:33:07.858 "assigned_rate_limits": { 00:33:07.858 "rw_ios_per_sec": 0, 00:33:07.858 "rw_mbytes_per_sec": 0, 00:33:07.858 "r_mbytes_per_sec": 0, 00:33:07.858 "w_mbytes_per_sec": 0 00:33:07.858 }, 00:33:07.858 "claimed": true, 00:33:07.858 "claim_type": "exclusive_write", 00:33:07.858 "zoned": false, 00:33:07.858 "supported_io_types": { 00:33:07.858 "read": true, 00:33:07.858 "write": true, 00:33:07.858 "unmap": true, 00:33:07.858 "write_zeroes": true, 00:33:07.858 "flush": true, 00:33:07.858 "reset": true, 00:33:07.858 "compare": false, 00:33:07.858 "compare_and_write": false, 00:33:07.858 "abort": true, 00:33:07.858 "nvme_admin": false, 00:33:07.858 "nvme_io": false 00:33:07.858 }, 00:33:07.858 "memory_domains": [ 00:33:07.858 { 00:33:07.858 "dma_device_id": "system", 00:33:07.858 "dma_device_type": 1 00:33:07.858 }, 00:33:07.858 { 00:33:07.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:07.858 "dma_device_type": 2 00:33:07.858 } 00:33:07.858 ], 00:33:07.858 "driver_specific": {} 00:33:07.858 } 00:33:07.858 ] 00:33:07.858 07:43:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:33:07.858 07:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:33:07.858 07:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:07.858 07:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:07.858 07:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:07.858 07:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:07.858 07:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:07.858 07:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:07.858 07:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:07.859 07:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:07.859 07:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:07.859 07:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:07.859 07:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:08.117 07:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:08.118 "name": "Existed_Raid", 00:33:08.118 "uuid": "6b2cd142-ee41-4dab-bc11-76892c964cc3", 00:33:08.118 "strip_size_kb": 64, 00:33:08.118 "state": "online", 00:33:08.118 "raid_level": 
"raid5f", 00:33:08.118 "superblock": true, 00:33:08.118 "num_base_bdevs": 4, 00:33:08.118 "num_base_bdevs_discovered": 4, 00:33:08.118 "num_base_bdevs_operational": 4, 00:33:08.118 "base_bdevs_list": [ 00:33:08.118 { 00:33:08.118 "name": "NewBaseBdev", 00:33:08.118 "uuid": "f88eced2-cb55-40f4-821d-a07987794b46", 00:33:08.118 "is_configured": true, 00:33:08.118 "data_offset": 2048, 00:33:08.118 "data_size": 63488 00:33:08.118 }, 00:33:08.118 { 00:33:08.118 "name": "BaseBdev2", 00:33:08.118 "uuid": "532caa0d-bf57-461c-8962-0fe0daf0a49a", 00:33:08.118 "is_configured": true, 00:33:08.118 "data_offset": 2048, 00:33:08.118 "data_size": 63488 00:33:08.118 }, 00:33:08.118 { 00:33:08.118 "name": "BaseBdev3", 00:33:08.118 "uuid": "56282226-8375-4bce-bfc6-4acb0527a459", 00:33:08.118 "is_configured": true, 00:33:08.118 "data_offset": 2048, 00:33:08.118 "data_size": 63488 00:33:08.118 }, 00:33:08.118 { 00:33:08.118 "name": "BaseBdev4", 00:33:08.118 "uuid": "c95c8cb6-e3a0-4995-b3d3-e9a6e0849bf9", 00:33:08.118 "is_configured": true, 00:33:08.118 "data_offset": 2048, 00:33:08.118 "data_size": 63488 00:33:08.118 } 00:33:08.118 ] 00:33:08.118 }' 00:33:08.118 07:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:08.118 07:43:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:08.686 07:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:33:08.686 07:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:33:08.686 07:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:08.686 07:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:08.686 07:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:08.686 07:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:33:08.686 07:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:33:08.686 07:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:08.945 [2024-07-12 07:43:42.629571] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:08.945 07:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:08.945 "name": "Existed_Raid", 00:33:08.945 "aliases": [ 00:33:08.945 "6b2cd142-ee41-4dab-bc11-76892c964cc3" 00:33:08.945 ], 00:33:08.945 "product_name": "Raid Volume", 00:33:08.945 "block_size": 512, 00:33:08.945 "num_blocks": 190464, 00:33:08.945 "uuid": "6b2cd142-ee41-4dab-bc11-76892c964cc3", 00:33:08.945 "assigned_rate_limits": { 00:33:08.945 "rw_ios_per_sec": 0, 00:33:08.945 "rw_mbytes_per_sec": 0, 00:33:08.945 "r_mbytes_per_sec": 0, 00:33:08.945 "w_mbytes_per_sec": 0 00:33:08.945 }, 00:33:08.945 "claimed": false, 00:33:08.945 "zoned": false, 00:33:08.945 "supported_io_types": { 00:33:08.945 "read": true, 00:33:08.945 "write": true, 00:33:08.945 "unmap": false, 00:33:08.945 "write_zeroes": true, 00:33:08.945 "flush": false, 00:33:08.945 "reset": true, 00:33:08.945 "compare": false, 00:33:08.945 "compare_and_write": false, 00:33:08.945 "abort": false, 00:33:08.945 "nvme_admin": false, 00:33:08.945 "nvme_io": false 00:33:08.945 }, 00:33:08.945 
"driver_specific": { 00:33:08.945 "raid": { 00:33:08.945 "uuid": "6b2cd142-ee41-4dab-bc11-76892c964cc3", 00:33:08.945 "strip_size_kb": 64, 00:33:08.945 "state": "online", 00:33:08.945 "raid_level": "raid5f", 00:33:08.945 "superblock": true, 00:33:08.945 "num_base_bdevs": 4, 00:33:08.945 "num_base_bdevs_discovered": 4, 00:33:08.945 "num_base_bdevs_operational": 4, 00:33:08.945 "base_bdevs_list": [ 00:33:08.945 { 00:33:08.945 "name": "NewBaseBdev", 00:33:08.945 "uuid": "f88eced2-cb55-40f4-821d-a07987794b46", 00:33:08.945 "is_configured": true, 00:33:08.945 "data_offset": 2048, 00:33:08.945 "data_size": 63488 00:33:08.945 }, 00:33:08.945 { 00:33:08.945 "name": "BaseBdev2", 00:33:08.945 "uuid": "532caa0d-bf57-461c-8962-0fe0daf0a49a", 00:33:08.945 "is_configured": true, 00:33:08.945 "data_offset": 2048, 00:33:08.945 "data_size": 63488 00:33:08.945 }, 00:33:08.945 { 00:33:08.945 "name": "BaseBdev3", 00:33:08.945 "uuid": "56282226-8375-4bce-bfc6-4acb0527a459", 00:33:08.945 "is_configured": true, 00:33:08.945 "data_offset": 2048, 00:33:08.945 "data_size": 63488 00:33:08.946 }, 00:33:08.946 { 00:33:08.946 "name": "BaseBdev4", 00:33:08.946 "uuid": "c95c8cb6-e3a0-4995-b3d3-e9a6e0849bf9", 00:33:08.946 "is_configured": true, 00:33:08.946 "data_offset": 2048, 00:33:08.946 "data_size": 63488 00:33:08.946 } 00:33:08.946 ] 00:33:08.946 } 00:33:08.946 } 00:33:08.946 }' 00:33:08.946 07:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:08.946 07:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:33:08.946 BaseBdev2 00:33:08.946 BaseBdev3 00:33:08.946 BaseBdev4' 00:33:08.946 07:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:08.946 07:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:33:08.946 07:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:09.205 07:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:09.205 "name": "NewBaseBdev", 00:33:09.205 "aliases": [ 00:33:09.205 "f88eced2-cb55-40f4-821d-a07987794b46" 00:33:09.205 ], 00:33:09.205 "product_name": "Malloc disk", 00:33:09.205 "block_size": 512, 00:33:09.205 "num_blocks": 65536, 00:33:09.205 "uuid": "f88eced2-cb55-40f4-821d-a07987794b46", 00:33:09.205 "assigned_rate_limits": { 00:33:09.205 "rw_ios_per_sec": 0, 00:33:09.205 "rw_mbytes_per_sec": 0, 00:33:09.205 "r_mbytes_per_sec": 0, 00:33:09.205 "w_mbytes_per_sec": 0 00:33:09.205 }, 00:33:09.205 "claimed": true, 00:33:09.205 "claim_type": "exclusive_write", 00:33:09.205 "zoned": false, 00:33:09.205 "supported_io_types": { 00:33:09.205 "read": true, 00:33:09.205 "write": true, 00:33:09.205 "unmap": true, 00:33:09.205 "write_zeroes": true, 00:33:09.205 "flush": true, 00:33:09.205 "reset": true, 00:33:09.205 "compare": false, 00:33:09.205 "compare_and_write": false, 00:33:09.205 "abort": true, 00:33:09.205 "nvme_admin": false, 00:33:09.205 "nvme_io": false 00:33:09.205 }, 00:33:09.205 "memory_domains": [ 00:33:09.205 { 00:33:09.205 "dma_device_id": "system", 00:33:09.205 "dma_device_type": 1 00:33:09.205 }, 00:33:09.205 { 00:33:09.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:09.205 "dma_device_type": 2 00:33:09.205 } 00:33:09.205 ], 00:33:09.205 
"driver_specific": {} 00:33:09.205 }' 00:33:09.205 07:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:09.205 07:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:09.205 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:09.205 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:09.205 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:09.464 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:09.464 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:09.464 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:09.464 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:09.464 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:09.464 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:09.464 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:09.464 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:09.464 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:33:09.464 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:09.722 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:09.722 "name": "BaseBdev2", 00:33:09.722 "aliases": [ 00:33:09.722 "532caa0d-bf57-461c-8962-0fe0daf0a49a" 00:33:09.722 ], 00:33:09.722 "product_name": "Malloc disk", 00:33:09.722 "block_size": 512, 00:33:09.722 "num_blocks": 65536, 00:33:09.722 "uuid": "532caa0d-bf57-461c-8962-0fe0daf0a49a", 00:33:09.722 "assigned_rate_limits": { 00:33:09.722 "rw_ios_per_sec": 0, 00:33:09.722 "rw_mbytes_per_sec": 0, 00:33:09.722 "r_mbytes_per_sec": 0, 00:33:09.722 "w_mbytes_per_sec": 0 00:33:09.722 }, 00:33:09.722 "claimed": true, 00:33:09.722 "claim_type": "exclusive_write", 00:33:09.722 "zoned": false, 00:33:09.722 "supported_io_types": { 00:33:09.722 "read": true, 00:33:09.722 "write": true, 00:33:09.722 "unmap": true, 00:33:09.722 "write_zeroes": true, 00:33:09.722 "flush": true, 00:33:09.722 "reset": true, 00:33:09.722 "compare": false, 00:33:09.722 "compare_and_write": false, 00:33:09.722 "abort": true, 00:33:09.722 "nvme_admin": false, 00:33:09.722 "nvme_io": false 00:33:09.722 }, 00:33:09.722 "memory_domains": [ 00:33:09.722 { 00:33:09.722 "dma_device_id": "system", 00:33:09.722 "dma_device_type": 1 00:33:09.722 }, 00:33:09.722 { 00:33:09.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:09.722 "dma_device_type": 2 00:33:09.722 } 00:33:09.722 ], 00:33:09.722 "driver_specific": {} 00:33:09.722 }' 00:33:09.722 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:09.722 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:09.722 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:09.722 07:43:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:09.981 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:09.981 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:09.981 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:09.981 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:09.981 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:09.981 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:09.981 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:09.981 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:09.981 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:09.981 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:33:09.981 07:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:10.240 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:10.240 "name": "BaseBdev3", 00:33:10.240 "aliases": [ 00:33:10.240 "56282226-8375-4bce-bfc6-4acb0527a459" 00:33:10.240 ], 00:33:10.240 "product_name": "Malloc disk", 00:33:10.240 "block_size": 512, 00:33:10.240 "num_blocks": 65536, 00:33:10.240 "uuid": "56282226-8375-4bce-bfc6-4acb0527a459", 00:33:10.240 "assigned_rate_limits": { 00:33:10.240 "rw_ios_per_sec": 0, 00:33:10.240 "rw_mbytes_per_sec": 0, 00:33:10.240 "r_mbytes_per_sec": 0, 00:33:10.240 "w_mbytes_per_sec": 0 00:33:10.240 }, 00:33:10.240 "claimed": true, 00:33:10.240 "claim_type": "exclusive_write", 00:33:10.240 "zoned": false, 00:33:10.240 "supported_io_types": { 00:33:10.240 "read": true, 00:33:10.240 "write": true, 00:33:10.240 "unmap": true, 00:33:10.240 "write_zeroes": true, 00:33:10.240 "flush": true, 00:33:10.240 "reset": true, 00:33:10.240 "compare": false, 00:33:10.240 "compare_and_write": false, 00:33:10.240 "abort": true, 00:33:10.240 "nvme_admin": false, 00:33:10.240 "nvme_io": false 00:33:10.240 }, 00:33:10.240 "memory_domains": [ 00:33:10.240 { 00:33:10.240 "dma_device_id": "system", 00:33:10.240 "dma_device_type": 1 00:33:10.240 }, 00:33:10.240 { 00:33:10.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:10.240 "dma_device_type": 2 00:33:10.240 } 00:33:10.240 ], 00:33:10.240 "driver_specific": {} 00:33:10.240 }' 00:33:10.240 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:10.240 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:10.240 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:10.240 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:10.498 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:10.498 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:10.498 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:10.498 
07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:10.498 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:10.498 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:10.498 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:10.756 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:10.756 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:10.756 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:33:10.756 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:10.756 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:10.756 "name": "BaseBdev4", 00:33:10.756 "aliases": [ 00:33:10.756 "c95c8cb6-e3a0-4995-b3d3-e9a6e0849bf9" 00:33:10.756 ], 00:33:10.756 "product_name": "Malloc disk", 00:33:10.756 "block_size": 512, 00:33:10.756 "num_blocks": 65536, 00:33:10.756 "uuid": "c95c8cb6-e3a0-4995-b3d3-e9a6e0849bf9", 00:33:10.756 "assigned_rate_limits": { 00:33:10.756 "rw_ios_per_sec": 0, 00:33:10.756 "rw_mbytes_per_sec": 0, 00:33:10.756 "r_mbytes_per_sec": 0, 00:33:10.756 "w_mbytes_per_sec": 0 00:33:10.756 }, 00:33:10.756 "claimed": true, 00:33:10.756 "claim_type": "exclusive_write", 00:33:10.756 "zoned": false, 00:33:10.756 "supported_io_types": { 00:33:10.756 "read": true, 00:33:10.756 "write": true, 00:33:10.756 "unmap": true, 00:33:10.756 "write_zeroes": true, 00:33:10.756 "flush": true, 00:33:10.756 "reset": true, 00:33:10.756 "compare": false, 00:33:10.756 "compare_and_write": false, 00:33:10.756 "abort": true, 00:33:10.756 "nvme_admin": false, 00:33:10.756 "nvme_io": false 00:33:10.756 }, 00:33:10.756 "memory_domains": [ 00:33:10.756 { 00:33:10.756 "dma_device_id": "system", 00:33:10.756 "dma_device_type": 1 00:33:10.756 }, 00:33:10.756 { 00:33:10.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:10.757 "dma_device_type": 2 00:33:10.757 } 00:33:10.757 ], 00:33:10.757 "driver_specific": {} 00:33:10.757 }' 00:33:10.757 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:11.016 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:11.016 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:11.016 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:11.016 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:11.016 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:11.016 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:11.016 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:11.275 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:11.275 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:11.275 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:33:11.275 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:11.275 07:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:11.535 [2024-07-12 07:43:45.245915] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:11.535 [2024-07-12 07:43:45.245952] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:11.535 [2024-07-12 07:43:45.246033] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:11.535 [2024-07-12 07:43:45.246318] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:11.535 [2024-07-12 07:43:45.246329] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name Existed_Raid, state offline 00:33:11.535 07:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 164513 00:33:11.535 07:43:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 164513 ']' 00:33:11.535 07:43:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 164513 00:33:11.535 07:43:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:33:11.535 07:43:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:11.535 07:43:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 164513 00:33:11.535 07:43:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:11.535 07:43:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:11.535 07:43:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 164513' 00:33:11.535 killing process with pid 164513 00:33:11.535 07:43:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 164513 00:33:11.535 [2024-07-12 07:43:45.297896] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:11.535 07:43:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 164513 00:33:11.535 [2024-07-12 07:43:45.369028] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:12.104 07:43:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:33:12.104 00:33:12.104 real 0m29.382s 00:33:12.104 user 0m54.339s 00:33:12.104 sys 0m5.233s 00:33:12.104 07:43:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:12.104 07:43:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.104 ************************************ 00:33:12.104 END TEST raid5f_state_function_test_sb 00:33:12.104 ************************************ 00:33:12.104 07:43:45 bdev_raid -- bdev/bdev_raid.sh@888 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:33:12.104 07:43:45 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:33:12.104 07:43:45 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:12.104 07:43:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:12.104 ************************************ 
00:33:12.104 START TEST raid5f_superblock_test 00:33:12.104 ************************************ 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid5f 4 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid5f 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid5f '!=' raid1 ']' 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=165550 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 165550 /var/tmp/spdk-raid.sock 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 165550 ']' 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:12.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:12.104 07:43:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:12.104 [2024-07-12 07:43:45.926789] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
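The waitforlisten 165550 /var/tmp/spdk-raid.sock step traced above blocks until the freshly launched bdev_svc app answers RPCs on its UNIX-domain socket, so none of the later rpc.py calls can race the server's startup. A minimal sketch of that polling pattern — the helper name, retry budget, and sleep interval here are illustrative, not the actual autotest_common.sh implementation; rpc_get_methods is a standard SPDK RPC used only as a cheap liveness probe:

wait_for_rpc_sock() {
    local pid=$1 sock=$2 i
    for ((i = 0; i < 100; i++)); do
        # Stop waiting if the server process died during startup.
        kill -0 "$pid" 2>/dev/null || return 1
        # rpc.py exits 0 once the socket accepts a request.
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# e.g. wait_for_rpc_sock 165550 /var/tmp/spdk-raid.sock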
00:33:12.104 [2024-07-12 07:43:45.927067] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165550 ] 00:33:12.365 [2024-07-12 07:43:46.086754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.365 [2024-07-12 07:43:46.157495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.365 [2024-07-12 07:43:46.222922] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:13.303 07:43:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:13.303 07:43:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:33:13.303 07:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:33:13.303 07:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:13.303 07:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:33:13.303 07:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:33:13.303 07:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:13.303 07:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:13.303 07:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:33:13.303 07:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:13.303 07:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:33:13.303 malloc1 00:33:13.303 07:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:13.562 [2024-07-12 07:43:47.310162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:13.562 [2024-07-12 07:43:47.310353] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:13.562 [2024-07-12 07:43:47.310471] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:33:13.562 [2024-07-12 07:43:47.310597] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:13.562 [2024-07-12 07:43:47.313023] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:13.562 [2024-07-12 07:43:47.313179] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:13.562 pt1 00:33:13.562 07:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:33:13.562 07:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:13.562 07:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:33:13.562 07:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:33:13.563 07:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:13.563 07:43:47 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:13.563 07:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:33:13.563 07:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:13.563 07:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:33:13.822 malloc2 00:33:13.822 07:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:14.082 [2024-07-12 07:43:47.839127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:14.082 [2024-07-12 07:43:47.839279] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:14.082 [2024-07-12 07:43:47.839358] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:33:14.082 [2024-07-12 07:43:47.839463] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:14.082 [2024-07-12 07:43:47.841764] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:14.082 [2024-07-12 07:43:47.841928] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:14.082 pt2 00:33:14.082 07:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:33:14.082 07:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:14.082 07:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:33:14.082 07:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:33:14.082 07:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:33:14.082 07:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:14.082 07:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:33:14.082 07:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:14.082 07:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:33:14.341 malloc3 00:33:14.341 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:14.341 [2024-07-12 07:43:48.211073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:14.341 [2024-07-12 07:43:48.211223] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:14.341 [2024-07-12 07:43:48.211285] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:14.341 [2024-07-12 07:43:48.211418] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:14.341 [2024-07-12 07:43:48.213653] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:14.341 [2024-07-12 07:43:48.213807] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt3 00:33:14.341 pt3 00:33:14.600 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:33:14.600 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:14.600 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:33:14.600 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:33:14.600 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:33:14.600 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:14.600 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:33:14.600 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:14.600 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:33:14.600 malloc4 00:33:14.600 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:33:14.860 [2024-07-12 07:43:48.567593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:33:14.860 [2024-07-12 07:43:48.567759] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:14.860 [2024-07-12 07:43:48.567816] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:14.860 [2024-07-12 07:43:48.567920] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:14.860 [2024-07-12 07:43:48.570115] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:14.860 [2024-07-12 07:43:48.570258] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:33:14.860 pt4 00:33:14.860 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:33:14.860 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:14.860 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:33:15.120 [2024-07-12 07:43:48.807681] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:15.120 [2024-07-12 07:43:48.809748] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:15.120 [2024-07-12 07:43:48.809923] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:15.120 [2024-07-12 07:43:48.810048] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:33:15.120 [2024-07-12 07:43:48.810256] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:33:15.120 [2024-07-12 07:43:48.810365] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:15.120 [2024-07-12 07:43:48.810522] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:33:15.120 [2024-07-12 07:43:48.811257] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 
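[The loop traced above builds each leg as a 32 MiB malloc bdev (512 B blocks, so 65536 blocks) wrapped in a passthru bdev with a deterministic UUID, then creates the raid5f volume over the four passthru bdevs with a 64 KiB strip and an on-disk superblock (-s). The superblock reserves 2048 blocks (1 MiB) per leg, which is the data_offset reported later, leaving 65536 - 2048 = 63488 usable blocks per leg; raid5f with 4 legs yields (4 - 1) x 63488 = 190464 blocks, matching the "blockcnt 190464" printed at configure time. A condensed sketch; the rpc() wrapper is shorthand introduced here, not part of the script:]

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    for i in 1 2 3 4; do
        rpc bdev_malloc_create 32 512 -b "malloc$i"          # 32 MiB, 512 B blocks -> 65536 blocks
        rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    rpc bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' \
        -n raid_bdev1 -s                                     # -s: write the raid superblock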
00:33:15.120 [2024-07-12 07:43:48.811370] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:33:15.120 [2024-07-12 07:43:48.811600] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:15.120 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:33:15.120 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:15.120 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:15.120 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:15.120 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:15.120 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:15.120 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:15.120 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:15.120 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:15.120 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:15.120 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:15.120 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:15.379 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:15.379 "name": "raid_bdev1", 00:33:15.379 "uuid": "3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e", 00:33:15.379 "strip_size_kb": 64, 00:33:15.379 "state": "online", 00:33:15.379 "raid_level": "raid5f", 00:33:15.379 "superblock": true, 00:33:15.379 "num_base_bdevs": 4, 00:33:15.379 "num_base_bdevs_discovered": 4, 00:33:15.379 "num_base_bdevs_operational": 4, 00:33:15.379 "base_bdevs_list": [ 00:33:15.379 { 00:33:15.379 "name": "pt1", 00:33:15.379 "uuid": "b0304711-199b-51ca-9a58-0483da122f54", 00:33:15.379 "is_configured": true, 00:33:15.379 "data_offset": 2048, 00:33:15.379 "data_size": 63488 00:33:15.379 }, 00:33:15.379 { 00:33:15.379 "name": "pt2", 00:33:15.379 "uuid": "516dd88c-e9e0-57e7-beb9-6849529b7e9e", 00:33:15.379 "is_configured": true, 00:33:15.379 "data_offset": 2048, 00:33:15.379 "data_size": 63488 00:33:15.379 }, 00:33:15.379 { 00:33:15.379 "name": "pt3", 00:33:15.379 "uuid": "9474a2c5-d3f3-532e-9900-dfec3dc21ccc", 00:33:15.379 "is_configured": true, 00:33:15.379 "data_offset": 2048, 00:33:15.379 "data_size": 63488 00:33:15.379 }, 00:33:15.379 { 00:33:15.379 "name": "pt4", 00:33:15.379 "uuid": "990a23fd-904a-575f-a5dd-f31b6878bde0", 00:33:15.379 "is_configured": true, 00:33:15.379 "data_offset": 2048, 00:33:15.379 "data_size": 63488 00:33:15.379 } 00:33:15.379 ] 00:33:15.379 }' 00:33:15.379 07:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:15.379 07:43:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:15.948 07:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:33:15.948 07:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=raid_bdev1 00:33:15.948 07:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:15.948 07:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:15.948 07:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:15.948 07:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:33:15.948 07:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:15.948 07:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:15.948 [2024-07-12 07:43:49.707952] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:15.948 07:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:15.948 "name": "raid_bdev1", 00:33:15.948 "aliases": [ 00:33:15.948 "3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e" 00:33:15.948 ], 00:33:15.948 "product_name": "Raid Volume", 00:33:15.948 "block_size": 512, 00:33:15.948 "num_blocks": 190464, 00:33:15.948 "uuid": "3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e", 00:33:15.948 "assigned_rate_limits": { 00:33:15.948 "rw_ios_per_sec": 0, 00:33:15.948 "rw_mbytes_per_sec": 0, 00:33:15.948 "r_mbytes_per_sec": 0, 00:33:15.948 "w_mbytes_per_sec": 0 00:33:15.948 }, 00:33:15.948 "claimed": false, 00:33:15.948 "zoned": false, 00:33:15.948 "supported_io_types": { 00:33:15.948 "read": true, 00:33:15.948 "write": true, 00:33:15.948 "unmap": false, 00:33:15.948 "write_zeroes": true, 00:33:15.948 "flush": false, 00:33:15.948 "reset": true, 00:33:15.948 "compare": false, 00:33:15.948 "compare_and_write": false, 00:33:15.948 "abort": false, 00:33:15.948 "nvme_admin": false, 00:33:15.948 "nvme_io": false 00:33:15.948 }, 00:33:15.948 "driver_specific": { 00:33:15.948 "raid": { 00:33:15.948 "uuid": "3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e", 00:33:15.948 "strip_size_kb": 64, 00:33:15.948 "state": "online", 00:33:15.948 "raid_level": "raid5f", 00:33:15.948 "superblock": true, 00:33:15.948 "num_base_bdevs": 4, 00:33:15.948 "num_base_bdevs_discovered": 4, 00:33:15.948 "num_base_bdevs_operational": 4, 00:33:15.948 "base_bdevs_list": [ 00:33:15.948 { 00:33:15.948 "name": "pt1", 00:33:15.948 "uuid": "b0304711-199b-51ca-9a58-0483da122f54", 00:33:15.948 "is_configured": true, 00:33:15.948 "data_offset": 2048, 00:33:15.948 "data_size": 63488 00:33:15.948 }, 00:33:15.948 { 00:33:15.948 "name": "pt2", 00:33:15.948 "uuid": "516dd88c-e9e0-57e7-beb9-6849529b7e9e", 00:33:15.948 "is_configured": true, 00:33:15.948 "data_offset": 2048, 00:33:15.948 "data_size": 63488 00:33:15.948 }, 00:33:15.948 { 00:33:15.948 "name": "pt3", 00:33:15.948 "uuid": "9474a2c5-d3f3-532e-9900-dfec3dc21ccc", 00:33:15.948 "is_configured": true, 00:33:15.948 "data_offset": 2048, 00:33:15.948 "data_size": 63488 00:33:15.948 }, 00:33:15.948 { 00:33:15.948 "name": "pt4", 00:33:15.948 "uuid": "990a23fd-904a-575f-a5dd-f31b6878bde0", 00:33:15.948 "is_configured": true, 00:33:15.948 "data_offset": 2048, 00:33:15.948 "data_size": 63488 00:33:15.948 } 00:33:15.948 ] 00:33:15.948 } 00:33:15.948 } 00:33:15.948 }' 00:33:15.948 07:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:15.948 07:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:33:15.948 pt2 
00:33:15.948 pt3 00:33:15.948 pt4' 00:33:15.948 07:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:15.948 07:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:33:15.948 07:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:16.207 07:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:16.207 "name": "pt1", 00:33:16.207 "aliases": [ 00:33:16.207 "b0304711-199b-51ca-9a58-0483da122f54" 00:33:16.207 ], 00:33:16.207 "product_name": "passthru", 00:33:16.207 "block_size": 512, 00:33:16.207 "num_blocks": 65536, 00:33:16.207 "uuid": "b0304711-199b-51ca-9a58-0483da122f54", 00:33:16.207 "assigned_rate_limits": { 00:33:16.207 "rw_ios_per_sec": 0, 00:33:16.207 "rw_mbytes_per_sec": 0, 00:33:16.207 "r_mbytes_per_sec": 0, 00:33:16.207 "w_mbytes_per_sec": 0 00:33:16.207 }, 00:33:16.207 "claimed": true, 00:33:16.207 "claim_type": "exclusive_write", 00:33:16.207 "zoned": false, 00:33:16.207 "supported_io_types": { 00:33:16.207 "read": true, 00:33:16.207 "write": true, 00:33:16.207 "unmap": true, 00:33:16.207 "write_zeroes": true, 00:33:16.207 "flush": true, 00:33:16.207 "reset": true, 00:33:16.207 "compare": false, 00:33:16.207 "compare_and_write": false, 00:33:16.207 "abort": true, 00:33:16.207 "nvme_admin": false, 00:33:16.207 "nvme_io": false 00:33:16.207 }, 00:33:16.207 "memory_domains": [ 00:33:16.207 { 00:33:16.207 "dma_device_id": "system", 00:33:16.207 "dma_device_type": 1 00:33:16.207 }, 00:33:16.207 { 00:33:16.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:16.207 "dma_device_type": 2 00:33:16.207 } 00:33:16.207 ], 00:33:16.207 "driver_specific": { 00:33:16.207 "passthru": { 00:33:16.207 "name": "pt1", 00:33:16.207 "base_bdev_name": "malloc1" 00:33:16.207 } 00:33:16.207 } 00:33:16.207 }' 00:33:16.207 07:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:16.207 07:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:16.207 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:16.207 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:16.207 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:16.466 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:16.466 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:16.466 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:16.466 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:16.466 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:16.466 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:16.466 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:16.466 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:16.466 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:33:16.466 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 
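[The verify_raid_bdev_properties loop traced here (for pt1 above, and repeated for pt2 through pt4 in the records that follow) derives the base bdev names from the raid volume's base_bdevs_list with jq, then asserts per-bdev properties with the @205 to @208 checks. A paraphrased sketch of those assertions, assuming the same rpc() shorthand as before:]

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    for name in pt1 pt2 pt3 pt4; do
        info=$(rpc bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size    <<< "$info") == 512  ]]   # raw passthru block size
        [[ $(jq .md_size       <<< "$info") == null ]]   # no separate metadata region
        [[ $(jq .md_interleave <<< "$info") == null ]]
        [[ $(jq .dif_type      <<< "$info") == null ]]
    done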
00:33:16.726 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:16.726 "name": "pt2", 00:33:16.726 "aliases": [ 00:33:16.726 "516dd88c-e9e0-57e7-beb9-6849529b7e9e" 00:33:16.726 ], 00:33:16.726 "product_name": "passthru", 00:33:16.726 "block_size": 512, 00:33:16.726 "num_blocks": 65536, 00:33:16.726 "uuid": "516dd88c-e9e0-57e7-beb9-6849529b7e9e", 00:33:16.726 "assigned_rate_limits": { 00:33:16.726 "rw_ios_per_sec": 0, 00:33:16.726 "rw_mbytes_per_sec": 0, 00:33:16.726 "r_mbytes_per_sec": 0, 00:33:16.726 "w_mbytes_per_sec": 0 00:33:16.726 }, 00:33:16.726 "claimed": true, 00:33:16.726 "claim_type": "exclusive_write", 00:33:16.726 "zoned": false, 00:33:16.726 "supported_io_types": { 00:33:16.726 "read": true, 00:33:16.726 "write": true, 00:33:16.726 "unmap": true, 00:33:16.726 "write_zeroes": true, 00:33:16.726 "flush": true, 00:33:16.726 "reset": true, 00:33:16.726 "compare": false, 00:33:16.726 "compare_and_write": false, 00:33:16.726 "abort": true, 00:33:16.726 "nvme_admin": false, 00:33:16.726 "nvme_io": false 00:33:16.726 }, 00:33:16.726 "memory_domains": [ 00:33:16.726 { 00:33:16.726 "dma_device_id": "system", 00:33:16.726 "dma_device_type": 1 00:33:16.726 }, 00:33:16.726 { 00:33:16.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:16.726 "dma_device_type": 2 00:33:16.726 } 00:33:16.726 ], 00:33:16.726 "driver_specific": { 00:33:16.726 "passthru": { 00:33:16.726 "name": "pt2", 00:33:16.726 "base_bdev_name": "malloc2" 00:33:16.726 } 00:33:16.726 } 00:33:16.726 }' 00:33:16.726 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:16.726 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:16.987 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:16.987 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:16.987 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:16.987 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:16.987 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:16.987 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:16.987 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:16.987 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:16.987 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:17.246 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:17.246 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:17.247 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:33:17.247 07:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:17.247 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:17.247 "name": "pt3", 00:33:17.247 "aliases": [ 00:33:17.247 "9474a2c5-d3f3-532e-9900-dfec3dc21ccc" 00:33:17.247 ], 00:33:17.247 "product_name": "passthru", 00:33:17.247 "block_size": 512, 00:33:17.247 "num_blocks": 65536, 00:33:17.247 "uuid": "9474a2c5-d3f3-532e-9900-dfec3dc21ccc", 00:33:17.247 
"assigned_rate_limits": { 00:33:17.247 "rw_ios_per_sec": 0, 00:33:17.247 "rw_mbytes_per_sec": 0, 00:33:17.247 "r_mbytes_per_sec": 0, 00:33:17.247 "w_mbytes_per_sec": 0 00:33:17.247 }, 00:33:17.247 "claimed": true, 00:33:17.247 "claim_type": "exclusive_write", 00:33:17.247 "zoned": false, 00:33:17.247 "supported_io_types": { 00:33:17.247 "read": true, 00:33:17.247 "write": true, 00:33:17.247 "unmap": true, 00:33:17.247 "write_zeroes": true, 00:33:17.247 "flush": true, 00:33:17.247 "reset": true, 00:33:17.247 "compare": false, 00:33:17.247 "compare_and_write": false, 00:33:17.247 "abort": true, 00:33:17.247 "nvme_admin": false, 00:33:17.247 "nvme_io": false 00:33:17.247 }, 00:33:17.247 "memory_domains": [ 00:33:17.247 { 00:33:17.247 "dma_device_id": "system", 00:33:17.247 "dma_device_type": 1 00:33:17.247 }, 00:33:17.247 { 00:33:17.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:17.247 "dma_device_type": 2 00:33:17.247 } 00:33:17.247 ], 00:33:17.247 "driver_specific": { 00:33:17.247 "passthru": { 00:33:17.247 "name": "pt3", 00:33:17.247 "base_bdev_name": "malloc3" 00:33:17.247 } 00:33:17.247 } 00:33:17.247 }' 00:33:17.247 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:17.247 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:17.506 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:17.506 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:17.506 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:17.506 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:17.506 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:17.506 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:17.506 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:17.506 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:17.506 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:17.766 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:17.766 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:17.766 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:33:17.766 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:17.766 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:17.766 "name": "pt4", 00:33:17.766 "aliases": [ 00:33:17.766 "990a23fd-904a-575f-a5dd-f31b6878bde0" 00:33:17.766 ], 00:33:17.766 "product_name": "passthru", 00:33:17.766 "block_size": 512, 00:33:17.766 "num_blocks": 65536, 00:33:17.766 "uuid": "990a23fd-904a-575f-a5dd-f31b6878bde0", 00:33:17.766 "assigned_rate_limits": { 00:33:17.766 "rw_ios_per_sec": 0, 00:33:17.766 "rw_mbytes_per_sec": 0, 00:33:17.766 "r_mbytes_per_sec": 0, 00:33:17.766 "w_mbytes_per_sec": 0 00:33:17.766 }, 00:33:17.766 "claimed": true, 00:33:17.766 "claim_type": "exclusive_write", 00:33:17.766 "zoned": false, 00:33:17.766 "supported_io_types": { 00:33:17.766 "read": true, 00:33:17.766 "write": true, 00:33:17.766 "unmap": true, 00:33:17.766 
"write_zeroes": true, 00:33:17.766 "flush": true, 00:33:17.766 "reset": true, 00:33:17.766 "compare": false, 00:33:17.766 "compare_and_write": false, 00:33:17.766 "abort": true, 00:33:17.766 "nvme_admin": false, 00:33:17.766 "nvme_io": false 00:33:17.766 }, 00:33:17.766 "memory_domains": [ 00:33:17.766 { 00:33:17.766 "dma_device_id": "system", 00:33:17.766 "dma_device_type": 1 00:33:17.766 }, 00:33:17.766 { 00:33:17.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:17.766 "dma_device_type": 2 00:33:17.766 } 00:33:17.766 ], 00:33:17.766 "driver_specific": { 00:33:17.766 "passthru": { 00:33:17.766 "name": "pt4", 00:33:17.766 "base_bdev_name": "malloc4" 00:33:17.766 } 00:33:17.766 } 00:33:17.766 }' 00:33:17.766 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:17.766 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:18.032 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:18.032 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:18.032 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:18.032 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:18.032 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:18.032 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:18.032 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:18.032 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:18.032 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:18.292 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:18.292 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:18.292 07:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:33:18.550 [2024-07-12 07:43:52.212375] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:18.550 07:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e 00:33:18.550 07:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e ']' 00:33:18.550 07:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:18.808 [2024-07-12 07:43:52.492338] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:18.808 [2024-07-12 07:43:52.492362] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:18.808 [2024-07-12 07:43:52.492452] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:18.808 [2024-07-12 07:43:52.492533] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:18.808 [2024-07-12 07:43:52.492543] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:33:18.808 07:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 
00:33:18.808 07:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:18.808 07:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:33:18.808 07:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:33:18.808 07:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:33:18.808 07:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:19.067 07:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:33:19.067 07:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:19.326 07:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:33:19.326 07:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:33:19.585 07:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:33:19.585 07:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:33:19.844 07:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:33:19.844 07:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:19.844 07:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:33:19.844 07:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:33:19.844 07:43:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:33:19.844 07:43:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:33:19.844 07:43:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:19.844 07:43:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:19.844 07:43:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:19.844 07:43:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:19.844 07:43:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:19.844 07:43:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:19.844 07:43:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:19.844 
07:43:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:19.844 07:43:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:33:20.104 [2024-07-12 07:43:53.864528] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:33:20.104 [2024-07-12 07:43:53.866427] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:20.104 [2024-07-12 07:43:53.866473] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:33:20.104 [2024-07-12 07:43:53.866499] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:33:20.104 [2024-07-12 07:43:53.866539] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:33:20.104 [2024-07-12 07:43:53.866614] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:33:20.104 [2024-07-12 07:43:53.866659] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:33:20.104 [2024-07-12 07:43:53.866721] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:33:20.104 [2024-07-12 07:43:53.866741] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:20.104 [2024-07-12 07:43:53.866750] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:33:20.104 request: 00:33:20.104 { 00:33:20.104 "name": "raid_bdev1", 00:33:20.104 "raid_level": "raid5f", 00:33:20.104 "base_bdevs": [ 00:33:20.104 "malloc1", 00:33:20.104 "malloc2", 00:33:20.104 "malloc3", 00:33:20.104 "malloc4" 00:33:20.104 ], 00:33:20.104 "superblock": false, 00:33:20.104 "strip_size_kb": 64, 00:33:20.104 "method": "bdev_raid_create", 00:33:20.104 "req_id": 1 00:33:20.104 } 00:33:20.104 Got JSON-RPC error response 00:33:20.104 response: 00:33:20.104 { 00:33:20.104 "code": -17, 00:33:20.104 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:33:20.104 } 00:33:20.104 07:43:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:33:20.104 07:43:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:20.104 07:43:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:20.104 07:43:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:20.104 07:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:20.104 07:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:33:20.364 07:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:33:20.364 07:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:33:20.364 07:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
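[Because the volume was created with -s, its superblock persists on the underlying malloc bdevs, so the negative test above is expected to fail: bdev_raid_create directly on malloc1 through malloc4 hits "Superblock of a different raid bdev found" and returns -17 (File exists), which the NOT wrapper from autotest_common.sh asserts. Re-creating the passthru bdevs, starting with pt1 whose output follows below, then lets the examine path auto-assemble raid_bdev1 from the on-disk superblocks. A sketch of the two steps, using the same rpc() shorthand:]

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    NOT rpc bdev_raid_create -z 64 -r raid5f \
        -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1   # expected: -17, "File exists"
    rpc bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001              # examine re-claims pt1 -> "configuring"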
00:33:20.623 [2024-07-12 07:43:54.394859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:20.623 [2024-07-12 07:43:54.394934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:20.623 [2024-07-12 07:43:54.394984] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:20.623 [2024-07-12 07:43:54.395010] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:20.623 [2024-07-12 07:43:54.397179] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:20.623 [2024-07-12 07:43:54.397236] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:20.623 [2024-07-12 07:43:54.397342] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:20.623 [2024-07-12 07:43:54.397394] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:20.623 pt1 00:33:20.623 07:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:33:20.623 07:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:20.623 07:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:20.623 07:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:20.623 07:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:20.623 07:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:20.623 07:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:20.623 07:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:20.623 07:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:20.623 07:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:20.623 07:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:20.623 07:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:20.881 07:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:20.881 "name": "raid_bdev1", 00:33:20.881 "uuid": "3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e", 00:33:20.881 "strip_size_kb": 64, 00:33:20.881 "state": "configuring", 00:33:20.881 "raid_level": "raid5f", 00:33:20.881 "superblock": true, 00:33:20.881 "num_base_bdevs": 4, 00:33:20.881 "num_base_bdevs_discovered": 1, 00:33:20.881 "num_base_bdevs_operational": 4, 00:33:20.881 "base_bdevs_list": [ 00:33:20.881 { 00:33:20.881 "name": "pt1", 00:33:20.881 "uuid": "b0304711-199b-51ca-9a58-0483da122f54", 00:33:20.881 "is_configured": true, 00:33:20.881 "data_offset": 2048, 00:33:20.881 "data_size": 63488 00:33:20.881 }, 00:33:20.881 { 00:33:20.881 "name": null, 00:33:20.882 "uuid": "516dd88c-e9e0-57e7-beb9-6849529b7e9e", 00:33:20.882 "is_configured": false, 00:33:20.882 "data_offset": 2048, 00:33:20.882 "data_size": 63488 00:33:20.882 }, 00:33:20.882 { 00:33:20.882 "name": null, 00:33:20.882 "uuid": "9474a2c5-d3f3-532e-9900-dfec3dc21ccc", 00:33:20.882 "is_configured": false, 00:33:20.882 "data_offset": 2048, 
00:33:20.882 "data_size": 63488 00:33:20.882 }, 00:33:20.882 { 00:33:20.882 "name": null, 00:33:20.882 "uuid": "990a23fd-904a-575f-a5dd-f31b6878bde0", 00:33:20.882 "is_configured": false, 00:33:20.882 "data_offset": 2048, 00:33:20.882 "data_size": 63488 00:33:20.882 } 00:33:20.882 ] 00:33:20.882 }' 00:33:20.882 07:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:20.882 07:43:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:21.448 07:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:33:21.449 07:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:21.706 [2024-07-12 07:43:55.475061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:21.706 [2024-07-12 07:43:55.475130] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:21.706 [2024-07-12 07:43:55.475166] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:33:21.706 [2024-07-12 07:43:55.475186] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:21.706 [2024-07-12 07:43:55.475568] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:21.706 [2024-07-12 07:43:55.475615] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:21.706 [2024-07-12 07:43:55.475715] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:21.706 [2024-07-12 07:43:55.475738] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:21.706 pt2 00:33:21.706 07:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:21.965 [2024-07-12 07:43:55.739103] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:33:21.965 07:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:33:21.965 07:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:21.965 07:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:21.965 07:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:21.965 07:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:21.965 07:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:21.965 07:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:21.965 07:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:21.965 07:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:21.965 07:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:21.965 07:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:21.965 07:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:33:22.224 07:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:22.224 "name": "raid_bdev1", 00:33:22.224 "uuid": "3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e", 00:33:22.224 "strip_size_kb": 64, 00:33:22.224 "state": "configuring", 00:33:22.224 "raid_level": "raid5f", 00:33:22.224 "superblock": true, 00:33:22.224 "num_base_bdevs": 4, 00:33:22.224 "num_base_bdevs_discovered": 1, 00:33:22.224 "num_base_bdevs_operational": 4, 00:33:22.224 "base_bdevs_list": [ 00:33:22.224 { 00:33:22.224 "name": "pt1", 00:33:22.224 "uuid": "b0304711-199b-51ca-9a58-0483da122f54", 00:33:22.224 "is_configured": true, 00:33:22.224 "data_offset": 2048, 00:33:22.224 "data_size": 63488 00:33:22.224 }, 00:33:22.224 { 00:33:22.224 "name": null, 00:33:22.224 "uuid": "516dd88c-e9e0-57e7-beb9-6849529b7e9e", 00:33:22.224 "is_configured": false, 00:33:22.224 "data_offset": 2048, 00:33:22.225 "data_size": 63488 00:33:22.225 }, 00:33:22.225 { 00:33:22.225 "name": null, 00:33:22.225 "uuid": "9474a2c5-d3f3-532e-9900-dfec3dc21ccc", 00:33:22.225 "is_configured": false, 00:33:22.225 "data_offset": 2048, 00:33:22.225 "data_size": 63488 00:33:22.225 }, 00:33:22.225 { 00:33:22.225 "name": null, 00:33:22.225 "uuid": "990a23fd-904a-575f-a5dd-f31b6878bde0", 00:33:22.225 "is_configured": false, 00:33:22.225 "data_offset": 2048, 00:33:22.225 "data_size": 63488 00:33:22.225 } 00:33:22.225 ] 00:33:22.225 }' 00:33:22.225 07:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:22.225 07:43:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:22.793 07:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:33:22.793 07:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:33:22.793 07:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:23.053 [2024-07-12 07:43:56.703267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:23.053 [2024-07-12 07:43:56.703331] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:23.053 [2024-07-12 07:43:56.703381] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:33:23.053 [2024-07-12 07:43:56.703401] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:23.053 [2024-07-12 07:43:56.703814] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:23.053 [2024-07-12 07:43:56.703872] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:23.053 [2024-07-12 07:43:56.703945] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:23.053 [2024-07-12 07:43:56.703980] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:23.053 pt2 00:33:23.053 07:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:33:23.053 07:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:33:23.053 07:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:23.313 [2024-07-12 07:43:56.975311] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:23.313 [2024-07-12 07:43:56.975377] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:23.313 [2024-07-12 07:43:56.975425] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:33:23.313 [2024-07-12 07:43:56.975450] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:23.313 [2024-07-12 07:43:56.975812] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:23.313 [2024-07-12 07:43:56.975858] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:23.313 [2024-07-12 07:43:56.975922] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:33:23.313 [2024-07-12 07:43:56.975944] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:23.313 pt3 00:33:23.313 07:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:33:23.313 07:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:33:23.313 07:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:33:23.573 [2024-07-12 07:43:57.223333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:33:23.573 [2024-07-12 07:43:57.223384] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:23.573 [2024-07-12 07:43:57.223409] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:33:23.573 [2024-07-12 07:43:57.223431] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:23.573 [2024-07-12 07:43:57.223725] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:23.573 [2024-07-12 07:43:57.223763] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:33:23.573 [2024-07-12 07:43:57.223814] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:33:23.573 [2024-07-12 07:43:57.223829] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:33:23.573 [2024-07-12 07:43:57.223936] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:33:23.573 [2024-07-12 07:43:57.223945] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:23.573 [2024-07-12 07:43:57.223997] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:33:23.573 [2024-07-12 07:43:57.224557] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:33:23.573 [2024-07-12 07:43:57.224573] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:33:23.573 [2024-07-12 07:43:57.224657] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:23.573 pt4 00:33:23.573 07:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:33:23.573 07:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:33:23.573 07:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:33:23.573 07:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- 
# local raid_bdev_name=raid_bdev1 00:33:23.573 07:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:23.573 07:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:23.573 07:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:23.573 07:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:23.573 07:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:23.573 07:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:23.573 07:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:23.573 07:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:23.573 07:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:23.573 07:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.833 07:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:23.833 "name": "raid_bdev1", 00:33:23.833 "uuid": "3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e", 00:33:23.833 "strip_size_kb": 64, 00:33:23.833 "state": "online", 00:33:23.833 "raid_level": "raid5f", 00:33:23.833 "superblock": true, 00:33:23.833 "num_base_bdevs": 4, 00:33:23.833 "num_base_bdevs_discovered": 4, 00:33:23.833 "num_base_bdevs_operational": 4, 00:33:23.833 "base_bdevs_list": [ 00:33:23.833 { 00:33:23.833 "name": "pt1", 00:33:23.833 "uuid": "b0304711-199b-51ca-9a58-0483da122f54", 00:33:23.833 "is_configured": true, 00:33:23.833 "data_offset": 2048, 00:33:23.833 "data_size": 63488 00:33:23.833 }, 00:33:23.833 { 00:33:23.833 "name": "pt2", 00:33:23.833 "uuid": "516dd88c-e9e0-57e7-beb9-6849529b7e9e", 00:33:23.833 "is_configured": true, 00:33:23.833 "data_offset": 2048, 00:33:23.833 "data_size": 63488 00:33:23.833 }, 00:33:23.833 { 00:33:23.833 "name": "pt3", 00:33:23.833 "uuid": "9474a2c5-d3f3-532e-9900-dfec3dc21ccc", 00:33:23.833 "is_configured": true, 00:33:23.833 "data_offset": 2048, 00:33:23.833 "data_size": 63488 00:33:23.833 }, 00:33:23.833 { 00:33:23.833 "name": "pt4", 00:33:23.833 "uuid": "990a23fd-904a-575f-a5dd-f31b6878bde0", 00:33:23.833 "is_configured": true, 00:33:23.833 "data_offset": 2048, 00:33:23.833 "data_size": 63488 00:33:23.833 } 00:33:23.833 ] 00:33:23.833 }' 00:33:23.833 07:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:23.833 07:43:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.401 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:33:24.401 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:33:24.401 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:24.401 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:24.401 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:24.401 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:33:24.401 07:43:58 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:24.401 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:24.401 [2024-07-12 07:43:58.271660] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:24.662 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:24.662 "name": "raid_bdev1", 00:33:24.662 "aliases": [ 00:33:24.662 "3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e" 00:33:24.662 ], 00:33:24.662 "product_name": "Raid Volume", 00:33:24.662 "block_size": 512, 00:33:24.662 "num_blocks": 190464, 00:33:24.662 "uuid": "3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e", 00:33:24.662 "assigned_rate_limits": { 00:33:24.662 "rw_ios_per_sec": 0, 00:33:24.662 "rw_mbytes_per_sec": 0, 00:33:24.662 "r_mbytes_per_sec": 0, 00:33:24.662 "w_mbytes_per_sec": 0 00:33:24.662 }, 00:33:24.662 "claimed": false, 00:33:24.662 "zoned": false, 00:33:24.662 "supported_io_types": { 00:33:24.662 "read": true, 00:33:24.662 "write": true, 00:33:24.662 "unmap": false, 00:33:24.662 "write_zeroes": true, 00:33:24.662 "flush": false, 00:33:24.662 "reset": true, 00:33:24.662 "compare": false, 00:33:24.662 "compare_and_write": false, 00:33:24.662 "abort": false, 00:33:24.662 "nvme_admin": false, 00:33:24.662 "nvme_io": false 00:33:24.662 }, 00:33:24.662 "driver_specific": { 00:33:24.662 "raid": { 00:33:24.662 "uuid": "3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e", 00:33:24.662 "strip_size_kb": 64, 00:33:24.662 "state": "online", 00:33:24.662 "raid_level": "raid5f", 00:33:24.662 "superblock": true, 00:33:24.662 "num_base_bdevs": 4, 00:33:24.662 "num_base_bdevs_discovered": 4, 00:33:24.662 "num_base_bdevs_operational": 4, 00:33:24.662 "base_bdevs_list": [ 00:33:24.662 { 00:33:24.662 "name": "pt1", 00:33:24.662 "uuid": "b0304711-199b-51ca-9a58-0483da122f54", 00:33:24.662 "is_configured": true, 00:33:24.662 "data_offset": 2048, 00:33:24.662 "data_size": 63488 00:33:24.662 }, 00:33:24.662 { 00:33:24.662 "name": "pt2", 00:33:24.662 "uuid": "516dd88c-e9e0-57e7-beb9-6849529b7e9e", 00:33:24.662 "is_configured": true, 00:33:24.662 "data_offset": 2048, 00:33:24.662 "data_size": 63488 00:33:24.662 }, 00:33:24.662 { 00:33:24.662 "name": "pt3", 00:33:24.662 "uuid": "9474a2c5-d3f3-532e-9900-dfec3dc21ccc", 00:33:24.662 "is_configured": true, 00:33:24.662 "data_offset": 2048, 00:33:24.662 "data_size": 63488 00:33:24.662 }, 00:33:24.662 { 00:33:24.662 "name": "pt4", 00:33:24.662 "uuid": "990a23fd-904a-575f-a5dd-f31b6878bde0", 00:33:24.662 "is_configured": true, 00:33:24.662 "data_offset": 2048, 00:33:24.662 "data_size": 63488 00:33:24.662 } 00:33:24.662 ] 00:33:24.662 } 00:33:24.662 } 00:33:24.662 }' 00:33:24.662 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:24.662 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:33:24.662 pt2 00:33:24.662 pt3 00:33:24.662 pt4' 00:33:24.662 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:24.662 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:33:24.662 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:24.922 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 
-- # base_bdev_info='{ 00:33:24.922 "name": "pt1", 00:33:24.922 "aliases": [ 00:33:24.922 "b0304711-199b-51ca-9a58-0483da122f54" 00:33:24.922 ], 00:33:24.922 "product_name": "passthru", 00:33:24.922 "block_size": 512, 00:33:24.922 "num_blocks": 65536, 00:33:24.922 "uuid": "b0304711-199b-51ca-9a58-0483da122f54", 00:33:24.922 "assigned_rate_limits": { 00:33:24.922 "rw_ios_per_sec": 0, 00:33:24.922 "rw_mbytes_per_sec": 0, 00:33:24.922 "r_mbytes_per_sec": 0, 00:33:24.922 "w_mbytes_per_sec": 0 00:33:24.922 }, 00:33:24.922 "claimed": true, 00:33:24.922 "claim_type": "exclusive_write", 00:33:24.922 "zoned": false, 00:33:24.922 "supported_io_types": { 00:33:24.922 "read": true, 00:33:24.922 "write": true, 00:33:24.922 "unmap": true, 00:33:24.922 "write_zeroes": true, 00:33:24.922 "flush": true, 00:33:24.922 "reset": true, 00:33:24.922 "compare": false, 00:33:24.922 "compare_and_write": false, 00:33:24.922 "abort": true, 00:33:24.922 "nvme_admin": false, 00:33:24.922 "nvme_io": false 00:33:24.922 }, 00:33:24.922 "memory_domains": [ 00:33:24.922 { 00:33:24.922 "dma_device_id": "system", 00:33:24.922 "dma_device_type": 1 00:33:24.922 }, 00:33:24.922 { 00:33:24.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:24.922 "dma_device_type": 2 00:33:24.922 } 00:33:24.922 ], 00:33:24.922 "driver_specific": { 00:33:24.922 "passthru": { 00:33:24.922 "name": "pt1", 00:33:24.922 "base_bdev_name": "malloc1" 00:33:24.922 } 00:33:24.922 } 00:33:24.922 }' 00:33:24.922 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:24.922 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:24.922 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:24.922 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:24.922 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:24.922 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:24.922 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:24.922 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:25.181 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:25.181 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:25.181 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:25.181 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:25.181 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:25.181 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:33:25.181 07:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:25.441 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:25.441 "name": "pt2", 00:33:25.441 "aliases": [ 00:33:25.441 "516dd88c-e9e0-57e7-beb9-6849529b7e9e" 00:33:25.441 ], 00:33:25.441 "product_name": "passthru", 00:33:25.441 "block_size": 512, 00:33:25.441 "num_blocks": 65536, 00:33:25.441 "uuid": "516dd88c-e9e0-57e7-beb9-6849529b7e9e", 00:33:25.441 "assigned_rate_limits": { 00:33:25.441 "rw_ios_per_sec": 0, 00:33:25.441 "rw_mbytes_per_sec": 0, 
00:33:25.441 "r_mbytes_per_sec": 0, 00:33:25.441 "w_mbytes_per_sec": 0 00:33:25.441 }, 00:33:25.441 "claimed": true, 00:33:25.441 "claim_type": "exclusive_write", 00:33:25.441 "zoned": false, 00:33:25.441 "supported_io_types": { 00:33:25.441 "read": true, 00:33:25.441 "write": true, 00:33:25.441 "unmap": true, 00:33:25.441 "write_zeroes": true, 00:33:25.441 "flush": true, 00:33:25.441 "reset": true, 00:33:25.441 "compare": false, 00:33:25.441 "compare_and_write": false, 00:33:25.441 "abort": true, 00:33:25.441 "nvme_admin": false, 00:33:25.441 "nvme_io": false 00:33:25.441 }, 00:33:25.441 "memory_domains": [ 00:33:25.441 { 00:33:25.441 "dma_device_id": "system", 00:33:25.441 "dma_device_type": 1 00:33:25.441 }, 00:33:25.441 { 00:33:25.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:25.441 "dma_device_type": 2 00:33:25.441 } 00:33:25.441 ], 00:33:25.441 "driver_specific": { 00:33:25.441 "passthru": { 00:33:25.441 "name": "pt2", 00:33:25.441 "base_bdev_name": "malloc2" 00:33:25.441 } 00:33:25.441 } 00:33:25.441 }' 00:33:25.441 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:25.441 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:25.441 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:25.441 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:25.441 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:25.700 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:25.700 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:25.700 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:25.700 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:25.700 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:25.700 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:25.701 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:25.701 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:25.701 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:25.701 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:33:25.965 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:25.965 "name": "pt3", 00:33:25.965 "aliases": [ 00:33:25.965 "9474a2c5-d3f3-532e-9900-dfec3dc21ccc" 00:33:25.965 ], 00:33:25.965 "product_name": "passthru", 00:33:25.965 "block_size": 512, 00:33:25.965 "num_blocks": 65536, 00:33:25.965 "uuid": "9474a2c5-d3f3-532e-9900-dfec3dc21ccc", 00:33:25.965 "assigned_rate_limits": { 00:33:25.965 "rw_ios_per_sec": 0, 00:33:25.965 "rw_mbytes_per_sec": 0, 00:33:25.965 "r_mbytes_per_sec": 0, 00:33:25.965 "w_mbytes_per_sec": 0 00:33:25.965 }, 00:33:25.965 "claimed": true, 00:33:25.965 "claim_type": "exclusive_write", 00:33:25.965 "zoned": false, 00:33:25.965 "supported_io_types": { 00:33:25.965 "read": true, 00:33:25.965 "write": true, 00:33:25.965 "unmap": true, 00:33:25.965 "write_zeroes": true, 00:33:25.965 "flush": true, 00:33:25.965 "reset": true, 00:33:25.965 
"compare": false, 00:33:25.965 "compare_and_write": false, 00:33:25.965 "abort": true, 00:33:25.965 "nvme_admin": false, 00:33:25.965 "nvme_io": false 00:33:25.965 }, 00:33:25.965 "memory_domains": [ 00:33:25.965 { 00:33:25.965 "dma_device_id": "system", 00:33:25.965 "dma_device_type": 1 00:33:25.965 }, 00:33:25.965 { 00:33:25.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:25.965 "dma_device_type": 2 00:33:25.965 } 00:33:25.965 ], 00:33:25.965 "driver_specific": { 00:33:25.965 "passthru": { 00:33:25.965 "name": "pt3", 00:33:25.965 "base_bdev_name": "malloc3" 00:33:25.965 } 00:33:25.965 } 00:33:25.965 }' 00:33:25.965 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:25.965 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:26.241 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:26.241 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:26.241 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:26.241 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:26.241 07:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:26.241 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:26.241 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:26.241 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:26.241 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:26.523 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:26.523 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:26.523 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:33:26.523 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:26.523 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:26.523 "name": "pt4", 00:33:26.523 "aliases": [ 00:33:26.523 "990a23fd-904a-575f-a5dd-f31b6878bde0" 00:33:26.523 ], 00:33:26.523 "product_name": "passthru", 00:33:26.523 "block_size": 512, 00:33:26.523 "num_blocks": 65536, 00:33:26.523 "uuid": "990a23fd-904a-575f-a5dd-f31b6878bde0", 00:33:26.523 "assigned_rate_limits": { 00:33:26.523 "rw_ios_per_sec": 0, 00:33:26.523 "rw_mbytes_per_sec": 0, 00:33:26.523 "r_mbytes_per_sec": 0, 00:33:26.523 "w_mbytes_per_sec": 0 00:33:26.523 }, 00:33:26.523 "claimed": true, 00:33:26.523 "claim_type": "exclusive_write", 00:33:26.523 "zoned": false, 00:33:26.523 "supported_io_types": { 00:33:26.523 "read": true, 00:33:26.523 "write": true, 00:33:26.523 "unmap": true, 00:33:26.523 "write_zeroes": true, 00:33:26.523 "flush": true, 00:33:26.523 "reset": true, 00:33:26.523 "compare": false, 00:33:26.523 "compare_and_write": false, 00:33:26.523 "abort": true, 00:33:26.523 "nvme_admin": false, 00:33:26.523 "nvme_io": false 00:33:26.523 }, 00:33:26.523 "memory_domains": [ 00:33:26.523 { 00:33:26.523 "dma_device_id": "system", 00:33:26.523 "dma_device_type": 1 00:33:26.523 }, 00:33:26.523 { 00:33:26.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:26.523 "dma_device_type": 2 00:33:26.523 } 
00:33:26.523 ], 00:33:26.523 "driver_specific": { 00:33:26.523 "passthru": { 00:33:26.523 "name": "pt4", 00:33:26.523 "base_bdev_name": "malloc4" 00:33:26.523 } 00:33:26.523 } 00:33:26.523 }' 00:33:26.523 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:26.523 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:26.803 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:26.803 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:26.803 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:26.803 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:26.803 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:26.803 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:26.803 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:26.803 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:26.803 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:26.803 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:27.076 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:27.076 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:33:27.076 [2024-07-12 07:44:00.936144] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:27.076 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e '!=' 3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e ']' 00:33:27.442 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid5f 00:33:27.442 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:33:27.442 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:33:27.442 07:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:27.442 [2024-07-12 07:44:01.124106] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:33:27.442 07:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:27.442 07:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:27.442 07:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:27.442 07:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:27.442 07:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:27.442 07:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:27.442 07:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:27.442 07:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:27.442 07:44:01 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:27.442 07:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:27.442 07:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:27.442 07:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:27.700 07:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:27.700 "name": "raid_bdev1", 00:33:27.700 "uuid": "3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e", 00:33:27.700 "strip_size_kb": 64, 00:33:27.700 "state": "online", 00:33:27.700 "raid_level": "raid5f", 00:33:27.700 "superblock": true, 00:33:27.700 "num_base_bdevs": 4, 00:33:27.700 "num_base_bdevs_discovered": 3, 00:33:27.700 "num_base_bdevs_operational": 3, 00:33:27.700 "base_bdevs_list": [ 00:33:27.700 { 00:33:27.700 "name": null, 00:33:27.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:27.700 "is_configured": false, 00:33:27.700 "data_offset": 2048, 00:33:27.700 "data_size": 63488 00:33:27.700 }, 00:33:27.700 { 00:33:27.700 "name": "pt2", 00:33:27.700 "uuid": "516dd88c-e9e0-57e7-beb9-6849529b7e9e", 00:33:27.700 "is_configured": true, 00:33:27.700 "data_offset": 2048, 00:33:27.700 "data_size": 63488 00:33:27.700 }, 00:33:27.700 { 00:33:27.700 "name": "pt3", 00:33:27.700 "uuid": "9474a2c5-d3f3-532e-9900-dfec3dc21ccc", 00:33:27.700 "is_configured": true, 00:33:27.700 "data_offset": 2048, 00:33:27.700 "data_size": 63488 00:33:27.700 }, 00:33:27.700 { 00:33:27.700 "name": "pt4", 00:33:27.700 "uuid": "990a23fd-904a-575f-a5dd-f31b6878bde0", 00:33:27.700 "is_configured": true, 00:33:27.700 "data_offset": 2048, 00:33:27.700 "data_size": 63488 00:33:27.700 } 00:33:27.700 ] 00:33:27.700 }' 00:33:27.700 07:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:27.700 07:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:28.266 07:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:28.523 [2024-07-12 07:44:02.149847] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:28.523 [2024-07-12 07:44:02.149890] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:28.523 [2024-07-12 07:44:02.149988] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:28.523 [2024-07-12 07:44:02.150074] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:28.523 [2024-07-12 07:44:02.150085] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:33:28.523 07:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:28.523 07:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:33:28.781 07:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:33:28.781 07:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:33:28.781 07:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:33:28.781 07:44:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:33:28.781 07:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:29.038 07:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:33:29.038 07:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:33:29.038 07:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:33:29.297 07:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:33:29.297 07:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:33:29.297 07:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:33:29.297 07:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:33:29.297 07:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:33:29.297 07:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:33:29.297 07:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:33:29.297 07:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:29.555 [2024-07-12 07:44:03.346038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:29.555 [2024-07-12 07:44:03.346146] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:29.555 [2024-07-12 07:44:03.346192] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:33:29.555 [2024-07-12 07:44:03.346226] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:29.555 [2024-07-12 07:44:03.349030] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:29.555 [2024-07-12 07:44:03.349105] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:29.555 [2024-07-12 07:44:03.349211] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:29.555 [2024-07-12 07:44:03.349256] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:29.555 pt2 00:33:29.556 07:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:33:29.556 07:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:29.556 07:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:29.556 07:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:29.556 07:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:29.556 07:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:29.556 07:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:29.556 07:44:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:29.556 07:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:29.556 07:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:29.556 07:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:29.556 07:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:29.815 07:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:29.815 "name": "raid_bdev1", 00:33:29.815 "uuid": "3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e", 00:33:29.815 "strip_size_kb": 64, 00:33:29.815 "state": "configuring", 00:33:29.815 "raid_level": "raid5f", 00:33:29.815 "superblock": true, 00:33:29.815 "num_base_bdevs": 4, 00:33:29.815 "num_base_bdevs_discovered": 1, 00:33:29.815 "num_base_bdevs_operational": 3, 00:33:29.815 "base_bdevs_list": [ 00:33:29.815 { 00:33:29.815 "name": null, 00:33:29.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:29.815 "is_configured": false, 00:33:29.815 "data_offset": 2048, 00:33:29.815 "data_size": 63488 00:33:29.815 }, 00:33:29.815 { 00:33:29.815 "name": "pt2", 00:33:29.815 "uuid": "516dd88c-e9e0-57e7-beb9-6849529b7e9e", 00:33:29.815 "is_configured": true, 00:33:29.815 "data_offset": 2048, 00:33:29.815 "data_size": 63488 00:33:29.815 }, 00:33:29.815 { 00:33:29.815 "name": null, 00:33:29.815 "uuid": "9474a2c5-d3f3-532e-9900-dfec3dc21ccc", 00:33:29.815 "is_configured": false, 00:33:29.815 "data_offset": 2048, 00:33:29.815 "data_size": 63488 00:33:29.815 }, 00:33:29.815 { 00:33:29.815 "name": null, 00:33:29.815 "uuid": "990a23fd-904a-575f-a5dd-f31b6878bde0", 00:33:29.815 "is_configured": false, 00:33:29.815 "data_offset": 2048, 00:33:29.815 "data_size": 63488 00:33:29.815 } 00:33:29.815 ] 00:33:29.815 }' 00:33:29.815 07:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:29.815 07:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.381 07:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:33:30.381 07:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:33:30.381 07:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:30.639 [2024-07-12 07:44:04.454200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:30.639 [2024-07-12 07:44:04.454304] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:30.639 [2024-07-12 07:44:04.454352] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:33:30.639 [2024-07-12 07:44:04.454376] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:30.639 [2024-07-12 07:44:04.454879] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:30.639 [2024-07-12 07:44:04.454932] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:30.639 [2024-07-12 07:44:04.455030] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:33:30.639 [2024-07-12 07:44:04.455056] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:30.639 pt3 00:33:30.640 07:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:33:30.640 07:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:30.640 07:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:30.640 07:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:30.640 07:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:30.640 07:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:30.640 07:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:30.640 07:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:30.640 07:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:30.640 07:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:30.640 07:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:30.640 07:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:30.898 07:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:30.898 "name": "raid_bdev1", 00:33:30.898 "uuid": "3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e", 00:33:30.898 "strip_size_kb": 64, 00:33:30.898 "state": "configuring", 00:33:30.898 "raid_level": "raid5f", 00:33:30.898 "superblock": true, 00:33:30.898 "num_base_bdevs": 4, 00:33:30.898 "num_base_bdevs_discovered": 2, 00:33:30.898 "num_base_bdevs_operational": 3, 00:33:30.898 "base_bdevs_list": [ 00:33:30.898 { 00:33:30.898 "name": null, 00:33:30.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:30.898 "is_configured": false, 00:33:30.898 "data_offset": 2048, 00:33:30.898 "data_size": 63488 00:33:30.898 }, 00:33:30.898 { 00:33:30.898 "name": "pt2", 00:33:30.898 "uuid": "516dd88c-e9e0-57e7-beb9-6849529b7e9e", 00:33:30.898 "is_configured": true, 00:33:30.898 "data_offset": 2048, 00:33:30.898 "data_size": 63488 00:33:30.898 }, 00:33:30.898 { 00:33:30.898 "name": "pt3", 00:33:30.898 "uuid": "9474a2c5-d3f3-532e-9900-dfec3dc21ccc", 00:33:30.898 "is_configured": true, 00:33:30.898 "data_offset": 2048, 00:33:30.898 "data_size": 63488 00:33:30.898 }, 00:33:30.898 { 00:33:30.898 "name": null, 00:33:30.898 "uuid": "990a23fd-904a-575f-a5dd-f31b6878bde0", 00:33:30.898 "is_configured": false, 00:33:30.898 "data_offset": 2048, 00:33:30.898 "data_size": 63488 00:33:30.898 } 00:33:30.898 ] 00:33:30.898 }' 00:33:30.898 07:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:30.898 07:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:31.465 07:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:33:31.465 07:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:33:31.465 07:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:33:31.465 07:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:33:31.724 [2024-07-12 07:44:05.482411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:33:31.724 [2024-07-12 07:44:05.482513] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:31.724 [2024-07-12 07:44:05.482559] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:33:31.724 [2024-07-12 07:44:05.482583] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:31.724 [2024-07-12 07:44:05.483122] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:31.724 [2024-07-12 07:44:05.483163] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:33:31.724 [2024-07-12 07:44:05.483263] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:33:31.724 [2024-07-12 07:44:05.483290] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:33:31.724 [2024-07-12 07:44:05.483436] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:33:31.724 [2024-07-12 07:44:05.483452] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:31.724 [2024-07-12 07:44:05.483521] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80 00:33:31.724 [2024-07-12 07:44:05.484299] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:33:31.724 [2024-07-12 07:44:05.484321] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:33:31.724 [2024-07-12 07:44:05.484540] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:31.724 pt4 00:33:31.724 07:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:31.724 07:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:31.724 07:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:31.724 07:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:31.724 07:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:31.724 07:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:31.724 07:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:31.724 07:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:31.724 07:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:31.724 07:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:31.724 07:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:31.724 07:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:31.984 07:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:31.984 "name": "raid_bdev1", 00:33:31.984 "uuid": "3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e", 
00:33:31.984 "strip_size_kb": 64, 00:33:31.984 "state": "online", 00:33:31.984 "raid_level": "raid5f", 00:33:31.984 "superblock": true, 00:33:31.984 "num_base_bdevs": 4, 00:33:31.984 "num_base_bdevs_discovered": 3, 00:33:31.984 "num_base_bdevs_operational": 3, 00:33:31.984 "base_bdevs_list": [ 00:33:31.984 { 00:33:31.984 "name": null, 00:33:31.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:31.984 "is_configured": false, 00:33:31.984 "data_offset": 2048, 00:33:31.984 "data_size": 63488 00:33:31.984 }, 00:33:31.984 { 00:33:31.984 "name": "pt2", 00:33:31.984 "uuid": "516dd88c-e9e0-57e7-beb9-6849529b7e9e", 00:33:31.984 "is_configured": true, 00:33:31.984 "data_offset": 2048, 00:33:31.984 "data_size": 63488 00:33:31.984 }, 00:33:31.984 { 00:33:31.984 "name": "pt3", 00:33:31.984 "uuid": "9474a2c5-d3f3-532e-9900-dfec3dc21ccc", 00:33:31.984 "is_configured": true, 00:33:31.984 "data_offset": 2048, 00:33:31.984 "data_size": 63488 00:33:31.984 }, 00:33:31.984 { 00:33:31.984 "name": "pt4", 00:33:31.984 "uuid": "990a23fd-904a-575f-a5dd-f31b6878bde0", 00:33:31.984 "is_configured": true, 00:33:31.984 "data_offset": 2048, 00:33:31.984 "data_size": 63488 00:33:31.984 } 00:33:31.984 ] 00:33:31.984 }' 00:33:31.984 07:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:31.984 07:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:32.550 07:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:32.809 [2024-07-12 07:44:06.631456] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:32.809 [2024-07-12 07:44:06.631494] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:32.809 [2024-07-12 07:44:06.631578] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:32.809 [2024-07-12 07:44:06.631660] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:32.809 [2024-07-12 07:44:06.631671] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:33:32.809 07:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:33:32.809 07:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:33.067 07:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:33:33.067 07:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:33:33.067 07:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:33:33.067 07:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:33:33.067 07:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:33:33.326 07:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:33.584 [2024-07-12 07:44:07.303534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:33.584 [2024-07-12 07:44:07.303634] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:33:33.584 [2024-07-12 07:44:07.303674] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:33:33.584 [2024-07-12 07:44:07.303701] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:33.584 [2024-07-12 07:44:07.306477] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:33.584 [2024-07-12 07:44:07.306549] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:33.584 [2024-07-12 07:44:07.306643] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:33.584 [2024-07-12 07:44:07.306698] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:33.584 [2024-07-12 07:44:07.306868] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:33:33.585 [2024-07-12 07:44:07.306894] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:33.585 [2024-07-12 07:44:07.306928] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:33:33.585 [2024-07-12 07:44:07.306976] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:33.585 [2024-07-12 07:44:07.307106] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:33.585 pt1 00:33:33.585 07:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:33:33.585 07:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:33:33.585 07:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:33.585 07:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:33.585 07:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:33.585 07:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:33.585 07:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:33.585 07:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:33.585 07:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:33.585 07:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:33.585 07:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:33.585 07:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:33.585 07:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:33.843 07:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:33.843 "name": "raid_bdev1", 00:33:33.843 "uuid": "3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e", 00:33:33.843 "strip_size_kb": 64, 00:33:33.843 "state": "configuring", 00:33:33.843 "raid_level": "raid5f", 00:33:33.843 "superblock": true, 00:33:33.843 "num_base_bdevs": 4, 00:33:33.843 "num_base_bdevs_discovered": 2, 00:33:33.843 "num_base_bdevs_operational": 3, 00:33:33.843 "base_bdevs_list": [ 00:33:33.843 { 00:33:33.843 "name": 
null, 00:33:33.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:33.843 "is_configured": false, 00:33:33.843 "data_offset": 2048, 00:33:33.843 "data_size": 63488 00:33:33.843 }, 00:33:33.843 { 00:33:33.843 "name": "pt2", 00:33:33.843 "uuid": "516dd88c-e9e0-57e7-beb9-6849529b7e9e", 00:33:33.843 "is_configured": true, 00:33:33.843 "data_offset": 2048, 00:33:33.843 "data_size": 63488 00:33:33.843 }, 00:33:33.843 { 00:33:33.843 "name": "pt3", 00:33:33.843 "uuid": "9474a2c5-d3f3-532e-9900-dfec3dc21ccc", 00:33:33.843 "is_configured": true, 00:33:33.843 "data_offset": 2048, 00:33:33.843 "data_size": 63488 00:33:33.843 }, 00:33:33.843 { 00:33:33.843 "name": null, 00:33:33.843 "uuid": "990a23fd-904a-575f-a5dd-f31b6878bde0", 00:33:33.843 "is_configured": false, 00:33:33.843 "data_offset": 2048, 00:33:33.843 "data_size": 63488 00:33:33.843 } 00:33:33.843 ] 00:33:33.843 }' 00:33:33.843 07:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:33.843 07:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:34.411 07:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:33:34.411 07:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:34.411 07:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:33:34.411 07:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:33:34.671 [2024-07-12 07:44:08.443770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:33:34.671 [2024-07-12 07:44:08.443879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:34.671 [2024-07-12 07:44:08.443923] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:33:34.671 [2024-07-12 07:44:08.443965] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:34.671 [2024-07-12 07:44:08.444476] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:34.671 [2024-07-12 07:44:08.444532] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:33:34.671 [2024-07-12 07:44:08.444624] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:33:34.671 [2024-07-12 07:44:08.444649] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:33:34.671 [2024-07-12 07:44:08.444779] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:33:34.671 [2024-07-12 07:44:08.444787] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:34.671 [2024-07-12 07:44:08.444860] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:33:34.671 [2024-07-12 07:44:08.445647] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:33:34.671 [2024-07-12 07:44:08.445661] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:33:34.671 [2024-07-12 07:44:08.445848] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:34.671 pt4 00:33:34.671 07:44:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:34.671 07:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:34.671 07:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:34.671 07:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:34.671 07:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:34.671 07:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:34.671 07:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:34.671 07:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:34.671 07:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:34.671 07:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:34.671 07:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:34.671 07:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:34.930 07:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:34.930 "name": "raid_bdev1", 00:33:34.930 "uuid": "3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e", 00:33:34.930 "strip_size_kb": 64, 00:33:34.930 "state": "online", 00:33:34.930 "raid_level": "raid5f", 00:33:34.930 "superblock": true, 00:33:34.930 "num_base_bdevs": 4, 00:33:34.930 "num_base_bdevs_discovered": 3, 00:33:34.930 "num_base_bdevs_operational": 3, 00:33:34.930 "base_bdevs_list": [ 00:33:34.930 { 00:33:34.930 "name": null, 00:33:34.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:34.930 "is_configured": false, 00:33:34.930 "data_offset": 2048, 00:33:34.930 "data_size": 63488 00:33:34.930 }, 00:33:34.930 { 00:33:34.930 "name": "pt2", 00:33:34.930 "uuid": "516dd88c-e9e0-57e7-beb9-6849529b7e9e", 00:33:34.930 "is_configured": true, 00:33:34.930 "data_offset": 2048, 00:33:34.930 "data_size": 63488 00:33:34.930 }, 00:33:34.930 { 00:33:34.930 "name": "pt3", 00:33:34.930 "uuid": "9474a2c5-d3f3-532e-9900-dfec3dc21ccc", 00:33:34.930 "is_configured": true, 00:33:34.930 "data_offset": 2048, 00:33:34.930 "data_size": 63488 00:33:34.930 }, 00:33:34.930 { 00:33:34.930 "name": "pt4", 00:33:34.930 "uuid": "990a23fd-904a-575f-a5dd-f31b6878bde0", 00:33:34.930 "is_configured": true, 00:33:34.930 "data_offset": 2048, 00:33:34.930 "data_size": 63488 00:33:34.930 } 00:33:34.930 ] 00:33:34.930 }' 00:33:34.930 07:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:34.930 07:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:35.496 07:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:33:35.496 07:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:35.754 07:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:33:35.754 07:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:35.754 07:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:33:36.013 [2024-07-12 07:44:09.782008] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:36.013 07:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e '!=' 3a519359-9dd4-4fd7-ab17-e3e0ce3cc26e ']' 00:33:36.013 07:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 165550 00:33:36.013 07:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 165550 ']' 00:33:36.013 07:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # kill -0 165550 00:33:36.013 07:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@951 -- # uname 00:33:36.013 07:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:36.013 07:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 165550 00:33:36.013 killing process with pid 165550 00:33:36.013 07:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:36.013 07:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:36.013 07:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 165550' 00:33:36.013 07:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@965 -- # kill 165550 00:33:36.013 [2024-07-12 07:44:09.825366] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:36.013 07:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # wait 165550 00:33:36.013 [2024-07-12 07:44:09.825461] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:36.013 [2024-07-12 07:44:09.825563] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:36.013 [2024-07-12 07:44:09.825572] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:33:36.271 [2024-07-12 07:44:09.903393] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:36.530 07:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:33:36.530 00:33:36.530 real 0m24.447s 00:33:36.530 user 0m44.663s 00:33:36.530 sys 0m4.306s 00:33:36.530 07:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:36.530 07:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:36.531 ************************************ 00:33:36.531 END TEST raid5f_superblock_test 00:33:36.531 ************************************ 00:33:36.531 07:44:10 bdev_raid -- bdev/bdev_raid.sh@889 -- # '[' true = true ']' 00:33:36.531 07:44:10 bdev_raid -- bdev/bdev_raid.sh@890 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:33:36.531 07:44:10 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:33:36.531 07:44:10 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:36.531 07:44:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:36.531 ************************************ 00:33:36.531 START TEST raid5f_rebuild_test 00:33:36.531 
************************************ 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid5f 4 false false true 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:33:36.531 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:33:36.790 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=166378 00:33:36.791 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 
166378 /var/tmp/spdk-raid.sock 00:33:36.791 07:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:36.791 07:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@827 -- # '[' -z 166378 ']' 00:33:36.791 07:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:36.791 07:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:36.791 07:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:36.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:36.791 07:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:36.791 07:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:36.791 [2024-07-12 07:44:10.476310] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:36.791 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:36.791 Zero copy mechanism will not be used. 00:33:36.791 [2024-07-12 07:44:10.476597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166378 ] 00:33:36.791 [2024-07-12 07:44:10.632439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.050 [2024-07-12 07:44:10.688238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:37.050 [2024-07-12 07:44:10.737181] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:37.619 07:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:37.619 07:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # return 0 00:33:37.619 07:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:37.619 07:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:37.878 BaseBdev1_malloc 00:33:37.878 07:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:38.137 [2024-07-12 07:44:11.864231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:38.137 [2024-07-12 07:44:11.864471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:38.137 [2024-07-12 07:44:11.864550] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:33:38.137 [2024-07-12 07:44:11.864684] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:38.137 [2024-07-12 07:44:11.867195] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:38.137 [2024-07-12 07:44:11.867402] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:38.137 BaseBdev1 00:33:38.137 07:44:11 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:38.137 07:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:38.397 BaseBdev2_malloc 00:33:38.397 07:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:38.656 [2024-07-12 07:44:12.385088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:38.656 [2024-07-12 07:44:12.385315] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:38.656 [2024-07-12 07:44:12.385384] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:33:38.656 [2024-07-12 07:44:12.385497] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:38.656 [2024-07-12 07:44:12.387930] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:38.656 [2024-07-12 07:44:12.388082] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:38.656 BaseBdev2 00:33:38.656 07:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:38.656 07:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:33:38.915 BaseBdev3_malloc 00:33:38.915 07:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:33:39.174 [2024-07-12 07:44:12.812839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:33:39.174 [2024-07-12 07:44:12.813114] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:39.174 [2024-07-12 07:44:12.813186] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:39.174 [2024-07-12 07:44:12.813313] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:39.174 [2024-07-12 07:44:12.815679] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:39.174 [2024-07-12 07:44:12.815830] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:39.174 BaseBdev3 00:33:39.174 07:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:39.174 07:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:33:39.432 BaseBdev4_malloc 00:33:39.432 07:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:33:39.432 [2024-07-12 07:44:13.261632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:33:39.432 [2024-07-12 07:44:13.261930] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:39.432 [2024-07-12 07:44:13.261995] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:39.432 [2024-07-12 07:44:13.262114] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:39.432 [2024-07-12 07:44:13.264362] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:39.432 [2024-07-12 07:44:13.264548] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:33:39.432 BaseBdev4 00:33:39.432 07:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:33:39.690 spare_malloc 00:33:39.690 07:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:39.949 spare_delay 00:33:39.949 07:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:40.208 [2024-07-12 07:44:13.878614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:40.208 [2024-07-12 07:44:13.878852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:40.208 [2024-07-12 07:44:13.878912] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:40.208 [2024-07-12 07:44:13.879061] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:40.208 [2024-07-12 07:44:13.881464] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:40.208 [2024-07-12 07:44:13.881650] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:40.208 spare 00:33:40.208 07:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:33:40.467 [2024-07-12 07:44:14.126752] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:40.467 [2024-07-12 07:44:14.128963] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:40.467 [2024-07-12 07:44:14.129148] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:40.467 [2024-07-12 07:44:14.129222] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:40.467 [2024-07-12 07:44:14.129418] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:33:40.467 [2024-07-12 07:44:14.129457] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:33:40.467 [2024-07-12 07:44:14.129629] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:33:40.467 [2024-07-12 07:44:14.130353] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:33:40.467 [2024-07-12 07:44:14.130496] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:33:40.467 [2024-07-12 07:44:14.130788] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:40.467 07:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:33:40.467 07:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:40.467 07:44:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:40.467 07:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:40.467 07:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:40.467 07:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:40.467 07:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:40.467 07:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:40.467 07:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:40.467 07:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:40.467 07:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:40.467 07:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:40.467 07:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:40.467 "name": "raid_bdev1", 00:33:40.467 "uuid": "2ca86c6b-ad9a-49d9-84b6-e336a7309de0", 00:33:40.467 "strip_size_kb": 64, 00:33:40.467 "state": "online", 00:33:40.467 "raid_level": "raid5f", 00:33:40.467 "superblock": false, 00:33:40.467 "num_base_bdevs": 4, 00:33:40.467 "num_base_bdevs_discovered": 4, 00:33:40.467 "num_base_bdevs_operational": 4, 00:33:40.467 "base_bdevs_list": [ 00:33:40.467 { 00:33:40.467 "name": "BaseBdev1", 00:33:40.467 "uuid": "92d2ec0f-0c78-545e-9ebb-ad33a9f48ae0", 00:33:40.467 "is_configured": true, 00:33:40.467 "data_offset": 0, 00:33:40.467 "data_size": 65536 00:33:40.467 }, 00:33:40.467 { 00:33:40.467 "name": "BaseBdev2", 00:33:40.467 "uuid": "e72feaa4-4686-531e-9914-83aac4ce4b06", 00:33:40.467 "is_configured": true, 00:33:40.467 "data_offset": 0, 00:33:40.467 "data_size": 65536 00:33:40.467 }, 00:33:40.467 { 00:33:40.467 "name": "BaseBdev3", 00:33:40.467 "uuid": "73624f6a-0f15-5ff1-8745-6f5b98f46fab", 00:33:40.467 "is_configured": true, 00:33:40.467 "data_offset": 0, 00:33:40.467 "data_size": 65536 00:33:40.467 }, 00:33:40.467 { 00:33:40.467 "name": "BaseBdev4", 00:33:40.467 "uuid": "9137178c-adb2-5a49-a42f-fa5984282478", 00:33:40.467 "is_configured": true, 00:33:40.467 "data_offset": 0, 00:33:40.467 "data_size": 65536 00:33:40.467 } 00:33:40.467 ] 00:33:40.467 }' 00:33:40.467 07:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:40.467 07:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.035 07:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:41.035 07:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:33:41.295 [2024-07-12 07:44:15.115049] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:41.295 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=196608 00:33:41.295 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:41.295 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:33:41.555 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:33:41.555 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:33:41.555 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:33:41.555 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:33:41.555 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:33:41.555 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:41.555 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:33:41.555 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:41.555 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:41.555 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:41.555 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:33:41.555 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:41.555 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:41.555 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:33:41.815 [2024-07-12 07:44:15.591002] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:33:41.815 /dev/nbd0 00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:41.815 1+0 records in 00:33:41.815 1+0 records out 00:33:41.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654803 s, 6.3 MB/s 00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 
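The write_unit_size=384 set in the trace just below, and the dd bs=196608 that follows it, are the full-stripe geometry of this raid5f array. With 4 base bdevs and the 64 KiB strip size passed as -z 64 to bdev_raid_create, one strip per stripe holds parity, so each full stripe carries 3 data strips: 3 * 64 KiB = 192 KiB = 196608 bytes = 384 blocks of 512 B (the 192 echoed in the trace is that figure in KiB). A minimal sketch of the arithmetic, with the values assumed from this run:

    strip_size_kb=64                                     # -z 64 passed to bdev_raid_create above
    num_base_bdevs=4
    blocklen=512                                         # from "blockcnt 196608, blocklen 512"
    data_strips=$((num_base_bdevs - 1))                  # raid5f: one strip per stripe is parity
    stripe_bytes=$((strip_size_kb * 1024 * data_strips)) # 196608 bytes of data per full stripe
    write_unit_size=$((stripe_bytes / blocklen))         # 384 blocks, the write_unit_size traced below

Writing with bs=196608 keeps every dd write aligned to one full stripe, and count=512 covers the whole array: 512 * 196608 = 100663296 bytes, exactly the 196608 blocks * 512 B reported as raid_bdev_size above, which matches the "100663296 bytes (101 MB, 96 MiB) copied" line that dd prints next.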
00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # write_unit_size=384 00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # echo 192 00:33:41.815 07:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:33:42.384 512+0 records in 00:33:42.384 512+0 records out 00:33:42.384 100663296 bytes (101 MB, 96 MiB) copied, 0.495961 s, 203 MB/s 00:33:42.384 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:33:42.384 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:42.384 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:42.384 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:42.384 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:33:42.384 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:42.384 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:42.643 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:42.643 [2024-07-12 07:44:16.441494] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:42.643 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:42.643 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:42.643 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:42.643 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:42.643 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:42.643 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:33:42.643 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:33:42.643 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:33:42.901 [2024-07-12 07:44:16.705122] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:42.901 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:42.901 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:42.901 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:42.901 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:42.901 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:42.901 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:33:42.901 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:42.901 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:42.901 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:42.901 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:42.901 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:42.901 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:43.161 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:43.161 "name": "raid_bdev1", 00:33:43.161 "uuid": "2ca86c6b-ad9a-49d9-84b6-e336a7309de0", 00:33:43.161 "strip_size_kb": 64, 00:33:43.161 "state": "online", 00:33:43.161 "raid_level": "raid5f", 00:33:43.161 "superblock": false, 00:33:43.161 "num_base_bdevs": 4, 00:33:43.161 "num_base_bdevs_discovered": 3, 00:33:43.161 "num_base_bdevs_operational": 3, 00:33:43.161 "base_bdevs_list": [ 00:33:43.161 { 00:33:43.161 "name": null, 00:33:43.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:43.161 "is_configured": false, 00:33:43.161 "data_offset": 0, 00:33:43.161 "data_size": 65536 00:33:43.161 }, 00:33:43.161 { 00:33:43.161 "name": "BaseBdev2", 00:33:43.161 "uuid": "e72feaa4-4686-531e-9914-83aac4ce4b06", 00:33:43.161 "is_configured": true, 00:33:43.161 "data_offset": 0, 00:33:43.161 "data_size": 65536 00:33:43.161 }, 00:33:43.161 { 00:33:43.161 "name": "BaseBdev3", 00:33:43.161 "uuid": "73624f6a-0f15-5ff1-8745-6f5b98f46fab", 00:33:43.161 "is_configured": true, 00:33:43.161 "data_offset": 0, 00:33:43.161 "data_size": 65536 00:33:43.161 }, 00:33:43.161 { 00:33:43.161 "name": "BaseBdev4", 00:33:43.161 "uuid": "9137178c-adb2-5a49-a42f-fa5984282478", 00:33:43.161 "is_configured": true, 00:33:43.161 "data_offset": 0, 00:33:43.161 "data_size": 65536 00:33:43.161 } 00:33:43.161 ] 00:33:43.161 }' 00:33:43.161 07:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:43.161 07:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.730 07:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:43.988 [2024-07-12 07:44:17.761302] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:43.988 [2024-07-12 07:44:17.764826] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:33:43.988 [2024-07-12 07:44:17.767403] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:43.988 07:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:33:44.925 07:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:44.925 07:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:44.925 07:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:44.925 07:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:44.925 07:44:18 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:44.925 07:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:44.925 07:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:45.184 07:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:45.184 "name": "raid_bdev1", 00:33:45.184 "uuid": "2ca86c6b-ad9a-49d9-84b6-e336a7309de0", 00:33:45.184 "strip_size_kb": 64, 00:33:45.184 "state": "online", 00:33:45.184 "raid_level": "raid5f", 00:33:45.184 "superblock": false, 00:33:45.184 "num_base_bdevs": 4, 00:33:45.184 "num_base_bdevs_discovered": 4, 00:33:45.184 "num_base_bdevs_operational": 4, 00:33:45.184 "process": { 00:33:45.184 "type": "rebuild", 00:33:45.184 "target": "spare", 00:33:45.184 "progress": { 00:33:45.184 "blocks": 23040, 00:33:45.184 "percent": 11 00:33:45.184 } 00:33:45.184 }, 00:33:45.184 "base_bdevs_list": [ 00:33:45.184 { 00:33:45.184 "name": "spare", 00:33:45.184 "uuid": "a93bfcda-618e-5c66-8579-f60f93386dc9", 00:33:45.184 "is_configured": true, 00:33:45.184 "data_offset": 0, 00:33:45.184 "data_size": 65536 00:33:45.184 }, 00:33:45.184 { 00:33:45.184 "name": "BaseBdev2", 00:33:45.184 "uuid": "e72feaa4-4686-531e-9914-83aac4ce4b06", 00:33:45.184 "is_configured": true, 00:33:45.184 "data_offset": 0, 00:33:45.184 "data_size": 65536 00:33:45.184 }, 00:33:45.185 { 00:33:45.185 "name": "BaseBdev3", 00:33:45.185 "uuid": "73624f6a-0f15-5ff1-8745-6f5b98f46fab", 00:33:45.185 "is_configured": true, 00:33:45.185 "data_offset": 0, 00:33:45.185 "data_size": 65536 00:33:45.185 }, 00:33:45.185 { 00:33:45.185 "name": "BaseBdev4", 00:33:45.185 "uuid": "9137178c-adb2-5a49-a42f-fa5984282478", 00:33:45.185 "is_configured": true, 00:33:45.185 "data_offset": 0, 00:33:45.185 "data_size": 65536 00:33:45.185 } 00:33:45.185 ] 00:33:45.185 }' 00:33:45.185 07:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:45.443 07:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:45.443 07:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:45.443 07:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:45.443 07:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:45.703 [2024-07-12 07:44:19.352476] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:45.703 [2024-07-12 07:44:19.377579] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:45.703 [2024-07-12 07:44:19.377778] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:45.703 [2024-07-12 07:44:19.377828] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:45.703 [2024-07-12 07:44:19.377899] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:45.703 07:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:45.703 07:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:45.703 07:44:19 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:45.703 07:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:45.703 07:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:45.703 07:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:45.703 07:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:45.703 07:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:45.703 07:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:45.703 07:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:45.703 07:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:45.703 07:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:45.962 07:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:45.962 "name": "raid_bdev1", 00:33:45.962 "uuid": "2ca86c6b-ad9a-49d9-84b6-e336a7309de0", 00:33:45.962 "strip_size_kb": 64, 00:33:45.962 "state": "online", 00:33:45.962 "raid_level": "raid5f", 00:33:45.962 "superblock": false, 00:33:45.962 "num_base_bdevs": 4, 00:33:45.962 "num_base_bdevs_discovered": 3, 00:33:45.962 "num_base_bdevs_operational": 3, 00:33:45.962 "base_bdevs_list": [ 00:33:45.962 { 00:33:45.962 "name": null, 00:33:45.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:45.962 "is_configured": false, 00:33:45.962 "data_offset": 0, 00:33:45.962 "data_size": 65536 00:33:45.962 }, 00:33:45.962 { 00:33:45.962 "name": "BaseBdev2", 00:33:45.962 "uuid": "e72feaa4-4686-531e-9914-83aac4ce4b06", 00:33:45.962 "is_configured": true, 00:33:45.962 "data_offset": 0, 00:33:45.962 "data_size": 65536 00:33:45.962 }, 00:33:45.962 { 00:33:45.962 "name": "BaseBdev3", 00:33:45.962 "uuid": "73624f6a-0f15-5ff1-8745-6f5b98f46fab", 00:33:45.962 "is_configured": true, 00:33:45.962 "data_offset": 0, 00:33:45.962 "data_size": 65536 00:33:45.962 }, 00:33:45.962 { 00:33:45.962 "name": "BaseBdev4", 00:33:45.962 "uuid": "9137178c-adb2-5a49-a42f-fa5984282478", 00:33:45.962 "is_configured": true, 00:33:45.962 "data_offset": 0, 00:33:45.962 "data_size": 65536 00:33:45.962 } 00:33:45.962 ] 00:33:45.962 }' 00:33:45.962 07:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:45.962 07:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:46.529 07:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:46.529 07:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:46.529 07:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:46.529 07:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:46.529 07:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:46.529 07:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:46.529 07:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:33:46.529 07:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:46.529 "name": "raid_bdev1", 00:33:46.529 "uuid": "2ca86c6b-ad9a-49d9-84b6-e336a7309de0", 00:33:46.529 "strip_size_kb": 64, 00:33:46.529 "state": "online", 00:33:46.529 "raid_level": "raid5f", 00:33:46.529 "superblock": false, 00:33:46.529 "num_base_bdevs": 4, 00:33:46.529 "num_base_bdevs_discovered": 3, 00:33:46.529 "num_base_bdevs_operational": 3, 00:33:46.529 "base_bdevs_list": [ 00:33:46.529 { 00:33:46.529 "name": null, 00:33:46.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:46.529 "is_configured": false, 00:33:46.529 "data_offset": 0, 00:33:46.529 "data_size": 65536 00:33:46.529 }, 00:33:46.530 { 00:33:46.530 "name": "BaseBdev2", 00:33:46.530 "uuid": "e72feaa4-4686-531e-9914-83aac4ce4b06", 00:33:46.530 "is_configured": true, 00:33:46.530 "data_offset": 0, 00:33:46.530 "data_size": 65536 00:33:46.530 }, 00:33:46.530 { 00:33:46.530 "name": "BaseBdev3", 00:33:46.530 "uuid": "73624f6a-0f15-5ff1-8745-6f5b98f46fab", 00:33:46.530 "is_configured": true, 00:33:46.530 "data_offset": 0, 00:33:46.530 "data_size": 65536 00:33:46.530 }, 00:33:46.530 { 00:33:46.530 "name": "BaseBdev4", 00:33:46.530 "uuid": "9137178c-adb2-5a49-a42f-fa5984282478", 00:33:46.530 "is_configured": true, 00:33:46.530 "data_offset": 0, 00:33:46.530 "data_size": 65536 00:33:46.530 } 00:33:46.530 ] 00:33:46.530 }' 00:33:46.530 07:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:46.788 07:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:46.788 07:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:46.788 07:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:46.788 07:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:46.788 [2024-07-12 07:44:20.617575] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:46.788 [2024-07-12 07:44:20.620863] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027f40 00:33:46.788 [2024-07-12 07:44:20.623238] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:46.788 07:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:48.203 "name": "raid_bdev1", 00:33:48.203 
"uuid": "2ca86c6b-ad9a-49d9-84b6-e336a7309de0", 00:33:48.203 "strip_size_kb": 64, 00:33:48.203 "state": "online", 00:33:48.203 "raid_level": "raid5f", 00:33:48.203 "superblock": false, 00:33:48.203 "num_base_bdevs": 4, 00:33:48.203 "num_base_bdevs_discovered": 4, 00:33:48.203 "num_base_bdevs_operational": 4, 00:33:48.203 "process": { 00:33:48.203 "type": "rebuild", 00:33:48.203 "target": "spare", 00:33:48.203 "progress": { 00:33:48.203 "blocks": 21120, 00:33:48.203 "percent": 10 00:33:48.203 } 00:33:48.203 }, 00:33:48.203 "base_bdevs_list": [ 00:33:48.203 { 00:33:48.203 "name": "spare", 00:33:48.203 "uuid": "a93bfcda-618e-5c66-8579-f60f93386dc9", 00:33:48.203 "is_configured": true, 00:33:48.203 "data_offset": 0, 00:33:48.203 "data_size": 65536 00:33:48.203 }, 00:33:48.203 { 00:33:48.203 "name": "BaseBdev2", 00:33:48.203 "uuid": "e72feaa4-4686-531e-9914-83aac4ce4b06", 00:33:48.203 "is_configured": true, 00:33:48.203 "data_offset": 0, 00:33:48.203 "data_size": 65536 00:33:48.203 }, 00:33:48.203 { 00:33:48.203 "name": "BaseBdev3", 00:33:48.203 "uuid": "73624f6a-0f15-5ff1-8745-6f5b98f46fab", 00:33:48.203 "is_configured": true, 00:33:48.203 "data_offset": 0, 00:33:48.203 "data_size": 65536 00:33:48.203 }, 00:33:48.203 { 00:33:48.203 "name": "BaseBdev4", 00:33:48.203 "uuid": "9137178c-adb2-5a49-a42f-fa5984282478", 00:33:48.203 "is_configured": true, 00:33:48.203 "data_offset": 0, 00:33:48.203 "data_size": 65536 00:33:48.203 } 00:33:48.203 ] 00:33:48.203 }' 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=1172 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:48.203 07:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:48.462 07:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:48.462 "name": "raid_bdev1", 00:33:48.462 "uuid": "2ca86c6b-ad9a-49d9-84b6-e336a7309de0", 00:33:48.462 "strip_size_kb": 64, 
00:33:48.462 "state": "online", 00:33:48.462 "raid_level": "raid5f", 00:33:48.462 "superblock": false, 00:33:48.462 "num_base_bdevs": 4, 00:33:48.462 "num_base_bdevs_discovered": 4, 00:33:48.462 "num_base_bdevs_operational": 4, 00:33:48.462 "process": { 00:33:48.462 "type": "rebuild", 00:33:48.462 "target": "spare", 00:33:48.462 "progress": { 00:33:48.462 "blocks": 28800, 00:33:48.462 "percent": 14 00:33:48.462 } 00:33:48.462 }, 00:33:48.462 "base_bdevs_list": [ 00:33:48.462 { 00:33:48.462 "name": "spare", 00:33:48.462 "uuid": "a93bfcda-618e-5c66-8579-f60f93386dc9", 00:33:48.462 "is_configured": true, 00:33:48.462 "data_offset": 0, 00:33:48.462 "data_size": 65536 00:33:48.462 }, 00:33:48.462 { 00:33:48.462 "name": "BaseBdev2", 00:33:48.462 "uuid": "e72feaa4-4686-531e-9914-83aac4ce4b06", 00:33:48.462 "is_configured": true, 00:33:48.462 "data_offset": 0, 00:33:48.462 "data_size": 65536 00:33:48.462 }, 00:33:48.462 { 00:33:48.462 "name": "BaseBdev3", 00:33:48.462 "uuid": "73624f6a-0f15-5ff1-8745-6f5b98f46fab", 00:33:48.462 "is_configured": true, 00:33:48.462 "data_offset": 0, 00:33:48.462 "data_size": 65536 00:33:48.462 }, 00:33:48.462 { 00:33:48.462 "name": "BaseBdev4", 00:33:48.462 "uuid": "9137178c-adb2-5a49-a42f-fa5984282478", 00:33:48.462 "is_configured": true, 00:33:48.462 "data_offset": 0, 00:33:48.462 "data_size": 65536 00:33:48.462 } 00:33:48.462 ] 00:33:48.462 }' 00:33:48.462 07:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:48.462 07:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:48.462 07:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:48.462 07:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:48.462 07:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:49.396 07:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:49.396 07:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:49.396 07:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:49.397 07:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:49.397 07:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:49.397 07:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:49.397 07:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:49.397 07:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:49.655 07:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:49.655 "name": "raid_bdev1", 00:33:49.655 "uuid": "2ca86c6b-ad9a-49d9-84b6-e336a7309de0", 00:33:49.655 "strip_size_kb": 64, 00:33:49.655 "state": "online", 00:33:49.655 "raid_level": "raid5f", 00:33:49.655 "superblock": false, 00:33:49.655 "num_base_bdevs": 4, 00:33:49.655 "num_base_bdevs_discovered": 4, 00:33:49.655 "num_base_bdevs_operational": 4, 00:33:49.655 "process": { 00:33:49.655 "type": "rebuild", 00:33:49.655 "target": "spare", 00:33:49.655 "progress": { 00:33:49.655 "blocks": 53760, 00:33:49.655 "percent": 27 00:33:49.655 } 
00:33:49.655 }, 00:33:49.655 "base_bdevs_list": [ 00:33:49.655 { 00:33:49.655 "name": "spare", 00:33:49.655 "uuid": "a93bfcda-618e-5c66-8579-f60f93386dc9", 00:33:49.655 "is_configured": true, 00:33:49.655 "data_offset": 0, 00:33:49.655 "data_size": 65536 00:33:49.655 }, 00:33:49.655 { 00:33:49.655 "name": "BaseBdev2", 00:33:49.655 "uuid": "e72feaa4-4686-531e-9914-83aac4ce4b06", 00:33:49.655 "is_configured": true, 00:33:49.655 "data_offset": 0, 00:33:49.655 "data_size": 65536 00:33:49.655 }, 00:33:49.655 { 00:33:49.655 "name": "BaseBdev3", 00:33:49.655 "uuid": "73624f6a-0f15-5ff1-8745-6f5b98f46fab", 00:33:49.655 "is_configured": true, 00:33:49.655 "data_offset": 0, 00:33:49.655 "data_size": 65536 00:33:49.655 }, 00:33:49.655 { 00:33:49.655 "name": "BaseBdev4", 00:33:49.655 "uuid": "9137178c-adb2-5a49-a42f-fa5984282478", 00:33:49.655 "is_configured": true, 00:33:49.655 "data_offset": 0, 00:33:49.655 "data_size": 65536 00:33:49.655 } 00:33:49.655 ] 00:33:49.655 }' 00:33:49.655 07:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:49.913 07:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:49.913 07:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:49.913 07:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:49.913 07:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:50.847 07:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:50.847 07:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:50.847 07:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:50.847 07:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:50.847 07:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:50.847 07:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:50.847 07:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:50.847 07:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:51.106 07:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:51.106 "name": "raid_bdev1", 00:33:51.106 "uuid": "2ca86c6b-ad9a-49d9-84b6-e336a7309de0", 00:33:51.106 "strip_size_kb": 64, 00:33:51.106 "state": "online", 00:33:51.106 "raid_level": "raid5f", 00:33:51.106 "superblock": false, 00:33:51.106 "num_base_bdevs": 4, 00:33:51.106 "num_base_bdevs_discovered": 4, 00:33:51.106 "num_base_bdevs_operational": 4, 00:33:51.106 "process": { 00:33:51.106 "type": "rebuild", 00:33:51.106 "target": "spare", 00:33:51.106 "progress": { 00:33:51.106 "blocks": 78720, 00:33:51.106 "percent": 40 00:33:51.106 } 00:33:51.106 }, 00:33:51.106 "base_bdevs_list": [ 00:33:51.106 { 00:33:51.106 "name": "spare", 00:33:51.106 "uuid": "a93bfcda-618e-5c66-8579-f60f93386dc9", 00:33:51.106 "is_configured": true, 00:33:51.106 "data_offset": 0, 00:33:51.106 "data_size": 65536 00:33:51.106 }, 00:33:51.106 { 00:33:51.106 "name": "BaseBdev2", 00:33:51.106 "uuid": "e72feaa4-4686-531e-9914-83aac4ce4b06", 00:33:51.106 "is_configured": true, 
00:33:51.106 "data_offset": 0, 00:33:51.106 "data_size": 65536 00:33:51.106 }, 00:33:51.106 { 00:33:51.106 "name": "BaseBdev3", 00:33:51.106 "uuid": "73624f6a-0f15-5ff1-8745-6f5b98f46fab", 00:33:51.106 "is_configured": true, 00:33:51.106 "data_offset": 0, 00:33:51.106 "data_size": 65536 00:33:51.106 }, 00:33:51.106 { 00:33:51.106 "name": "BaseBdev4", 00:33:51.106 "uuid": "9137178c-adb2-5a49-a42f-fa5984282478", 00:33:51.106 "is_configured": true, 00:33:51.106 "data_offset": 0, 00:33:51.106 "data_size": 65536 00:33:51.106 } 00:33:51.106 ] 00:33:51.106 }' 00:33:51.106 07:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:51.106 07:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:51.106 07:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:51.106 07:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:51.106 07:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:52.481 07:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:52.481 07:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:52.481 07:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:52.481 07:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:52.481 07:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:52.481 07:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:52.481 07:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:52.481 07:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:52.481 07:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:52.481 "name": "raid_bdev1", 00:33:52.481 "uuid": "2ca86c6b-ad9a-49d9-84b6-e336a7309de0", 00:33:52.481 "strip_size_kb": 64, 00:33:52.481 "state": "online", 00:33:52.481 "raid_level": "raid5f", 00:33:52.481 "superblock": false, 00:33:52.481 "num_base_bdevs": 4, 00:33:52.481 "num_base_bdevs_discovered": 4, 00:33:52.481 "num_base_bdevs_operational": 4, 00:33:52.481 "process": { 00:33:52.481 "type": "rebuild", 00:33:52.481 "target": "spare", 00:33:52.481 "progress": { 00:33:52.481 "blocks": 103680, 00:33:52.481 "percent": 52 00:33:52.481 } 00:33:52.481 }, 00:33:52.481 "base_bdevs_list": [ 00:33:52.481 { 00:33:52.481 "name": "spare", 00:33:52.481 "uuid": "a93bfcda-618e-5c66-8579-f60f93386dc9", 00:33:52.481 "is_configured": true, 00:33:52.481 "data_offset": 0, 00:33:52.481 "data_size": 65536 00:33:52.481 }, 00:33:52.481 { 00:33:52.481 "name": "BaseBdev2", 00:33:52.481 "uuid": "e72feaa4-4686-531e-9914-83aac4ce4b06", 00:33:52.481 "is_configured": true, 00:33:52.481 "data_offset": 0, 00:33:52.481 "data_size": 65536 00:33:52.481 }, 00:33:52.481 { 00:33:52.481 "name": "BaseBdev3", 00:33:52.481 "uuid": "73624f6a-0f15-5ff1-8745-6f5b98f46fab", 00:33:52.481 "is_configured": true, 00:33:52.481 "data_offset": 0, 00:33:52.481 "data_size": 65536 00:33:52.481 }, 00:33:52.481 { 00:33:52.481 "name": "BaseBdev4", 00:33:52.481 "uuid": "9137178c-adb2-5a49-a42f-fa5984282478", 
00:33:52.481 "is_configured": true, 00:33:52.481 "data_offset": 0, 00:33:52.481 "data_size": 65536 00:33:52.481 } 00:33:52.481 ] 00:33:52.481 }' 00:33:52.481 07:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:52.481 07:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:52.481 07:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:52.481 07:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:52.481 07:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:53.419 07:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:53.419 07:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:53.419 07:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:53.419 07:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:53.419 07:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:53.419 07:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:53.419 07:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:53.419 07:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:53.679 07:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:53.679 "name": "raid_bdev1", 00:33:53.679 "uuid": "2ca86c6b-ad9a-49d9-84b6-e336a7309de0", 00:33:53.679 "strip_size_kb": 64, 00:33:53.679 "state": "online", 00:33:53.679 "raid_level": "raid5f", 00:33:53.679 "superblock": false, 00:33:53.679 "num_base_bdevs": 4, 00:33:53.679 "num_base_bdevs_discovered": 4, 00:33:53.679 "num_base_bdevs_operational": 4, 00:33:53.679 "process": { 00:33:53.679 "type": "rebuild", 00:33:53.679 "target": "spare", 00:33:53.679 "progress": { 00:33:53.679 "blocks": 128640, 00:33:53.679 "percent": 65 00:33:53.679 } 00:33:53.679 }, 00:33:53.679 "base_bdevs_list": [ 00:33:53.679 { 00:33:53.679 "name": "spare", 00:33:53.679 "uuid": "a93bfcda-618e-5c66-8579-f60f93386dc9", 00:33:53.679 "is_configured": true, 00:33:53.679 "data_offset": 0, 00:33:53.679 "data_size": 65536 00:33:53.679 }, 00:33:53.679 { 00:33:53.679 "name": "BaseBdev2", 00:33:53.679 "uuid": "e72feaa4-4686-531e-9914-83aac4ce4b06", 00:33:53.679 "is_configured": true, 00:33:53.679 "data_offset": 0, 00:33:53.679 "data_size": 65536 00:33:53.679 }, 00:33:53.679 { 00:33:53.679 "name": "BaseBdev3", 00:33:53.679 "uuid": "73624f6a-0f15-5ff1-8745-6f5b98f46fab", 00:33:53.679 "is_configured": true, 00:33:53.679 "data_offset": 0, 00:33:53.679 "data_size": 65536 00:33:53.679 }, 00:33:53.679 { 00:33:53.679 "name": "BaseBdev4", 00:33:53.679 "uuid": "9137178c-adb2-5a49-a42f-fa5984282478", 00:33:53.679 "is_configured": true, 00:33:53.679 "data_offset": 0, 00:33:53.679 "data_size": 65536 00:33:53.679 } 00:33:53.679 ] 00:33:53.679 }' 00:33:53.679 07:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:53.679 07:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:53.679 07:44:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:53.679 07:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:53.679 07:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:55.060 07:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:55.060 07:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:55.060 07:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:55.060 07:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:55.060 07:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:55.060 07:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:55.060 07:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:55.060 07:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:55.060 07:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:55.060 "name": "raid_bdev1", 00:33:55.060 "uuid": "2ca86c6b-ad9a-49d9-84b6-e336a7309de0", 00:33:55.060 "strip_size_kb": 64, 00:33:55.060 "state": "online", 00:33:55.060 "raid_level": "raid5f", 00:33:55.060 "superblock": false, 00:33:55.060 "num_base_bdevs": 4, 00:33:55.060 "num_base_bdevs_discovered": 4, 00:33:55.060 "num_base_bdevs_operational": 4, 00:33:55.060 "process": { 00:33:55.060 "type": "rebuild", 00:33:55.060 "target": "spare", 00:33:55.060 "progress": { 00:33:55.060 "blocks": 153600, 00:33:55.060 "percent": 78 00:33:55.060 } 00:33:55.060 }, 00:33:55.060 "base_bdevs_list": [ 00:33:55.060 { 00:33:55.060 "name": "spare", 00:33:55.060 "uuid": "a93bfcda-618e-5c66-8579-f60f93386dc9", 00:33:55.060 "is_configured": true, 00:33:55.060 "data_offset": 0, 00:33:55.060 "data_size": 65536 00:33:55.060 }, 00:33:55.060 { 00:33:55.060 "name": "BaseBdev2", 00:33:55.060 "uuid": "e72feaa4-4686-531e-9914-83aac4ce4b06", 00:33:55.060 "is_configured": true, 00:33:55.060 "data_offset": 0, 00:33:55.060 "data_size": 65536 00:33:55.060 }, 00:33:55.060 { 00:33:55.060 "name": "BaseBdev3", 00:33:55.060 "uuid": "73624f6a-0f15-5ff1-8745-6f5b98f46fab", 00:33:55.060 "is_configured": true, 00:33:55.060 "data_offset": 0, 00:33:55.060 "data_size": 65536 00:33:55.060 }, 00:33:55.060 { 00:33:55.060 "name": "BaseBdev4", 00:33:55.060 "uuid": "9137178c-adb2-5a49-a42f-fa5984282478", 00:33:55.060 "is_configured": true, 00:33:55.060 "data_offset": 0, 00:33:55.060 "data_size": 65536 00:33:55.060 } 00:33:55.060 ] 00:33:55.060 }' 00:33:55.060 07:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:55.060 07:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:55.060 07:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:55.060 07:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:55.060 07:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:55.998 07:44:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:55.998 07:44:29 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:55.998 07:44:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:55.998 07:44:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:55.998 07:44:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:55.998 07:44:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:55.998 07:44:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:55.998 07:44:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:56.258 07:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:56.258 "name": "raid_bdev1", 00:33:56.258 "uuid": "2ca86c6b-ad9a-49d9-84b6-e336a7309de0", 00:33:56.258 "strip_size_kb": 64, 00:33:56.258 "state": "online", 00:33:56.258 "raid_level": "raid5f", 00:33:56.258 "superblock": false, 00:33:56.258 "num_base_bdevs": 4, 00:33:56.258 "num_base_bdevs_discovered": 4, 00:33:56.258 "num_base_bdevs_operational": 4, 00:33:56.258 "process": { 00:33:56.258 "type": "rebuild", 00:33:56.258 "target": "spare", 00:33:56.258 "progress": { 00:33:56.258 "blocks": 178560, 00:33:56.258 "percent": 90 00:33:56.258 } 00:33:56.258 }, 00:33:56.258 "base_bdevs_list": [ 00:33:56.258 { 00:33:56.258 "name": "spare", 00:33:56.258 "uuid": "a93bfcda-618e-5c66-8579-f60f93386dc9", 00:33:56.258 "is_configured": true, 00:33:56.258 "data_offset": 0, 00:33:56.258 "data_size": 65536 00:33:56.258 }, 00:33:56.258 { 00:33:56.258 "name": "BaseBdev2", 00:33:56.258 "uuid": "e72feaa4-4686-531e-9914-83aac4ce4b06", 00:33:56.258 "is_configured": true, 00:33:56.258 "data_offset": 0, 00:33:56.258 "data_size": 65536 00:33:56.258 }, 00:33:56.258 { 00:33:56.258 "name": "BaseBdev3", 00:33:56.258 "uuid": "73624f6a-0f15-5ff1-8745-6f5b98f46fab", 00:33:56.258 "is_configured": true, 00:33:56.258 "data_offset": 0, 00:33:56.258 "data_size": 65536 00:33:56.258 }, 00:33:56.258 { 00:33:56.258 "name": "BaseBdev4", 00:33:56.258 "uuid": "9137178c-adb2-5a49-a42f-fa5984282478", 00:33:56.258 "is_configured": true, 00:33:56.258 "data_offset": 0, 00:33:56.258 "data_size": 65536 00:33:56.258 } 00:33:56.258 ] 00:33:56.258 }' 00:33:56.258 07:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:56.258 07:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:56.259 07:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:56.518 07:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:56.518 07:44:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:57.454 [2024-07-12 07:44:30.984333] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:57.454 [2024-07-12 07:44:30.984556] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:57.454 [2024-07-12 07:44:30.984754] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:57.454 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:57.454 07:44:31 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:57.454 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:57.454 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:57.454 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:57.454 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:57.454 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:57.454 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:57.712 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:57.712 "name": "raid_bdev1", 00:33:57.712 "uuid": "2ca86c6b-ad9a-49d9-84b6-e336a7309de0", 00:33:57.712 "strip_size_kb": 64, 00:33:57.712 "state": "online", 00:33:57.712 "raid_level": "raid5f", 00:33:57.712 "superblock": false, 00:33:57.712 "num_base_bdevs": 4, 00:33:57.712 "num_base_bdevs_discovered": 4, 00:33:57.712 "num_base_bdevs_operational": 4, 00:33:57.712 "base_bdevs_list": [ 00:33:57.712 { 00:33:57.712 "name": "spare", 00:33:57.712 "uuid": "a93bfcda-618e-5c66-8579-f60f93386dc9", 00:33:57.712 "is_configured": true, 00:33:57.712 "data_offset": 0, 00:33:57.712 "data_size": 65536 00:33:57.712 }, 00:33:57.712 { 00:33:57.712 "name": "BaseBdev2", 00:33:57.712 "uuid": "e72feaa4-4686-531e-9914-83aac4ce4b06", 00:33:57.712 "is_configured": true, 00:33:57.712 "data_offset": 0, 00:33:57.712 "data_size": 65536 00:33:57.712 }, 00:33:57.712 { 00:33:57.712 "name": "BaseBdev3", 00:33:57.712 "uuid": "73624f6a-0f15-5ff1-8745-6f5b98f46fab", 00:33:57.712 "is_configured": true, 00:33:57.712 "data_offset": 0, 00:33:57.712 "data_size": 65536 00:33:57.712 }, 00:33:57.712 { 00:33:57.712 "name": "BaseBdev4", 00:33:57.712 "uuid": "9137178c-adb2-5a49-a42f-fa5984282478", 00:33:57.712 "is_configured": true, 00:33:57.712 "data_offset": 0, 00:33:57.712 "data_size": 65536 00:33:57.712 } 00:33:57.712 ] 00:33:57.712 }' 00:33:57.712 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:57.712 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:57.712 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:57.712 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:33:57.712 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:33:57.712 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:57.712 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:57.712 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:57.712 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:57.712 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:57.712 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:57.712 07:44:31 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:57.970 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:57.970 "name": "raid_bdev1", 00:33:57.970 "uuid": "2ca86c6b-ad9a-49d9-84b6-e336a7309de0", 00:33:57.970 "strip_size_kb": 64, 00:33:57.970 "state": "online", 00:33:57.970 "raid_level": "raid5f", 00:33:57.970 "superblock": false, 00:33:57.970 "num_base_bdevs": 4, 00:33:57.970 "num_base_bdevs_discovered": 4, 00:33:57.970 "num_base_bdevs_operational": 4, 00:33:57.970 "base_bdevs_list": [ 00:33:57.970 { 00:33:57.970 "name": "spare", 00:33:57.970 "uuid": "a93bfcda-618e-5c66-8579-f60f93386dc9", 00:33:57.970 "is_configured": true, 00:33:57.970 "data_offset": 0, 00:33:57.970 "data_size": 65536 00:33:57.970 }, 00:33:57.970 { 00:33:57.970 "name": "BaseBdev2", 00:33:57.970 "uuid": "e72feaa4-4686-531e-9914-83aac4ce4b06", 00:33:57.970 "is_configured": true, 00:33:57.970 "data_offset": 0, 00:33:57.970 "data_size": 65536 00:33:57.970 }, 00:33:57.970 { 00:33:57.970 "name": "BaseBdev3", 00:33:57.970 "uuid": "73624f6a-0f15-5ff1-8745-6f5b98f46fab", 00:33:57.970 "is_configured": true, 00:33:57.970 "data_offset": 0, 00:33:57.970 "data_size": 65536 00:33:57.970 }, 00:33:57.970 { 00:33:57.970 "name": "BaseBdev4", 00:33:57.970 "uuid": "9137178c-adb2-5a49-a42f-fa5984282478", 00:33:57.970 "is_configured": true, 00:33:57.970 "data_offset": 0, 00:33:57.970 "data_size": 65536 00:33:57.970 } 00:33:57.970 ] 00:33:57.970 }' 00:33:57.970 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:57.970 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:57.970 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:57.970 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:57.970 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:33:57.970 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:57.970 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:57.970 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:57.970 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:57.970 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:57.970 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:57.970 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:57.970 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:57.970 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:57.970 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:57.970 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:58.227 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:58.227 "name": "raid_bdev1", 00:33:58.227 "uuid": "2ca86c6b-ad9a-49d9-84b6-e336a7309de0", 00:33:58.227 "strip_size_kb": 
64, 00:33:58.227 "state": "online", 00:33:58.227 "raid_level": "raid5f", 00:33:58.227 "superblock": false, 00:33:58.227 "num_base_bdevs": 4, 00:33:58.227 "num_base_bdevs_discovered": 4, 00:33:58.227 "num_base_bdevs_operational": 4, 00:33:58.227 "base_bdevs_list": [ 00:33:58.227 { 00:33:58.227 "name": "spare", 00:33:58.227 "uuid": "a93bfcda-618e-5c66-8579-f60f93386dc9", 00:33:58.227 "is_configured": true, 00:33:58.227 "data_offset": 0, 00:33:58.227 "data_size": 65536 00:33:58.227 }, 00:33:58.227 { 00:33:58.227 "name": "BaseBdev2", 00:33:58.227 "uuid": "e72feaa4-4686-531e-9914-83aac4ce4b06", 00:33:58.227 "is_configured": true, 00:33:58.227 "data_offset": 0, 00:33:58.227 "data_size": 65536 00:33:58.227 }, 00:33:58.227 { 00:33:58.227 "name": "BaseBdev3", 00:33:58.227 "uuid": "73624f6a-0f15-5ff1-8745-6f5b98f46fab", 00:33:58.227 "is_configured": true, 00:33:58.227 "data_offset": 0, 00:33:58.227 "data_size": 65536 00:33:58.227 }, 00:33:58.228 { 00:33:58.228 "name": "BaseBdev4", 00:33:58.228 "uuid": "9137178c-adb2-5a49-a42f-fa5984282478", 00:33:58.228 "is_configured": true, 00:33:58.228 "data_offset": 0, 00:33:58.228 "data_size": 65536 00:33:58.228 } 00:33:58.228 ] 00:33:58.228 }' 00:33:58.228 07:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:58.228 07:44:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:58.795 07:44:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:59.054 [2024-07-12 07:44:32.701789] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:59.054 [2024-07-12 07:44:32.701924] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:59.054 [2024-07-12 07:44:32.702148] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:59.054 [2024-07-12 07:44:32.702324] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:59.054 [2024-07-12 07:44:32.702413] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:33:59.054 07:44:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:59.054 07:44:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:33:59.320 07:44:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:33:59.320 07:44:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:33:59.320 07:44:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:33:59.320 07:44:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:33:59.320 07:44:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:59.320 07:44:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:33:59.320 07:44:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:59.320 07:44:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:59.320 07:44:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:59.320 07:44:32 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:33:59.320 07:44:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:59.320 07:44:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:59.320 07:44:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:33:59.320 /dev/nbd0 00:33:59.320 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:59.320 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:59.320 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:33:59.320 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:33:59.320 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:33:59.320 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:33:59.320 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:33:59.320 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:33:59.320 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:33:59.320 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:33:59.320 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:59.320 1+0 records in 00:33:59.320 1+0 records out 00:33:59.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408632 s, 10.0 MB/s 00:33:59.320 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:59.581 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:33:59.581 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:59.581 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:33:59.581 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:33:59.581 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:59.581 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:59.581 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:33:59.839 /dev/nbd1 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 
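[editor's note] The waitfornbd exchanges above follow a fixed pattern: poll /proc/partitions until the kernel exposes the NBD device, then prove it is readable with a single direct-I/O read. A condensed sketch of that pattern, with the retry delay as an assumption (the real helper lives in test/common/autotest_common.sh and its exact back-off is not visible in this log):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed delay between polls; not shown in this log
        done
        # One 4 KiB O_DIRECT read must succeed and yield a non-empty file,
        # mirroring the dd/stat pair logged above.
        dd "if=/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ]
    }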
00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:59.839 1+0 records in 00:33:59.839 1+0 records out 00:33:59.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484043 s, 8.5 MB/s 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:59.839 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:34:00.098 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:00.098 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:00.098 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:00.098 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:00.098 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:00.098 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:00.098 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:34:00.098 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:34:00.098 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:00.098 07:44:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:34:00.357 07:44:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
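[editor's note] The cmp just logged is the heart of the rebuild check: BaseBdev1 (the original member's data) and spare (the rebuilt member) are both exported over NBD and byte-compared; since this variant runs without a superblock (data_offset 0 in the JSON above), cmp -i 0 correctly starts at offset 0 on both devices. The teardown now in progress is the mirror image of setup. Roughly, with rpc.py standing in for the full scripts/rpc.py path used throughout this log:

    rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
    rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
    cmp -i 0 /dev/nbd0 /dev/nbd1    # non-zero exit on the first differing byte
    rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
    rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
    # waitfornbd_exit then polls /proc/partitions until each entry disappears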
00:34:00.357 07:44:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:00.357 07:44:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:00.357 07:44:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:00.357 07:44:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:00.357 07:44:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:00.357 07:44:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:34:00.357 07:44:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:34:00.357 07:44:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:34:00.357 07:44:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 166378 00:34:00.357 07:44:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@946 -- # '[' -z 166378 ']' 00:34:00.357 07:44:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # kill -0 166378 00:34:00.357 07:44:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@951 -- # uname 00:34:00.357 07:44:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:00.357 07:44:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 166378 00:34:00.357 killing process with pid 166378 00:34:00.357 Received shutdown signal, test time was about 60.000000 seconds 00:34:00.357 00:34:00.357 Latency(us) 00:34:00.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:00.357 =================================================================================================================== 00:34:00.357 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:00.357 07:44:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:00.357 07:44:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:00.357 07:44:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 166378' 00:34:00.357 07:44:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@965 -- # kill 166378 00:34:00.357 07:44:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # wait 166378 00:34:00.357 [2024-07-12 07:44:34.119001] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:00.357 [2024-07-12 07:44:34.165220] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:00.616 07:44:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:34:00.616 00:34:00.616 real 0m24.005s 00:34:00.616 user 0m34.771s 00:34:00.616 sys 0m3.610s 00:34:00.616 ************************************ 00:34:00.616 END TEST raid5f_rebuild_test 00:34:00.616 ************************************ 00:34:00.616 07:44:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:00.616 07:44:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:00.616 07:44:34 bdev_raid -- bdev/bdev_raid.sh@891 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:34:00.616 07:44:34 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:34:00.616 07:44:34 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:00.616 07:44:34 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:34:00.616 ************************************ 00:34:00.616 START TEST raid5f_rebuild_test_sb 00:34:00.616 ************************************ 00:34:00.616 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid5f 4 true false true 00:34:00.616 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:34:00.616 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:34:00.616 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:34:00.616 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:34:00.616 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:34:00.616 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:34:00.616 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:34:00.616 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:34:00.616 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:34:00.616 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:34:00.616 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:34:00.616 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:34:00.616 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:34:00.616 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:34:00.616 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:34:00.616 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:34:00.875 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:34:00.875 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:34:00.875 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:34:00.875 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:34:00.875 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:34:00.875 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:34:00.875 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:34:00.875 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:34:00.875 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:34:00.875 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:34:00.875 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:34:00.875 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:34:00.875 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:34:00.875 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:34:00.875 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:34:00.875 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:34:00.876 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=166987 00:34:00.876 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 166987 /var/tmp/spdk-raid.sock 00:34:00.876 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@827 -- # '[' -z 166987 ']' 00:34:00.876 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:00.876 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:00.876 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:34:00.876 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:00.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:00.876 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:00.876 07:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:00.876 [2024-07-12 07:44:34.549717] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:34:00.876 [2024-07-12 07:44:34.550083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166987 ] 00:34:00.876 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:00.876 Zero copy mechanism will not be used. 
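[editor's note] The bdevperf invocation at @595 is worth decoding: the perf tool is started idle and the raid stack is only assembled afterwards, over its RPC socket, before any I/O begins. The flag glosses below are the usual bdevperf meanings and should be checked against this SPDK revision's usage text; the -T/-U flags from the log are left aside here. Note that -o 3M (3 MiB = 3145728 bytes) is exactly what triggers the zero-copy notice above:

    # -r: RPC socket path; -t 60: seconds of I/O; -w randrw: random mixed I/O;
    # -M 50: 50% reads; -o 3M: 3 MiB per I/O (hence the zero-copy notice);
    # -q 2: queue depth; -z: start idle until configured over RPC;
    # -L bdev_raid: enable raid debug logging
    ./build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -t 60 -w randrw -M 50 -o 3M -q 2 -z -L bdev_raid &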
00:34:00.876 [2024-07-12 07:44:34.691432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.876 [2024-07-12 07:44:34.732731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:01.135 [2024-07-12 07:44:34.774510] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:01.704 07:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:01.704 07:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # return 0 00:34:01.704 07:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:34:01.704 07:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:01.963 BaseBdev1_malloc 00:34:01.963 07:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:01.963 [2024-07-12 07:44:35.832038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:01.963 [2024-07-12 07:44:35.832296] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:01.963 [2024-07-12 07:44:35.832373] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:34:01.963 [2024-07-12 07:44:35.832500] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:01.963 [2024-07-12 07:44:35.834978] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:01.963 [2024-07-12 07:44:35.835149] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:01.963 BaseBdev1 00:34:02.223 07:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:34:02.223 07:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:02.223 BaseBdev2_malloc 00:34:02.223 07:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:34:02.482 [2024-07-12 07:44:36.260500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:34:02.482 [2024-07-12 07:44:36.260694] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:02.482 [2024-07-12 07:44:36.260757] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:34:02.482 [2024-07-12 07:44:36.260880] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:02.482 [2024-07-12 07:44:36.263070] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:02.482 [2024-07-12 07:44:36.263229] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:02.482 BaseBdev2 00:34:02.482 07:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:34:02.482 07:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:34:02.742 BaseBdev3_malloc 00:34:02.742 07:44:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:34:03.001 [2024-07-12 07:44:36.655766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:34:03.001 [2024-07-12 07:44:36.655962] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:03.001 [2024-07-12 07:44:36.656032] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:03.001 [2024-07-12 07:44:36.656143] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:03.001 [2024-07-12 07:44:36.658368] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:03.001 [2024-07-12 07:44:36.658533] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:34:03.001 BaseBdev3 00:34:03.001 07:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:34:03.001 07:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:34:03.261 BaseBdev4_malloc 00:34:03.261 07:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:34:03.261 [2024-07-12 07:44:37.100372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:34:03.261 [2024-07-12 07:44:37.100568] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:03.261 [2024-07-12 07:44:37.100626] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:03.261 [2024-07-12 07:44:37.100737] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:03.261 [2024-07-12 07:44:37.102909] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:03.261 [2024-07-12 07:44:37.103092] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:34:03.261 BaseBdev4 00:34:03.261 07:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:34:03.521 spare_malloc 00:34:03.521 07:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:34:03.780 spare_delay 00:34:03.780 07:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:04.040 [2024-07-12 07:44:37.676908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:04.040 [2024-07-12 07:44:37.677105] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:04.040 [2024-07-12 07:44:37.677165] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:34:04.040 [2024-07-12 07:44:37.677289] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:04.040 [2024-07-12 07:44:37.679511] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:34:04.040 [2024-07-12 07:44:37.679692] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:04.040 spare 00:34:04.040 07:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:34:04.040 [2024-07-12 07:44:37.849025] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:04.040 [2024-07-12 07:44:37.851115] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:04.040 [2024-07-12 07:44:37.851289] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:04.040 [2024-07-12 07:44:37.851361] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:04.040 [2024-07-12 07:44:37.851621] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:34:04.040 [2024-07-12 07:44:37.851726] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:04.040 [2024-07-12 07:44:37.851879] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:34:04.040 [2024-07-12 07:44:37.852570] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:34:04.040 [2024-07-12 07:44:37.852683] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:34:04.040 [2024-07-12 07:44:37.852912] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:04.040 07:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:34:04.040 07:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:04.040 07:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:04.040 07:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:04.040 07:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:04.040 07:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:04.040 07:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:04.040 07:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:04.040 07:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:04.040 07:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:04.040 07:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:04.040 07:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:04.299 07:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:04.299 "name": "raid_bdev1", 00:34:04.299 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:04.299 "strip_size_kb": 64, 00:34:04.299 "state": "online", 00:34:04.299 "raid_level": "raid5f", 00:34:04.299 "superblock": true, 00:34:04.299 "num_base_bdevs": 4, 00:34:04.299 "num_base_bdevs_discovered": 4, 00:34:04.299 
"num_base_bdevs_operational": 4, 00:34:04.299 "base_bdevs_list": [ 00:34:04.299 { 00:34:04.299 "name": "BaseBdev1", 00:34:04.299 "uuid": "6a27eafc-059f-5cec-855d-c090355b72ab", 00:34:04.299 "is_configured": true, 00:34:04.299 "data_offset": 2048, 00:34:04.299 "data_size": 63488 00:34:04.299 }, 00:34:04.299 { 00:34:04.299 "name": "BaseBdev2", 00:34:04.299 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:04.299 "is_configured": true, 00:34:04.299 "data_offset": 2048, 00:34:04.299 "data_size": 63488 00:34:04.299 }, 00:34:04.299 { 00:34:04.299 "name": "BaseBdev3", 00:34:04.299 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:04.299 "is_configured": true, 00:34:04.299 "data_offset": 2048, 00:34:04.299 "data_size": 63488 00:34:04.299 }, 00:34:04.299 { 00:34:04.299 "name": "BaseBdev4", 00:34:04.299 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:04.299 "is_configured": true, 00:34:04.299 "data_offset": 2048, 00:34:04.299 "data_size": 63488 00:34:04.299 } 00:34:04.299 ] 00:34:04.299 }' 00:34:04.299 07:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:04.299 07:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.867 07:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:04.867 07:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:34:05.127 [2024-07-12 07:44:38.893239] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:05.127 07:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=190464 00:34:05.127 07:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:05.127 07:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:34:05.387 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:34:05.387 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:34:05.387 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:34:05.387 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:34:05.387 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:34:05.387 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:05.387 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:34:05.387 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:05.387 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:05.387 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:05.387 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:34:05.387 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:05.387 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:05.387 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:34:05.647 [2024-07-12 07:44:39.345245] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:34:05.647 /dev/nbd0 00:34:05.647 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:05.647 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:05.647 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:34:05.647 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:34:05.648 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:34:05.648 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:34:05.648 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:34:05.648 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:34:05.648 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:34:05.648 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:34:05.648 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:05.648 1+0 records in 00:34:05.648 1+0 records out 00:34:05.648 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342646 s, 12.0 MB/s 00:34:05.648 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:05.648 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:34:05.648 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:05.648 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:34:05.648 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:34:05.648 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:05.648 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:05.648 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:34:05.648 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # write_unit_size=384 00:34:05.648 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # echo 192 00:34:05.648 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:34:06.217 496+0 records in 00:34:06.217 496+0 records out 00:34:06.217 97517568 bytes (98 MB, 93 MiB) copied, 0.51455 s, 190 MB/s 00:34:06.217 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:34:06.217 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:06.217 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:06.217 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:06.217 07:44:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:34:06.217 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:06.217 07:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:34:06.476 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:06.476 [2024-07-12 07:44:40.149083] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:06.476 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:06.476 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:06.476 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:06.476 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:06.476 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:06.476 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:34:06.476 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:34:06.476 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:34:06.476 [2024-07-12 07:44:40.320623] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:06.476 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:06.476 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:06.476 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:06.476 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:06.476 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:06.476 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:06.476 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:06.476 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:06.476 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:06.476 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:06.476 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:06.476 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:06.736 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:06.736 "name": "raid_bdev1", 00:34:06.736 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:06.736 "strip_size_kb": 64, 00:34:06.736 "state": "online", 00:34:06.736 "raid_level": "raid5f", 00:34:06.736 "superblock": true, 00:34:06.736 "num_base_bdevs": 4, 00:34:06.736 "num_base_bdevs_discovered": 3, 00:34:06.736 "num_base_bdevs_operational": 3, 00:34:06.736 "base_bdevs_list": [ 00:34:06.736 { 00:34:06.736 "name": 
null, 00:34:06.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:06.736 "is_configured": false, 00:34:06.736 "data_offset": 2048, 00:34:06.736 "data_size": 63488 00:34:06.736 }, 00:34:06.736 { 00:34:06.736 "name": "BaseBdev2", 00:34:06.736 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:06.736 "is_configured": true, 00:34:06.736 "data_offset": 2048, 00:34:06.736 "data_size": 63488 00:34:06.736 }, 00:34:06.736 { 00:34:06.736 "name": "BaseBdev3", 00:34:06.736 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:06.736 "is_configured": true, 00:34:06.736 "data_offset": 2048, 00:34:06.736 "data_size": 63488 00:34:06.736 }, 00:34:06.736 { 00:34:06.736 "name": "BaseBdev4", 00:34:06.736 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:06.736 "is_configured": true, 00:34:06.736 "data_offset": 2048, 00:34:06.736 "data_size": 63488 00:34:06.736 } 00:34:06.736 ] 00:34:06.736 }' 00:34:06.736 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:06.736 07:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:07.305 07:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:34:07.564 [2024-07-12 07:44:41.288791] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:07.564 [2024-07-12 07:44:41.292248] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000270a0 00:34:07.564 [2024-07-12 07:44:41.294855] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:07.564 07:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:34:08.502 07:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:08.502 07:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:08.502 07:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:08.502 07:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:08.502 07:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:08.502 07:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:08.502 07:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:08.761 07:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:08.761 "name": "raid_bdev1", 00:34:08.761 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:08.761 "strip_size_kb": 64, 00:34:08.761 "state": "online", 00:34:08.761 "raid_level": "raid5f", 00:34:08.761 "superblock": true, 00:34:08.761 "num_base_bdevs": 4, 00:34:08.761 "num_base_bdevs_discovered": 4, 00:34:08.761 "num_base_bdevs_operational": 4, 00:34:08.761 "process": { 00:34:08.761 "type": "rebuild", 00:34:08.761 "target": "spare", 00:34:08.761 "progress": { 00:34:08.761 "blocks": 23040, 00:34:08.761 "percent": 12 00:34:08.761 } 00:34:08.761 }, 00:34:08.761 "base_bdevs_list": [ 00:34:08.761 { 00:34:08.761 "name": "spare", 00:34:08.761 "uuid": "f005aa0f-068f-5bce-9b6a-5e37136cd365", 00:34:08.761 "is_configured": true, 00:34:08.761 "data_offset": 2048, 00:34:08.761 
"data_size": 63488 00:34:08.761 }, 00:34:08.761 { 00:34:08.761 "name": "BaseBdev2", 00:34:08.761 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:08.761 "is_configured": true, 00:34:08.761 "data_offset": 2048, 00:34:08.761 "data_size": 63488 00:34:08.761 }, 00:34:08.761 { 00:34:08.761 "name": "BaseBdev3", 00:34:08.761 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:08.761 "is_configured": true, 00:34:08.761 "data_offset": 2048, 00:34:08.761 "data_size": 63488 00:34:08.761 }, 00:34:08.761 { 00:34:08.761 "name": "BaseBdev4", 00:34:08.761 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:08.761 "is_configured": true, 00:34:08.761 "data_offset": 2048, 00:34:08.761 "data_size": 63488 00:34:08.762 } 00:34:08.762 ] 00:34:08.762 }' 00:34:08.762 07:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:08.762 07:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:08.762 07:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:09.021 07:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:09.021 07:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:34:09.286 [2024-07-12 07:44:42.919770] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:09.286 [2024-07-12 07:44:43.005465] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:09.286 [2024-07-12 07:44:43.005650] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:09.286 [2024-07-12 07:44:43.005697] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:09.286 [2024-07-12 07:44:43.005779] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:09.286 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:09.286 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:09.286 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:09.286 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:09.286 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:09.286 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:09.286 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:09.286 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:09.286 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:09.286 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:09.286 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:09.286 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:09.550 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:34:09.550 "name": "raid_bdev1", 00:34:09.550 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:09.550 "strip_size_kb": 64, 00:34:09.550 "state": "online", 00:34:09.550 "raid_level": "raid5f", 00:34:09.550 "superblock": true, 00:34:09.550 "num_base_bdevs": 4, 00:34:09.550 "num_base_bdevs_discovered": 3, 00:34:09.550 "num_base_bdevs_operational": 3, 00:34:09.550 "base_bdevs_list": [ 00:34:09.550 { 00:34:09.550 "name": null, 00:34:09.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:09.550 "is_configured": false, 00:34:09.550 "data_offset": 2048, 00:34:09.550 "data_size": 63488 00:34:09.550 }, 00:34:09.550 { 00:34:09.550 "name": "BaseBdev2", 00:34:09.550 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:09.550 "is_configured": true, 00:34:09.550 "data_offset": 2048, 00:34:09.550 "data_size": 63488 00:34:09.550 }, 00:34:09.550 { 00:34:09.550 "name": "BaseBdev3", 00:34:09.550 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:09.550 "is_configured": true, 00:34:09.550 "data_offset": 2048, 00:34:09.550 "data_size": 63488 00:34:09.550 }, 00:34:09.550 { 00:34:09.550 "name": "BaseBdev4", 00:34:09.550 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:09.550 "is_configured": true, 00:34:09.550 "data_offset": 2048, 00:34:09.550 "data_size": 63488 00:34:09.550 } 00:34:09.550 ] 00:34:09.550 }' 00:34:09.550 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:09.550 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:10.116 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:10.116 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:10.116 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:10.116 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:10.116 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:10.116 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:10.116 07:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:10.374 07:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:10.374 "name": "raid_bdev1", 00:34:10.374 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:10.374 "strip_size_kb": 64, 00:34:10.374 "state": "online", 00:34:10.374 "raid_level": "raid5f", 00:34:10.374 "superblock": true, 00:34:10.374 "num_base_bdevs": 4, 00:34:10.374 "num_base_bdevs_discovered": 3, 00:34:10.374 "num_base_bdevs_operational": 3, 00:34:10.374 "base_bdevs_list": [ 00:34:10.374 { 00:34:10.374 "name": null, 00:34:10.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:10.374 "is_configured": false, 00:34:10.374 "data_offset": 2048, 00:34:10.374 "data_size": 63488 00:34:10.374 }, 00:34:10.374 { 00:34:10.374 "name": "BaseBdev2", 00:34:10.374 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:10.374 "is_configured": true, 00:34:10.374 "data_offset": 2048, 00:34:10.374 "data_size": 63488 00:34:10.374 }, 00:34:10.374 { 00:34:10.374 "name": "BaseBdev3", 00:34:10.374 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:10.374 "is_configured": true, 00:34:10.374 "data_offset": 2048, 
00:34:10.374 "data_size": 63488 00:34:10.374 }, 00:34:10.374 { 00:34:10.374 "name": "BaseBdev4", 00:34:10.374 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:10.374 "is_configured": true, 00:34:10.374 "data_offset": 2048, 00:34:10.374 "data_size": 63488 00:34:10.374 } 00:34:10.374 ] 00:34:10.374 }' 00:34:10.374 07:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:10.374 07:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:10.374 07:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:10.374 07:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:10.374 07:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:34:10.633 [2024-07-12 07:44:44.319293] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:10.633 [2024-07-12 07:44:44.321846] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027240 00:34:10.633 [2024-07-12 07:44:44.324036] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:10.633 07:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:34:11.566 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:11.566 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:11.566 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:11.566 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:11.566 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:11.566 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:11.566 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:11.825 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:11.825 "name": "raid_bdev1", 00:34:11.825 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:11.825 "strip_size_kb": 64, 00:34:11.825 "state": "online", 00:34:11.825 "raid_level": "raid5f", 00:34:11.825 "superblock": true, 00:34:11.825 "num_base_bdevs": 4, 00:34:11.825 "num_base_bdevs_discovered": 4, 00:34:11.825 "num_base_bdevs_operational": 4, 00:34:11.825 "process": { 00:34:11.825 "type": "rebuild", 00:34:11.825 "target": "spare", 00:34:11.825 "progress": { 00:34:11.825 "blocks": 23040, 00:34:11.825 "percent": 12 00:34:11.825 } 00:34:11.825 }, 00:34:11.825 "base_bdevs_list": [ 00:34:11.825 { 00:34:11.825 "name": "spare", 00:34:11.825 "uuid": "f005aa0f-068f-5bce-9b6a-5e37136cd365", 00:34:11.825 "is_configured": true, 00:34:11.825 "data_offset": 2048, 00:34:11.825 "data_size": 63488 00:34:11.825 }, 00:34:11.825 { 00:34:11.825 "name": "BaseBdev2", 00:34:11.825 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:11.825 "is_configured": true, 00:34:11.825 "data_offset": 2048, 00:34:11.825 "data_size": 63488 00:34:11.825 }, 00:34:11.825 { 00:34:11.825 "name": "BaseBdev3", 00:34:11.825 "uuid": 
"3968aabc-c743-5e63-b749-f8819124bfac", 00:34:11.825 "is_configured": true, 00:34:11.825 "data_offset": 2048, 00:34:11.825 "data_size": 63488 00:34:11.825 }, 00:34:11.825 { 00:34:11.825 "name": "BaseBdev4", 00:34:11.825 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:11.825 "is_configured": true, 00:34:11.825 "data_offset": 2048, 00:34:11.825 "data_size": 63488 00:34:11.825 } 00:34:11.825 ] 00:34:11.825 }' 00:34:11.825 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:11.825 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:11.825 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:11.825 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:11.825 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:34:11.825 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:34:11.825 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:34:11.825 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:34:11.825 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:34:11.825 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=1196 00:34:11.825 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:11.825 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:11.825 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:11.825 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:11.825 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:11.825 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:11.825 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:11.825 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:12.083 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:12.083 "name": "raid_bdev1", 00:34:12.083 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:12.083 "strip_size_kb": 64, 00:34:12.083 "state": "online", 00:34:12.083 "raid_level": "raid5f", 00:34:12.083 "superblock": true, 00:34:12.083 "num_base_bdevs": 4, 00:34:12.083 "num_base_bdevs_discovered": 4, 00:34:12.083 "num_base_bdevs_operational": 4, 00:34:12.083 "process": { 00:34:12.083 "type": "rebuild", 00:34:12.083 "target": "spare", 00:34:12.083 "progress": { 00:34:12.083 "blocks": 28800, 00:34:12.083 "percent": 15 00:34:12.083 } 00:34:12.083 }, 00:34:12.083 "base_bdevs_list": [ 00:34:12.083 { 00:34:12.083 "name": "spare", 00:34:12.083 "uuid": "f005aa0f-068f-5bce-9b6a-5e37136cd365", 00:34:12.083 "is_configured": true, 00:34:12.083 "data_offset": 2048, 00:34:12.083 "data_size": 63488 00:34:12.083 }, 00:34:12.083 { 00:34:12.083 "name": "BaseBdev2", 00:34:12.083 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 
00:34:12.083 "is_configured": true, 00:34:12.083 "data_offset": 2048, 00:34:12.083 "data_size": 63488 00:34:12.083 }, 00:34:12.083 { 00:34:12.083 "name": "BaseBdev3", 00:34:12.083 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:12.083 "is_configured": true, 00:34:12.083 "data_offset": 2048, 00:34:12.083 "data_size": 63488 00:34:12.083 }, 00:34:12.083 { 00:34:12.083 "name": "BaseBdev4", 00:34:12.083 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:12.083 "is_configured": true, 00:34:12.083 "data_offset": 2048, 00:34:12.084 "data_size": 63488 00:34:12.084 } 00:34:12.084 ] 00:34:12.084 }' 00:34:12.084 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:12.342 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:12.342 07:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:12.342 07:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:12.342 07:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:13.279 07:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:13.279 07:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:13.279 07:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:13.279 07:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:13.279 07:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:13.279 07:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:13.279 07:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:13.279 07:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:13.538 07:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:13.538 "name": "raid_bdev1", 00:34:13.538 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:13.538 "strip_size_kb": 64, 00:34:13.538 "state": "online", 00:34:13.538 "raid_level": "raid5f", 00:34:13.538 "superblock": true, 00:34:13.538 "num_base_bdevs": 4, 00:34:13.538 "num_base_bdevs_discovered": 4, 00:34:13.538 "num_base_bdevs_operational": 4, 00:34:13.538 "process": { 00:34:13.538 "type": "rebuild", 00:34:13.538 "target": "spare", 00:34:13.538 "progress": { 00:34:13.538 "blocks": 55680, 00:34:13.538 "percent": 29 00:34:13.538 } 00:34:13.538 }, 00:34:13.538 "base_bdevs_list": [ 00:34:13.538 { 00:34:13.538 "name": "spare", 00:34:13.538 "uuid": "f005aa0f-068f-5bce-9b6a-5e37136cd365", 00:34:13.538 "is_configured": true, 00:34:13.538 "data_offset": 2048, 00:34:13.538 "data_size": 63488 00:34:13.538 }, 00:34:13.538 { 00:34:13.538 "name": "BaseBdev2", 00:34:13.538 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:13.538 "is_configured": true, 00:34:13.538 "data_offset": 2048, 00:34:13.538 "data_size": 63488 00:34:13.538 }, 00:34:13.538 { 00:34:13.538 "name": "BaseBdev3", 00:34:13.538 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:13.538 "is_configured": true, 00:34:13.538 "data_offset": 2048, 00:34:13.538 "data_size": 63488 00:34:13.538 }, 00:34:13.538 { 
00:34:13.538 "name": "BaseBdev4", 00:34:13.538 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:13.538 "is_configured": true, 00:34:13.538 "data_offset": 2048, 00:34:13.538 "data_size": 63488 00:34:13.538 } 00:34:13.538 ] 00:34:13.538 }' 00:34:13.538 07:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:13.538 07:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:13.538 07:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:13.538 07:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:13.538 07:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:14.915 07:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:14.915 07:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:14.915 07:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:14.915 07:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:14.915 07:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:14.915 07:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:14.915 07:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:14.915 07:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:14.915 07:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:14.915 "name": "raid_bdev1", 00:34:14.915 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:14.915 "strip_size_kb": 64, 00:34:14.915 "state": "online", 00:34:14.915 "raid_level": "raid5f", 00:34:14.915 "superblock": true, 00:34:14.915 "num_base_bdevs": 4, 00:34:14.915 "num_base_bdevs_discovered": 4, 00:34:14.915 "num_base_bdevs_operational": 4, 00:34:14.915 "process": { 00:34:14.915 "type": "rebuild", 00:34:14.915 "target": "spare", 00:34:14.915 "progress": { 00:34:14.915 "blocks": 78720, 00:34:14.915 "percent": 41 00:34:14.915 } 00:34:14.915 }, 00:34:14.915 "base_bdevs_list": [ 00:34:14.915 { 00:34:14.915 "name": "spare", 00:34:14.915 "uuid": "f005aa0f-068f-5bce-9b6a-5e37136cd365", 00:34:14.916 "is_configured": true, 00:34:14.916 "data_offset": 2048, 00:34:14.916 "data_size": 63488 00:34:14.916 }, 00:34:14.916 { 00:34:14.916 "name": "BaseBdev2", 00:34:14.916 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:14.916 "is_configured": true, 00:34:14.916 "data_offset": 2048, 00:34:14.916 "data_size": 63488 00:34:14.916 }, 00:34:14.916 { 00:34:14.916 "name": "BaseBdev3", 00:34:14.916 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:14.916 "is_configured": true, 00:34:14.916 "data_offset": 2048, 00:34:14.916 "data_size": 63488 00:34:14.916 }, 00:34:14.916 { 00:34:14.916 "name": "BaseBdev4", 00:34:14.916 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:14.916 "is_configured": true, 00:34:14.916 "data_offset": 2048, 00:34:14.916 "data_size": 63488 00:34:14.916 } 00:34:14.916 ] 00:34:14.916 }' 00:34:14.916 07:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
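The blocks above and below are successive iterations of a single polling loop: while bash's built-in SECONDS counter stays under the deadline set by 'local timeout=1196', the test re-reads raid_bdev1 through the bdev_raid_get_bdevs RPC, extracts the process fields with jq, and sleeps one second between samples (rebuild progress climbs 15% -> 29% -> 41% and onward across iterations). A condensed sketch of that loop, using the rpc.py path and socket shown in the log; the helper name poll_rebuild is mine:

#!/usr/bin/env bash
rpc() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"
}

poll_rebuild() {
    local timeout=$1 info ptype ptarget
    while (( SECONDS < timeout )); do   # SECONDS is bash's running timer
        info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        ptype=$(jq -r '.process.type // "none"' <<< "$info")
        ptarget=$(jq -r '.process.target // "none"' <<< "$info")
        # Keep sampling only while a rebuild is still running against the spare.
        [[ $ptype == rebuild && $ptarget == spare ]] || break
        sleep 1
    done
}

poll_rebuild 1196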
00:34:14.916 07:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:14.916 07:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:14.916 07:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:14.916 07:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:15.852 07:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:15.852 07:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:15.852 07:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:15.852 07:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:15.852 07:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:15.852 07:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:15.852 07:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:15.852 07:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:16.130 07:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:16.130 "name": "raid_bdev1", 00:34:16.130 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:16.130 "strip_size_kb": 64, 00:34:16.130 "state": "online", 00:34:16.130 "raid_level": "raid5f", 00:34:16.130 "superblock": true, 00:34:16.130 "num_base_bdevs": 4, 00:34:16.130 "num_base_bdevs_discovered": 4, 00:34:16.130 "num_base_bdevs_operational": 4, 00:34:16.131 "process": { 00:34:16.131 "type": "rebuild", 00:34:16.131 "target": "spare", 00:34:16.131 "progress": { 00:34:16.131 "blocks": 105600, 00:34:16.131 "percent": 55 00:34:16.131 } 00:34:16.131 }, 00:34:16.131 "base_bdevs_list": [ 00:34:16.131 { 00:34:16.131 "name": "spare", 00:34:16.131 "uuid": "f005aa0f-068f-5bce-9b6a-5e37136cd365", 00:34:16.131 "is_configured": true, 00:34:16.131 "data_offset": 2048, 00:34:16.131 "data_size": 63488 00:34:16.131 }, 00:34:16.131 { 00:34:16.131 "name": "BaseBdev2", 00:34:16.131 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:16.131 "is_configured": true, 00:34:16.131 "data_offset": 2048, 00:34:16.131 "data_size": 63488 00:34:16.131 }, 00:34:16.131 { 00:34:16.131 "name": "BaseBdev3", 00:34:16.131 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:16.131 "is_configured": true, 00:34:16.131 "data_offset": 2048, 00:34:16.131 "data_size": 63488 00:34:16.131 }, 00:34:16.131 { 00:34:16.131 "name": "BaseBdev4", 00:34:16.131 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:16.131 "is_configured": true, 00:34:16.131 "data_offset": 2048, 00:34:16.131 "data_size": 63488 00:34:16.131 } 00:34:16.131 ] 00:34:16.131 }' 00:34:16.131 07:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:16.131 07:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:16.131 07:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:16.131 07:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:16.131 
07:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:17.505 07:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:17.505 07:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:17.505 07:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:17.505 07:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:17.505 07:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:17.505 07:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:17.505 07:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:17.505 07:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:17.505 07:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:17.505 "name": "raid_bdev1", 00:34:17.505 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:17.505 "strip_size_kb": 64, 00:34:17.505 "state": "online", 00:34:17.505 "raid_level": "raid5f", 00:34:17.505 "superblock": true, 00:34:17.505 "num_base_bdevs": 4, 00:34:17.505 "num_base_bdevs_discovered": 4, 00:34:17.505 "num_base_bdevs_operational": 4, 00:34:17.505 "process": { 00:34:17.505 "type": "rebuild", 00:34:17.505 "target": "spare", 00:34:17.505 "progress": { 00:34:17.505 "blocks": 130560, 00:34:17.505 "percent": 68 00:34:17.505 } 00:34:17.505 }, 00:34:17.505 "base_bdevs_list": [ 00:34:17.505 { 00:34:17.505 "name": "spare", 00:34:17.505 "uuid": "f005aa0f-068f-5bce-9b6a-5e37136cd365", 00:34:17.505 "is_configured": true, 00:34:17.505 "data_offset": 2048, 00:34:17.505 "data_size": 63488 00:34:17.505 }, 00:34:17.505 { 00:34:17.505 "name": "BaseBdev2", 00:34:17.505 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:17.505 "is_configured": true, 00:34:17.505 "data_offset": 2048, 00:34:17.505 "data_size": 63488 00:34:17.505 }, 00:34:17.505 { 00:34:17.505 "name": "BaseBdev3", 00:34:17.505 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:17.505 "is_configured": true, 00:34:17.505 "data_offset": 2048, 00:34:17.505 "data_size": 63488 00:34:17.506 }, 00:34:17.506 { 00:34:17.506 "name": "BaseBdev4", 00:34:17.506 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:17.506 "is_configured": true, 00:34:17.506 "data_offset": 2048, 00:34:17.506 "data_size": 63488 00:34:17.506 } 00:34:17.506 ] 00:34:17.506 }' 00:34:17.506 07:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:17.506 07:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:17.506 07:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:17.506 07:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:17.506 07:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:18.514 07:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:18.514 07:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:18.514 07:44:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:18.514 07:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:18.514 07:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:18.514 07:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:18.514 07:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:18.514 07:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:18.793 07:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:18.793 "name": "raid_bdev1", 00:34:18.793 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:18.793 "strip_size_kb": 64, 00:34:18.793 "state": "online", 00:34:18.793 "raid_level": "raid5f", 00:34:18.793 "superblock": true, 00:34:18.793 "num_base_bdevs": 4, 00:34:18.793 "num_base_bdevs_discovered": 4, 00:34:18.793 "num_base_bdevs_operational": 4, 00:34:18.793 "process": { 00:34:18.793 "type": "rebuild", 00:34:18.793 "target": "spare", 00:34:18.793 "progress": { 00:34:18.793 "blocks": 155520, 00:34:18.793 "percent": 81 00:34:18.793 } 00:34:18.793 }, 00:34:18.793 "base_bdevs_list": [ 00:34:18.793 { 00:34:18.793 "name": "spare", 00:34:18.793 "uuid": "f005aa0f-068f-5bce-9b6a-5e37136cd365", 00:34:18.793 "is_configured": true, 00:34:18.793 "data_offset": 2048, 00:34:18.793 "data_size": 63488 00:34:18.793 }, 00:34:18.793 { 00:34:18.793 "name": "BaseBdev2", 00:34:18.793 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:18.793 "is_configured": true, 00:34:18.793 "data_offset": 2048, 00:34:18.793 "data_size": 63488 00:34:18.793 }, 00:34:18.793 { 00:34:18.793 "name": "BaseBdev3", 00:34:18.793 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:18.793 "is_configured": true, 00:34:18.793 "data_offset": 2048, 00:34:18.793 "data_size": 63488 00:34:18.793 }, 00:34:18.793 { 00:34:18.793 "name": "BaseBdev4", 00:34:18.793 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:18.793 "is_configured": true, 00:34:18.793 "data_offset": 2048, 00:34:18.793 "data_size": 63488 00:34:18.793 } 00:34:18.793 ] 00:34:18.793 }' 00:34:18.793 07:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:18.793 07:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:18.793 07:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:18.793 07:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:18.793 07:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:20.172 07:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:20.172 07:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:20.172 07:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:20.172 07:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:20.172 07:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:20.172 07:44:53 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:20.172 07:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:20.172 07:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:20.172 07:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:20.172 "name": "raid_bdev1", 00:34:20.172 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:20.172 "strip_size_kb": 64, 00:34:20.172 "state": "online", 00:34:20.172 "raid_level": "raid5f", 00:34:20.172 "superblock": true, 00:34:20.172 "num_base_bdevs": 4, 00:34:20.172 "num_base_bdevs_discovered": 4, 00:34:20.172 "num_base_bdevs_operational": 4, 00:34:20.172 "process": { 00:34:20.172 "type": "rebuild", 00:34:20.172 "target": "spare", 00:34:20.172 "progress": { 00:34:20.172 "blocks": 180480, 00:34:20.172 "percent": 94 00:34:20.172 } 00:34:20.172 }, 00:34:20.172 "base_bdevs_list": [ 00:34:20.172 { 00:34:20.172 "name": "spare", 00:34:20.172 "uuid": "f005aa0f-068f-5bce-9b6a-5e37136cd365", 00:34:20.172 "is_configured": true, 00:34:20.172 "data_offset": 2048, 00:34:20.172 "data_size": 63488 00:34:20.172 }, 00:34:20.172 { 00:34:20.172 "name": "BaseBdev2", 00:34:20.172 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:20.172 "is_configured": true, 00:34:20.172 "data_offset": 2048, 00:34:20.172 "data_size": 63488 00:34:20.172 }, 00:34:20.172 { 00:34:20.172 "name": "BaseBdev3", 00:34:20.172 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:20.172 "is_configured": true, 00:34:20.172 "data_offset": 2048, 00:34:20.172 "data_size": 63488 00:34:20.172 }, 00:34:20.172 { 00:34:20.172 "name": "BaseBdev4", 00:34:20.172 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:20.172 "is_configured": true, 00:34:20.172 "data_offset": 2048, 00:34:20.172 "data_size": 63488 00:34:20.172 } 00:34:20.172 ] 00:34:20.172 }' 00:34:20.172 07:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:20.172 07:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:20.172 07:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:20.172 07:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:20.172 07:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:20.740 [2024-07-12 07:44:54.382893] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:34:20.740 [2024-07-12 07:44:54.383123] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:34:20.740 [2024-07-12 07:44:54.383364] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:21.308 07:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:21.308 07:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:21.308 07:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:21.308 07:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:21.308 07:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:21.308 07:44:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:21.308 07:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:21.308 07:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:21.308 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:21.308 "name": "raid_bdev1", 00:34:21.308 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:21.308 "strip_size_kb": 64, 00:34:21.308 "state": "online", 00:34:21.308 "raid_level": "raid5f", 00:34:21.308 "superblock": true, 00:34:21.308 "num_base_bdevs": 4, 00:34:21.308 "num_base_bdevs_discovered": 4, 00:34:21.308 "num_base_bdevs_operational": 4, 00:34:21.308 "base_bdevs_list": [ 00:34:21.308 { 00:34:21.308 "name": "spare", 00:34:21.308 "uuid": "f005aa0f-068f-5bce-9b6a-5e37136cd365", 00:34:21.308 "is_configured": true, 00:34:21.308 "data_offset": 2048, 00:34:21.308 "data_size": 63488 00:34:21.308 }, 00:34:21.308 { 00:34:21.308 "name": "BaseBdev2", 00:34:21.308 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:21.308 "is_configured": true, 00:34:21.308 "data_offset": 2048, 00:34:21.308 "data_size": 63488 00:34:21.308 }, 00:34:21.308 { 00:34:21.308 "name": "BaseBdev3", 00:34:21.308 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:21.308 "is_configured": true, 00:34:21.308 "data_offset": 2048, 00:34:21.308 "data_size": 63488 00:34:21.308 }, 00:34:21.308 { 00:34:21.308 "name": "BaseBdev4", 00:34:21.308 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:21.308 "is_configured": true, 00:34:21.308 "data_offset": 2048, 00:34:21.308 "data_size": 63488 00:34:21.308 } 00:34:21.308 ] 00:34:21.308 }' 00:34:21.308 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:21.567 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:34:21.567 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:21.567 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:34:21.567 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:34:21.567 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:21.567 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:21.567 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:21.567 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:21.567 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:21.567 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:21.567 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:21.826 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:21.826 "name": "raid_bdev1", 00:34:21.826 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:21.826 "strip_size_kb": 64, 00:34:21.826 "state": "online", 00:34:21.826 "raid_level": "raid5f", 
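The '// "none"' in the jq filters is what ended the loop above: once the rebuild completed, bdev_raid_get_bdevs stopped reporting a .process object, jq's alternative operator substituted "none", the rebuild/spare comparisons failed, and the break at bdev_raid.sh@708 fired. A two-line demonstration of the operator, runnable against any jq:

echo '{"process":{"type":"rebuild"}}' | jq -r '.process.type // "none"'   # prints: rebuild
echo '{"name":"raid_bdev1"}'          | jq -r '.process.type // "none"'   # prints: none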
00:34:21.826 "superblock": true, 00:34:21.826 "num_base_bdevs": 4, 00:34:21.826 "num_base_bdevs_discovered": 4, 00:34:21.826 "num_base_bdevs_operational": 4, 00:34:21.826 "base_bdevs_list": [ 00:34:21.826 { 00:34:21.826 "name": "spare", 00:34:21.826 "uuid": "f005aa0f-068f-5bce-9b6a-5e37136cd365", 00:34:21.826 "is_configured": true, 00:34:21.826 "data_offset": 2048, 00:34:21.826 "data_size": 63488 00:34:21.826 }, 00:34:21.826 { 00:34:21.826 "name": "BaseBdev2", 00:34:21.826 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:21.826 "is_configured": true, 00:34:21.826 "data_offset": 2048, 00:34:21.826 "data_size": 63488 00:34:21.826 }, 00:34:21.826 { 00:34:21.826 "name": "BaseBdev3", 00:34:21.826 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:21.826 "is_configured": true, 00:34:21.826 "data_offset": 2048, 00:34:21.826 "data_size": 63488 00:34:21.826 }, 00:34:21.826 { 00:34:21.826 "name": "BaseBdev4", 00:34:21.826 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:21.826 "is_configured": true, 00:34:21.826 "data_offset": 2048, 00:34:21.826 "data_size": 63488 00:34:21.826 } 00:34:21.826 ] 00:34:21.826 }' 00:34:21.826 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:21.826 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:21.826 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:21.826 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:21.826 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:34:21.826 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:21.826 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:21.826 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:21.826 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:21.826 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:21.826 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:21.826 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:21.826 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:21.826 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:21.826 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:21.826 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:22.084 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:22.084 "name": "raid_bdev1", 00:34:22.084 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:22.084 "strip_size_kb": 64, 00:34:22.084 "state": "online", 00:34:22.084 "raid_level": "raid5f", 00:34:22.084 "superblock": true, 00:34:22.084 "num_base_bdevs": 4, 00:34:22.084 "num_base_bdevs_discovered": 4, 00:34:22.084 "num_base_bdevs_operational": 4, 00:34:22.084 "base_bdevs_list": [ 00:34:22.084 { 00:34:22.084 "name": 
"spare", 00:34:22.084 "uuid": "f005aa0f-068f-5bce-9b6a-5e37136cd365", 00:34:22.084 "is_configured": true, 00:34:22.084 "data_offset": 2048, 00:34:22.084 "data_size": 63488 00:34:22.084 }, 00:34:22.084 { 00:34:22.084 "name": "BaseBdev2", 00:34:22.084 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:22.084 "is_configured": true, 00:34:22.084 "data_offset": 2048, 00:34:22.084 "data_size": 63488 00:34:22.084 }, 00:34:22.084 { 00:34:22.084 "name": "BaseBdev3", 00:34:22.084 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:22.084 "is_configured": true, 00:34:22.084 "data_offset": 2048, 00:34:22.084 "data_size": 63488 00:34:22.084 }, 00:34:22.084 { 00:34:22.084 "name": "BaseBdev4", 00:34:22.084 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:22.084 "is_configured": true, 00:34:22.084 "data_offset": 2048, 00:34:22.084 "data_size": 63488 00:34:22.084 } 00:34:22.084 ] 00:34:22.084 }' 00:34:22.084 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:22.084 07:44:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:22.649 07:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:22.906 [2024-07-12 07:44:56.564543] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:22.906 [2024-07-12 07:44:56.564713] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:22.906 [2024-07-12 07:44:56.564906] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:22.906 [2024-07-12 07:44:56.565080] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:22.906 [2024-07-12 07:44:56.565162] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:34:22.906 07:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:34:22.906 07:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:23.165 07:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:34:23.165 07:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:34:23.165 07:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:34:23.165 07:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:34:23.165 07:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:23.165 07:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:34:23.165 07:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:23.165 07:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:23.165 07:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:23.165 07:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:34:23.165 07:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:23.165 07:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 
)) 00:34:23.165 07:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:34:23.423 /dev/nbd0 00:34:23.423 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:23.423 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:23.423 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:34:23.423 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:34:23.423 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:34:23.423 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:34:23.423 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:34:23.423 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:34:23.423 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:34:23.423 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:34:23.423 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:23.423 1+0 records in 00:34:23.423 1+0 records out 00:34:23.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317896 s, 12.9 MB/s 00:34:23.423 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:23.423 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:34:23.423 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:23.423 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:34:23.423 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:34:23.423 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:23.423 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:23.423 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:34:23.423 /dev/nbd1 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@880 -- # (( i = 1 )) 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:23.682 1+0 records in 00:34:23.682 1+0 records out 00:34:23.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464758 s, 8.8 MB/s 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:23.682 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:34:23.939 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:23.939 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:23.939 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:23.939 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:23.939 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:23.939 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:23.939 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:34:23.939 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:34:23.939 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:23.939 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:34:24.198 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:24.198 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:24.198 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:24.198 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:24.198 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:24.198 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:24.198 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:34:24.198 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:34:24.198 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:34:24.198 07:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:24.457 07:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:24.716 [2024-07-12 07:44:58.393859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:24.716 [2024-07-12 07:44:58.394100] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:24.716 [2024-07-12 07:44:58.394163] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:34:24.716 [2024-07-12 07:44:58.394257] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:24.716 [2024-07-12 07:44:58.396581] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:24.716 [2024-07-12 07:44:58.396754] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:24.716 [2024-07-12 07:44:58.396936] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:24.716 [2024-07-12 07:44:58.397129] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:24.716 [2024-07-12 07:44:58.397377] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:24.716 [2024-07-12 07:44:58.397573] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:24.716 [2024-07-12 07:44:58.397722] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:24.716 spare 00:34:24.716 07:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:34:24.716 07:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:24.716 07:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:24.716 07:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:24.716 07:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:24.716 07:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:24.716 07:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:24.716 07:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:24.716 07:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
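The nbd sequence that finishes just above is the data-integrity check for the rebuild: BaseBdev1 and the rebuilt spare are exported as /dev/nbd0 and /dev/nbd1, each device is probed with a single-block dd once it appears in /proc/partitions, and cmp compares the two devices starting at byte 1048576, skipping the first 1 MiB (which matches the data_offset of 2048 512-byte blocks reported for every base bdev) so that only the data region is compared. A condensed sketch under those assumptions:

#!/usr/bin/env bash
rpc() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"
}

rpc nbd_start_disk BaseBdev1 /dev/nbd0
rpc nbd_start_disk spare /dev/nbd1

# waitfornbd: block until the kernel lists each device in /proc/partitions.
for nbd in nbd0 nbd1; do
    until grep -q -w "$nbd" /proc/partitions; do sleep 0.1; done
done

# Skip the 1 MiB superblock region on both devices (cmp -i SKIP applies
# the same offset to both files), then compare the data byte for byte.
cmp -i 1048576 /dev/nbd0 /dev/nbd1 && echo "rebuilt spare matches BaseBdev1"

rpc nbd_stop_disk /dev/nbd0
rpc nbd_stop_disk /dev/nbd1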
00:34:24.716 07:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:24.716 07:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:24.716 07:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:24.716 [2024-07-12 07:44:58.497837] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:34:24.716 [2024-07-12 07:44:58.497944] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:24.716 [2024-07-12 07:44:58.498069] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045b60 00:34:24.716 [2024-07-12 07:44:58.498825] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:34:24.716 [2024-07-12 07:44:58.498927] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:34:24.716 [2024-07-12 07:44:58.499145] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:24.716 07:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:24.716 "name": "raid_bdev1", 00:34:24.716 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:24.716 "strip_size_kb": 64, 00:34:24.716 "state": "online", 00:34:24.716 "raid_level": "raid5f", 00:34:24.716 "superblock": true, 00:34:24.716 "num_base_bdevs": 4, 00:34:24.716 "num_base_bdevs_discovered": 4, 00:34:24.716 "num_base_bdevs_operational": 4, 00:34:24.716 "base_bdevs_list": [ 00:34:24.716 { 00:34:24.717 "name": "spare", 00:34:24.717 "uuid": "f005aa0f-068f-5bce-9b6a-5e37136cd365", 00:34:24.717 "is_configured": true, 00:34:24.717 "data_offset": 2048, 00:34:24.717 "data_size": 63488 00:34:24.717 }, 00:34:24.717 { 00:34:24.717 "name": "BaseBdev2", 00:34:24.717 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:24.717 "is_configured": true, 00:34:24.717 "data_offset": 2048, 00:34:24.717 "data_size": 63488 00:34:24.717 }, 00:34:24.717 { 00:34:24.717 "name": "BaseBdev3", 00:34:24.717 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:24.717 "is_configured": true, 00:34:24.717 "data_offset": 2048, 00:34:24.717 "data_size": 63488 00:34:24.717 }, 00:34:24.717 { 00:34:24.717 "name": "BaseBdev4", 00:34:24.717 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:24.717 "is_configured": true, 00:34:24.717 "data_offset": 2048, 00:34:24.717 "data_size": 63488 00:34:24.717 } 00:34:24.717 ] 00:34:24.717 }' 00:34:24.717 07:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:24.717 07:44:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:25.284 07:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:25.284 07:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:25.284 07:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:25.284 07:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:25.284 07:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:25.284 07:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:34:25.284 07:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:25.855 07:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:25.855 "name": "raid_bdev1", 00:34:25.855 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:25.855 "strip_size_kb": 64, 00:34:25.855 "state": "online", 00:34:25.855 "raid_level": "raid5f", 00:34:25.855 "superblock": true, 00:34:25.855 "num_base_bdevs": 4, 00:34:25.855 "num_base_bdevs_discovered": 4, 00:34:25.855 "num_base_bdevs_operational": 4, 00:34:25.855 "base_bdevs_list": [ 00:34:25.855 { 00:34:25.855 "name": "spare", 00:34:25.855 "uuid": "f005aa0f-068f-5bce-9b6a-5e37136cd365", 00:34:25.855 "is_configured": true, 00:34:25.855 "data_offset": 2048, 00:34:25.855 "data_size": 63488 00:34:25.855 }, 00:34:25.855 { 00:34:25.855 "name": "BaseBdev2", 00:34:25.855 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:25.855 "is_configured": true, 00:34:25.855 "data_offset": 2048, 00:34:25.855 "data_size": 63488 00:34:25.855 }, 00:34:25.855 { 00:34:25.855 "name": "BaseBdev3", 00:34:25.855 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:25.855 "is_configured": true, 00:34:25.855 "data_offset": 2048, 00:34:25.855 "data_size": 63488 00:34:25.855 }, 00:34:25.855 { 00:34:25.855 "name": "BaseBdev4", 00:34:25.856 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:25.856 "is_configured": true, 00:34:25.856 "data_offset": 2048, 00:34:25.856 "data_size": 63488 00:34:25.856 } 00:34:25.856 ] 00:34:25.856 }' 00:34:25.856 07:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:25.856 07:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:25.856 07:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:25.856 07:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:25.856 07:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:25.856 07:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:34:26.115 07:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:34:26.115 07:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:34:26.375 [2024-07-12 07:45:00.007407] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:26.375 07:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:26.375 07:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:26.375 07:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:26.375 07:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:26.375 07:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:26.375 07:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:26.375 07:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:26.375 07:45:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:26.375 07:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:26.375 07:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:26.375 07:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:26.375 07:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:26.375 07:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:26.375 "name": "raid_bdev1", 00:34:26.375 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:26.375 "strip_size_kb": 64, 00:34:26.375 "state": "online", 00:34:26.375 "raid_level": "raid5f", 00:34:26.375 "superblock": true, 00:34:26.375 "num_base_bdevs": 4, 00:34:26.375 "num_base_bdevs_discovered": 3, 00:34:26.375 "num_base_bdevs_operational": 3, 00:34:26.375 "base_bdevs_list": [ 00:34:26.375 { 00:34:26.375 "name": null, 00:34:26.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:26.376 "is_configured": false, 00:34:26.376 "data_offset": 2048, 00:34:26.376 "data_size": 63488 00:34:26.376 }, 00:34:26.376 { 00:34:26.376 "name": "BaseBdev2", 00:34:26.376 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:26.376 "is_configured": true, 00:34:26.376 "data_offset": 2048, 00:34:26.376 "data_size": 63488 00:34:26.376 }, 00:34:26.376 { 00:34:26.376 "name": "BaseBdev3", 00:34:26.376 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:26.376 "is_configured": true, 00:34:26.376 "data_offset": 2048, 00:34:26.376 "data_size": 63488 00:34:26.376 }, 00:34:26.376 { 00:34:26.376 "name": "BaseBdev4", 00:34:26.376 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:26.376 "is_configured": true, 00:34:26.376 "data_offset": 2048, 00:34:26.376 "data_size": 63488 00:34:26.376 } 00:34:26.376 ] 00:34:26.376 }' 00:34:26.376 07:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:26.376 07:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:26.944 07:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:34:27.204 [2024-07-12 07:45:00.986993] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:27.204 [2024-07-12 07:45:00.987240] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:27.204 [2024-07-12 07:45:00.987339] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:34:27.204 [2024-07-12 07:45:00.987437] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:27.204 [2024-07-12 07:45:00.990741] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045d00 00:34:27.204 [2024-07-12 07:45:00.992928] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:27.204 07:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:34:28.142 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:28.142 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:28.143 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:28.143 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:28.143 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:28.143 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:28.143 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:28.402 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:28.402 "name": "raid_bdev1", 00:34:28.402 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:28.402 "strip_size_kb": 64, 00:34:28.402 "state": "online", 00:34:28.402 "raid_level": "raid5f", 00:34:28.402 "superblock": true, 00:34:28.402 "num_base_bdevs": 4, 00:34:28.402 "num_base_bdevs_discovered": 4, 00:34:28.402 "num_base_bdevs_operational": 4, 00:34:28.402 "process": { 00:34:28.402 "type": "rebuild", 00:34:28.402 "target": "spare", 00:34:28.402 "progress": { 00:34:28.402 "blocks": 23040, 00:34:28.402 "percent": 12 00:34:28.402 } 00:34:28.402 }, 00:34:28.402 "base_bdevs_list": [ 00:34:28.402 { 00:34:28.402 "name": "spare", 00:34:28.402 "uuid": "f005aa0f-068f-5bce-9b6a-5e37136cd365", 00:34:28.402 "is_configured": true, 00:34:28.402 "data_offset": 2048, 00:34:28.402 "data_size": 63488 00:34:28.402 }, 00:34:28.402 { 00:34:28.402 "name": "BaseBdev2", 00:34:28.402 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:28.402 "is_configured": true, 00:34:28.402 "data_offset": 2048, 00:34:28.402 "data_size": 63488 00:34:28.402 }, 00:34:28.402 { 00:34:28.402 "name": "BaseBdev3", 00:34:28.402 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:28.402 "is_configured": true, 00:34:28.402 "data_offset": 2048, 00:34:28.402 "data_size": 63488 00:34:28.402 }, 00:34:28.402 { 00:34:28.402 "name": "BaseBdev4", 00:34:28.402 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:28.402 "is_configured": true, 00:34:28.402 "data_offset": 2048, 00:34:28.402 "data_size": 63488 00:34:28.402 } 00:34:28.402 ] 00:34:28.402 }' 00:34:28.402 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:28.661 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:28.661 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:28.661 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:28.661 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:28.918 [2024-07-12 07:45:02.634131] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:28.918 [2024-07-12 07:45:02.703265] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:28.918 [2024-07-12 07:45:02.703451] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:28.918 [2024-07-12 07:45:02.703498] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:28.918 [2024-07-12 07:45:02.703580] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:28.918 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:28.918 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:28.918 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:28.918 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:28.918 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:28.918 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:28.918 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:28.918 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:28.918 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:28.918 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:28.918 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:28.918 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:29.178 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:29.178 "name": "raid_bdev1", 00:34:29.178 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:29.178 "strip_size_kb": 64, 00:34:29.178 "state": "online", 00:34:29.178 "raid_level": "raid5f", 00:34:29.178 "superblock": true, 00:34:29.178 "num_base_bdevs": 4, 00:34:29.178 "num_base_bdevs_discovered": 3, 00:34:29.178 "num_base_bdevs_operational": 3, 00:34:29.178 "base_bdevs_list": [ 00:34:29.178 { 00:34:29.178 "name": null, 00:34:29.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:29.178 "is_configured": false, 00:34:29.178 "data_offset": 2048, 00:34:29.178 "data_size": 63488 00:34:29.178 }, 00:34:29.178 { 00:34:29.178 "name": "BaseBdev2", 00:34:29.178 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:29.178 "is_configured": true, 00:34:29.178 "data_offset": 2048, 00:34:29.178 "data_size": 63488 00:34:29.178 }, 00:34:29.178 { 00:34:29.178 "name": "BaseBdev3", 00:34:29.178 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:29.178 "is_configured": true, 00:34:29.178 "data_offset": 2048, 00:34:29.178 "data_size": 63488 00:34:29.178 }, 00:34:29.178 { 00:34:29.178 "name": "BaseBdev4", 00:34:29.178 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:29.178 "is_configured": true, 00:34:29.178 "data_offset": 2048, 00:34:29.178 "data_size": 63488 
00:34:29.178 } 00:34:29.178 ] 00:34:29.178 }' 00:34:29.178 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:29.178 07:45:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.745 07:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:30.004 [2024-07-12 07:45:03.736198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:30.004 [2024-07-12 07:45:03.736383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:30.004 [2024-07-12 07:45:03.736457] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:34:30.004 [2024-07-12 07:45:03.736551] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:30.004 [2024-07-12 07:45:03.736966] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:30.004 [2024-07-12 07:45:03.737097] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:30.004 [2024-07-12 07:45:03.737290] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:30.004 [2024-07-12 07:45:03.737389] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:30.004 [2024-07-12 07:45:03.737491] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:34:30.004 [2024-07-12 07:45:03.737566] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:30.004 [2024-07-12 07:45:03.739460] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000046040 00:34:30.004 [2024-07-12 07:45:03.741790] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:30.004 spare 00:34:30.004 07:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:34:30.940 07:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:30.940 07:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:30.940 07:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:30.940 07:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:30.940 07:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:30.940 07:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:30.940 07:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:31.200 07:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:31.200 "name": "raid_bdev1", 00:34:31.200 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:31.200 "strip_size_kb": 64, 00:34:31.200 "state": "online", 00:34:31.200 "raid_level": "raid5f", 00:34:31.200 "superblock": true, 00:34:31.200 "num_base_bdevs": 4, 00:34:31.200 "num_base_bdevs_discovered": 4, 00:34:31.200 "num_base_bdevs_operational": 4, 00:34:31.200 "process": { 00:34:31.200 "type": "rebuild", 00:34:31.200 "target": "spare", 
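The last several entries exercise the hot-remove and re-add cycle. bdev_raid_remove_base_bdev takes the spare out while the array stays online (num_base_bdevs_discovered and num_base_bdevs_operational drop to 3, and base_bdevs_list[0] becomes a null placeholder), and the log shows two ways of bringing it back: an explicit bdev_raid_add_base_bdev call (bdev_raid.sh@754), and recreating the passthru bdev (bdev_raid.sh@761) so that bdev examine rediscovers the raid superblock, sees that its seq_number (4) is older than the array's (5), and re-adds the spare on its own, starting a fresh rebuild each time. A condensed sketch of both paths, with the RPC names taken from the log:

#!/usr/bin/env bash
rpc() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"
}

# Hot-remove: raid_bdev1 keeps serving I/O in degraded mode.
rpc bdev_raid_remove_base_bdev spare

readd_explicit() {      # path used at bdev_raid.sh@754
    rpc bdev_raid_add_base_bdev raid_bdev1 spare
}

readd_via_examine() {   # path used at bdev_raid.sh@761
    rpc bdev_passthru_delete spare
    # Recreating the passthru triggers bdev examine, which finds the raid
    # superblock on the spare and re-adds it to raid_bdev1 automatically.
    rpc bdev_passthru_create -b spare_delay -p spare
}

readd_explicit
# The first slot should be occupied by the spare again:
rpc bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].name'   # expect: spare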
00:34:31.200 "progress": { 00:34:31.200 "blocks": 23040, 00:34:31.201 "percent": 12 00:34:31.201 } 00:34:31.201 }, 00:34:31.201 "base_bdevs_list": [ 00:34:31.201 { 00:34:31.201 "name": "spare", 00:34:31.201 "uuid": "f005aa0f-068f-5bce-9b6a-5e37136cd365", 00:34:31.201 "is_configured": true, 00:34:31.201 "data_offset": 2048, 00:34:31.201 "data_size": 63488 00:34:31.201 }, 00:34:31.201 { 00:34:31.201 "name": "BaseBdev2", 00:34:31.201 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:31.201 "is_configured": true, 00:34:31.201 "data_offset": 2048, 00:34:31.201 "data_size": 63488 00:34:31.201 }, 00:34:31.201 { 00:34:31.201 "name": "BaseBdev3", 00:34:31.201 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:31.201 "is_configured": true, 00:34:31.201 "data_offset": 2048, 00:34:31.201 "data_size": 63488 00:34:31.201 }, 00:34:31.201 { 00:34:31.201 "name": "BaseBdev4", 00:34:31.201 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:31.201 "is_configured": true, 00:34:31.201 "data_offset": 2048, 00:34:31.201 "data_size": 63488 00:34:31.201 } 00:34:31.201 ] 00:34:31.201 }' 00:34:31.201 07:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:31.201 07:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:31.201 07:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:31.201 07:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:31.201 07:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:31.460 [2024-07-12 07:45:05.311239] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:31.718 [2024-07-12 07:45:05.351328] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:31.718 [2024-07-12 07:45:05.351499] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:31.718 [2024-07-12 07:45:05.351545] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:31.718 [2024-07-12 07:45:05.351623] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:31.718 07:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:31.718 07:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:31.718 07:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:31.718 07:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:31.718 07:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:31.718 07:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:31.718 07:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:31.718 07:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:31.718 07:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:31.718 07:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:31.718 07:45:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:31.718 07:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:31.977 07:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:31.977 "name": "raid_bdev1", 00:34:31.978 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:31.978 "strip_size_kb": 64, 00:34:31.978 "state": "online", 00:34:31.978 "raid_level": "raid5f", 00:34:31.978 "superblock": true, 00:34:31.978 "num_base_bdevs": 4, 00:34:31.978 "num_base_bdevs_discovered": 3, 00:34:31.978 "num_base_bdevs_operational": 3, 00:34:31.978 "base_bdevs_list": [ 00:34:31.978 { 00:34:31.978 "name": null, 00:34:31.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:31.978 "is_configured": false, 00:34:31.978 "data_offset": 2048, 00:34:31.978 "data_size": 63488 00:34:31.978 }, 00:34:31.978 { 00:34:31.978 "name": "BaseBdev2", 00:34:31.978 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:31.978 "is_configured": true, 00:34:31.978 "data_offset": 2048, 00:34:31.978 "data_size": 63488 00:34:31.978 }, 00:34:31.978 { 00:34:31.978 "name": "BaseBdev3", 00:34:31.978 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:31.978 "is_configured": true, 00:34:31.978 "data_offset": 2048, 00:34:31.978 "data_size": 63488 00:34:31.978 }, 00:34:31.978 { 00:34:31.978 "name": "BaseBdev4", 00:34:31.978 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:31.978 "is_configured": true, 00:34:31.978 "data_offset": 2048, 00:34:31.978 "data_size": 63488 00:34:31.978 } 00:34:31.978 ] 00:34:31.978 }' 00:34:31.978 07:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:31.978 07:45:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:32.546 07:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:32.546 07:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:32.546 07:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:32.546 07:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:32.546 07:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:32.546 07:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:32.546 07:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:32.805 07:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:32.805 "name": "raid_bdev1", 00:34:32.805 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:32.805 "strip_size_kb": 64, 00:34:32.805 "state": "online", 00:34:32.805 "raid_level": "raid5f", 00:34:32.805 "superblock": true, 00:34:32.805 "num_base_bdevs": 4, 00:34:32.805 "num_base_bdevs_discovered": 3, 00:34:32.805 "num_base_bdevs_operational": 3, 00:34:32.805 "base_bdevs_list": [ 00:34:32.805 { 00:34:32.805 "name": null, 00:34:32.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:32.805 "is_configured": false, 00:34:32.805 "data_offset": 2048, 00:34:32.805 "data_size": 63488 00:34:32.805 }, 00:34:32.805 { 00:34:32.805 "name": "BaseBdev2", 00:34:32.805 "uuid": 
"3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:32.805 "is_configured": true, 00:34:32.805 "data_offset": 2048, 00:34:32.805 "data_size": 63488 00:34:32.805 }, 00:34:32.805 { 00:34:32.805 "name": "BaseBdev3", 00:34:32.805 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:32.805 "is_configured": true, 00:34:32.805 "data_offset": 2048, 00:34:32.805 "data_size": 63488 00:34:32.805 }, 00:34:32.805 { 00:34:32.805 "name": "BaseBdev4", 00:34:32.805 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:32.806 "is_configured": true, 00:34:32.806 "data_offset": 2048, 00:34:32.806 "data_size": 63488 00:34:32.806 } 00:34:32.806 ] 00:34:32.806 }' 00:34:32.806 07:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:32.806 07:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:32.806 07:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:32.806 07:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:32.806 07:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:34:33.064 07:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:33.324 [2024-07-12 07:45:06.972164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:33.324 [2024-07-12 07:45:06.972348] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:33.324 [2024-07-12 07:45:06.972432] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:34:33.324 [2024-07-12 07:45:06.972519] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:33.324 [2024-07-12 07:45:06.972909] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:33.324 [2024-07-12 07:45:06.973030] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:33.324 [2024-07-12 07:45:06.973131] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:34:33.324 [2024-07-12 07:45:06.973227] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:33.324 [2024-07-12 07:45:06.973381] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:33.324 BaseBdev1 00:34:33.324 07:45:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:34:34.261 07:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:34.261 07:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:34.261 07:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:34.261 07:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:34.261 07:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:34.261 07:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:34.261 07:45:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:34.261 07:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:34.261 07:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:34.261 07:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:34.261 07:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:34.261 07:45:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:34.520 07:45:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:34.520 "name": "raid_bdev1", 00:34:34.520 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:34.520 "strip_size_kb": 64, 00:34:34.520 "state": "online", 00:34:34.520 "raid_level": "raid5f", 00:34:34.520 "superblock": true, 00:34:34.520 "num_base_bdevs": 4, 00:34:34.520 "num_base_bdevs_discovered": 3, 00:34:34.520 "num_base_bdevs_operational": 3, 00:34:34.520 "base_bdevs_list": [ 00:34:34.520 { 00:34:34.520 "name": null, 00:34:34.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:34.520 "is_configured": false, 00:34:34.520 "data_offset": 2048, 00:34:34.520 "data_size": 63488 00:34:34.520 }, 00:34:34.520 { 00:34:34.520 "name": "BaseBdev2", 00:34:34.520 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:34.520 "is_configured": true, 00:34:34.520 "data_offset": 2048, 00:34:34.520 "data_size": 63488 00:34:34.520 }, 00:34:34.520 { 00:34:34.520 "name": "BaseBdev3", 00:34:34.520 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:34.520 "is_configured": true, 00:34:34.520 "data_offset": 2048, 00:34:34.520 "data_size": 63488 00:34:34.520 }, 00:34:34.520 { 00:34:34.520 "name": "BaseBdev4", 00:34:34.520 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:34.520 "is_configured": true, 00:34:34.520 "data_offset": 2048, 00:34:34.520 "data_size": 63488 00:34:34.520 } 00:34:34.520 ] 00:34:34.520 }' 00:34:34.520 07:45:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:34.520 07:45:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:35.088 07:45:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:35.088 07:45:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:35.088 07:45:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:35.088 07:45:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:35.088 07:45:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:35.088 07:45:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:35.088 07:45:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:35.346 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:35.346 "name": "raid_bdev1", 00:34:35.346 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:35.346 "strip_size_kb": 64, 00:34:35.346 "state": "online", 00:34:35.346 "raid_level": "raid5f", 00:34:35.346 "superblock": true, 
00:34:35.346 "num_base_bdevs": 4, 00:34:35.346 "num_base_bdevs_discovered": 3, 00:34:35.346 "num_base_bdevs_operational": 3, 00:34:35.346 "base_bdevs_list": [ 00:34:35.346 { 00:34:35.346 "name": null, 00:34:35.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:35.346 "is_configured": false, 00:34:35.346 "data_offset": 2048, 00:34:35.346 "data_size": 63488 00:34:35.346 }, 00:34:35.346 { 00:34:35.346 "name": "BaseBdev2", 00:34:35.346 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:35.346 "is_configured": true, 00:34:35.346 "data_offset": 2048, 00:34:35.346 "data_size": 63488 00:34:35.346 }, 00:34:35.346 { 00:34:35.346 "name": "BaseBdev3", 00:34:35.346 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:35.346 "is_configured": true, 00:34:35.346 "data_offset": 2048, 00:34:35.346 "data_size": 63488 00:34:35.346 }, 00:34:35.346 { 00:34:35.346 "name": "BaseBdev4", 00:34:35.346 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:35.346 "is_configured": true, 00:34:35.346 "data_offset": 2048, 00:34:35.346 "data_size": 63488 00:34:35.346 } 00:34:35.346 ] 00:34:35.346 }' 00:34:35.346 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:35.346 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:35.346 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:35.346 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:35.346 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:35.346 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:34:35.346 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:35.346 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:35.346 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:35.346 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:35.346 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:35.346 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:35.346 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:35.346 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:35.346 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:35.346 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:35.605 [2024-07-12 07:45:09.447182] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:35.605 [2024-07-12 07:45:09.447420] 
bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:35.605 [2024-07-12 07:45:09.447537] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:35.605 request: 00:34:35.605 { 00:34:35.605 "raid_bdev": "raid_bdev1", 00:34:35.605 "base_bdev": "BaseBdev1", 00:34:35.605 "method": "bdev_raid_add_base_bdev", 00:34:35.605 "req_id": 1 00:34:35.605 } 00:34:35.605 Got JSON-RPC error response 00:34:35.605 response: 00:34:35.605 { 00:34:35.605 "code": -22, 00:34:35.605 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:34:35.605 } 00:34:35.605 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:34:35.605 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:35.605 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:35.605 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:35.605 07:45:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:34:36.983 07:45:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:36.983 07:45:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:36.983 07:45:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:36.983 07:45:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:36.983 07:45:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:36.983 07:45:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:36.983 07:45:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:36.983 07:45:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:36.983 07:45:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:36.983 07:45:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:36.983 07:45:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:36.983 07:45:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:36.983 07:45:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:36.983 "name": "raid_bdev1", 00:34:36.983 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:36.983 "strip_size_kb": 64, 00:34:36.983 "state": "online", 00:34:36.983 "raid_level": "raid5f", 00:34:36.983 "superblock": true, 00:34:36.983 "num_base_bdevs": 4, 00:34:36.983 "num_base_bdevs_discovered": 3, 00:34:36.983 "num_base_bdevs_operational": 3, 00:34:36.983 "base_bdevs_list": [ 00:34:36.983 { 00:34:36.983 "name": null, 00:34:36.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:36.983 "is_configured": false, 00:34:36.983 "data_offset": 2048, 00:34:36.983 "data_size": 63488 00:34:36.983 }, 00:34:36.983 { 00:34:36.983 "name": "BaseBdev2", 00:34:36.983 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:36.983 "is_configured": true, 00:34:36.983 "data_offset": 2048, 00:34:36.983 
"data_size": 63488 00:34:36.983 }, 00:34:36.983 { 00:34:36.983 "name": "BaseBdev3", 00:34:36.983 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:36.983 "is_configured": true, 00:34:36.983 "data_offset": 2048, 00:34:36.983 "data_size": 63488 00:34:36.983 }, 00:34:36.983 { 00:34:36.983 "name": "BaseBdev4", 00:34:36.983 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:36.983 "is_configured": true, 00:34:36.983 "data_offset": 2048, 00:34:36.983 "data_size": 63488 00:34:36.983 } 00:34:36.983 ] 00:34:36.983 }' 00:34:36.983 07:45:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:36.983 07:45:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:37.549 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:37.550 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:37.550 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:37.550 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:37.550 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:37.550 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:37.550 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:37.808 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:37.808 "name": "raid_bdev1", 00:34:37.809 "uuid": "a9eb2034-3dbb-40e8-8ed8-00a8dc2fdb70", 00:34:37.809 "strip_size_kb": 64, 00:34:37.809 "state": "online", 00:34:37.809 "raid_level": "raid5f", 00:34:37.809 "superblock": true, 00:34:37.809 "num_base_bdevs": 4, 00:34:37.809 "num_base_bdevs_discovered": 3, 00:34:37.809 "num_base_bdevs_operational": 3, 00:34:37.809 "base_bdevs_list": [ 00:34:37.809 { 00:34:37.809 "name": null, 00:34:37.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:37.809 "is_configured": false, 00:34:37.809 "data_offset": 2048, 00:34:37.809 "data_size": 63488 00:34:37.809 }, 00:34:37.809 { 00:34:37.809 "name": "BaseBdev2", 00:34:37.809 "uuid": "3951c92f-ba5c-51e3-84d2-fb77247456de", 00:34:37.809 "is_configured": true, 00:34:37.809 "data_offset": 2048, 00:34:37.809 "data_size": 63488 00:34:37.809 }, 00:34:37.809 { 00:34:37.809 "name": "BaseBdev3", 00:34:37.809 "uuid": "3968aabc-c743-5e63-b749-f8819124bfac", 00:34:37.809 "is_configured": true, 00:34:37.809 "data_offset": 2048, 00:34:37.809 "data_size": 63488 00:34:37.809 }, 00:34:37.809 { 00:34:37.809 "name": "BaseBdev4", 00:34:37.809 "uuid": "bd9145da-7eeb-54c5-96c5-73044a7126de", 00:34:37.809 "is_configured": true, 00:34:37.809 "data_offset": 2048, 00:34:37.809 "data_size": 63488 00:34:37.809 } 00:34:37.809 ] 00:34:37.809 }' 00:34:37.809 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:37.809 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:37.809 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:37.809 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:37.809 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@782 -- # killprocess 166987 00:34:37.809 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@946 -- # '[' -z 166987 ']' 00:34:37.809 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # kill -0 166987 00:34:37.809 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@951 -- # uname 00:34:37.809 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:37.809 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 166987 00:34:37.809 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:37.809 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:37.809 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 166987' 00:34:37.809 killing process with pid 166987 00:34:37.809 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@965 -- # kill 166987 00:34:37.809 Received shutdown signal, test time was about 60.000000 seconds 00:34:37.809 00:34:37.809 Latency(us) 00:34:37.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:37.809 =================================================================================================================== 00:34:37.809 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:37.809 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # wait 166987 00:34:37.809 [2024-07-12 07:45:11.594544] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:37.809 [2024-07-12 07:45:11.594643] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:37.809 [2024-07-12 07:45:11.594696] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:37.809 [2024-07-12 07:45:11.594703] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:34:37.809 [2024-07-12 07:45:11.641711] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:38.076 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:34:38.076 00:34:38.076 real 0m37.403s 00:34:38.076 user 0m56.019s 00:34:38.076 sys 0m5.449s 00:34:38.076 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:38.076 07:45:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:38.076 ************************************ 00:34:38.076 END TEST raid5f_rebuild_test_sb 00:34:38.076 ************************************ 00:34:38.076 07:45:11 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:34:38.076 07:45:11 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:34:38.076 07:45:11 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:34:38.076 07:45:11 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:38.076 07:45:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:38.341 ************************************ 00:34:38.341 START TEST raid_state_function_test_sb_4k 00:34:38.341 ************************************ 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1121 -- # raid_state_function_test 
raid1 2 true 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=168040 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 168040' 00:34:38.341 Process raid pid: 168040 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 168040 /var/tmp/spdk-raid.sock 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@827 -- # '[' -z 168040 ']' 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:38.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
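(An aside on the pattern being traced here: bdev_svc is launched on a private RPC socket and the harness blocks until that socket accepts RPCs. Below is a minimal sketch of the same start-and-wait loop, assuming an SPDK checkout as the working directory; the until-loop merely stands in for autotest's waitforlisten helper, and rpc_get_methods is used only as a liveness probe.)

    sock=/var/tmp/spdk-raid.sock
    ./test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
    svc_pid=$!
    # rpc.py exits non-zero until the app brings the UNIX socket up
    until ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

(Once the loop exits, the RPC server is live and the bdev_raid_create calls traced below can proceed against "$sock".)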
00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:38.341 07:45:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:38.341 [2024-07-12 07:45:12.033825] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:34:38.341 [2024-07-12 07:45:12.034158] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:38.341 [2024-07-12 07:45:12.173929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:38.341 [2024-07-12 07:45:12.215770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:38.599 [2024-07-12 07:45:12.256886] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:38.599 07:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:38.599 07:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # return 0 00:34:38.599 07:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:34:38.599 [2024-07-12 07:45:12.461439] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:38.599 [2024-07-12 07:45:12.461692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:38.599 [2024-07-12 07:45:12.461767] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:38.599 [2024-07-12 07:45:12.461851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:38.599 07:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:34:38.599 07:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:38.599 07:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:38.600 07:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:38.600 07:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:38.600 07:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:38.600 07:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:38.600 07:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:38.600 07:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:38.600 07:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:38.857 07:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:38.857 07:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:38.857 07:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:38.857 "name": "Existed_Raid", 00:34:38.857 "uuid": "a653e83f-b35d-4acd-95f2-3d1dd65f8e31", 00:34:38.857 "strip_size_kb": 0, 00:34:38.857 "state": "configuring", 00:34:38.857 "raid_level": "raid1", 00:34:38.857 "superblock": true, 00:34:38.857 "num_base_bdevs": 2, 00:34:38.857 "num_base_bdevs_discovered": 0, 00:34:38.857 "num_base_bdevs_operational": 2, 00:34:38.857 "base_bdevs_list": [ 00:34:38.857 { 00:34:38.857 "name": "BaseBdev1", 00:34:38.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:38.857 "is_configured": false, 00:34:38.857 "data_offset": 0, 00:34:38.857 "data_size": 0 00:34:38.857 }, 00:34:38.857 { 00:34:38.857 "name": "BaseBdev2", 00:34:38.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:38.857 "is_configured": false, 00:34:38.857 "data_offset": 0, 00:34:38.857 "data_size": 0 00:34:38.857 } 00:34:38.857 ] 00:34:38.857 }' 00:34:38.857 07:45:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:38.857 07:45:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:39.423 07:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:39.682 [2024-07-12 07:45:13.449487] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:39.682 [2024-07-12 07:45:13.449646] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:34:39.682 07:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:34:39.940 [2024-07-12 07:45:13.633502] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:39.940 [2024-07-12 07:45:13.633681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:39.940 [2024-07-12 07:45:13.633767] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:39.940 [2024-07-12 07:45:13.633824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:39.940 07:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:34:39.940 [2024-07-12 07:45:13.822478] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:40.198 BaseBdev1 00:34:40.198 07:45:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:34:40.198 07:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:34:40.198 07:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:34:40.198 07:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local i 00:34:40.198 07:45:13 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:34:40.198 07:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:34:40.198 07:45:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:40.198 07:45:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:40.457 [ 00:34:40.457 { 00:34:40.457 "name": "BaseBdev1", 00:34:40.457 "aliases": [ 00:34:40.457 "b76d9c66-6e82-48b1-a969-dc66dac0ef96" 00:34:40.457 ], 00:34:40.457 "product_name": "Malloc disk", 00:34:40.457 "block_size": 4096, 00:34:40.457 "num_blocks": 8192, 00:34:40.457 "uuid": "b76d9c66-6e82-48b1-a969-dc66dac0ef96", 00:34:40.457 "assigned_rate_limits": { 00:34:40.457 "rw_ios_per_sec": 0, 00:34:40.457 "rw_mbytes_per_sec": 0, 00:34:40.457 "r_mbytes_per_sec": 0, 00:34:40.457 "w_mbytes_per_sec": 0 00:34:40.457 }, 00:34:40.457 "claimed": true, 00:34:40.457 "claim_type": "exclusive_write", 00:34:40.457 "zoned": false, 00:34:40.457 "supported_io_types": { 00:34:40.457 "read": true, 00:34:40.457 "write": true, 00:34:40.457 "unmap": true, 00:34:40.457 "write_zeroes": true, 00:34:40.457 "flush": true, 00:34:40.457 "reset": true, 00:34:40.457 "compare": false, 00:34:40.457 "compare_and_write": false, 00:34:40.457 "abort": true, 00:34:40.457 "nvme_admin": false, 00:34:40.457 "nvme_io": false 00:34:40.457 }, 00:34:40.457 "memory_domains": [ 00:34:40.457 { 00:34:40.457 "dma_device_id": "system", 00:34:40.457 "dma_device_type": 1 00:34:40.457 }, 00:34:40.457 { 00:34:40.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:40.457 "dma_device_type": 2 00:34:40.457 } 00:34:40.457 ], 00:34:40.457 "driver_specific": {} 00:34:40.457 } 00:34:40.457 ] 00:34:40.457 07:45:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # return 0 00:34:40.457 07:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:34:40.457 07:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:40.457 07:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:40.457 07:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:40.457 07:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:40.457 07:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:40.457 07:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:40.457 07:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:40.457 07:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:40.457 07:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:40.457 07:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:40.457 07:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:34:40.715 07:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:40.716 "name": "Existed_Raid", 00:34:40.716 "uuid": "9d49f232-6fe5-4a14-8032-f703cd7506b0", 00:34:40.716 "strip_size_kb": 0, 00:34:40.716 "state": "configuring", 00:34:40.716 "raid_level": "raid1", 00:34:40.716 "superblock": true, 00:34:40.716 "num_base_bdevs": 2, 00:34:40.716 "num_base_bdevs_discovered": 1, 00:34:40.716 "num_base_bdevs_operational": 2, 00:34:40.716 "base_bdevs_list": [ 00:34:40.716 { 00:34:40.716 "name": "BaseBdev1", 00:34:40.716 "uuid": "b76d9c66-6e82-48b1-a969-dc66dac0ef96", 00:34:40.716 "is_configured": true, 00:34:40.716 "data_offset": 256, 00:34:40.716 "data_size": 7936 00:34:40.716 }, 00:34:40.716 { 00:34:40.716 "name": "BaseBdev2", 00:34:40.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:40.716 "is_configured": false, 00:34:40.716 "data_offset": 0, 00:34:40.716 "data_size": 0 00:34:40.716 } 00:34:40.716 ] 00:34:40.716 }' 00:34:40.716 07:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:40.716 07:45:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:41.283 07:45:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:41.283 [2024-07-12 07:45:15.054702] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:41.283 [2024-07-12 07:45:15.054858] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:34:41.283 07:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:34:41.542 [2024-07-12 07:45:15.326790] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:41.542 [2024-07-12 07:45:15.328881] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:41.542 [2024-07-12 07:45:15.329032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:41.542 07:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:34:41.542 07:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:41.542 07:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:34:41.542 07:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:41.542 07:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:41.542 07:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:41.542 07:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:41.542 07:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:41.542 07:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:41.542 07:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:41.542 
07:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:41.542 07:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:41.542 07:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:41.542 07:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:41.801 07:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:41.801 "name": "Existed_Raid", 00:34:41.801 "uuid": "cb04f6a0-84dc-40d7-a293-cc91fe093dda", 00:34:41.801 "strip_size_kb": 0, 00:34:41.801 "state": "configuring", 00:34:41.801 "raid_level": "raid1", 00:34:41.801 "superblock": true, 00:34:41.801 "num_base_bdevs": 2, 00:34:41.801 "num_base_bdevs_discovered": 1, 00:34:41.801 "num_base_bdevs_operational": 2, 00:34:41.801 "base_bdevs_list": [ 00:34:41.801 { 00:34:41.801 "name": "BaseBdev1", 00:34:41.801 "uuid": "b76d9c66-6e82-48b1-a969-dc66dac0ef96", 00:34:41.801 "is_configured": true, 00:34:41.801 "data_offset": 256, 00:34:41.801 "data_size": 7936 00:34:41.801 }, 00:34:41.801 { 00:34:41.801 "name": "BaseBdev2", 00:34:41.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:41.801 "is_configured": false, 00:34:41.801 "data_offset": 0, 00:34:41.801 "data_size": 0 00:34:41.801 } 00:34:41.801 ] 00:34:41.801 }' 00:34:41.801 07:45:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:41.801 07:45:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:42.368 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:34:42.626 [2024-07-12 07:45:16.297821] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:42.626 [2024-07-12 07:45:16.298525] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:34:42.626 [2024-07-12 07:45:16.298835] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:42.626 BaseBdev2 00:34:42.626 [2024-07-12 07:45:16.299486] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:34:42.626 [2024-07-12 07:45:16.300751] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:34:42.626 [2024-07-12 07:45:16.301017] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:34:42.626 [2024-07-12 07:45:16.301568] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:42.626 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:34:42.626 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:34:42.626 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:34:42.626 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local i 00:34:42.626 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:34:42.626 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 
-- # bdev_timeout=2000 00:34:42.626 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:42.885 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:43.144 [ 00:34:43.144 { 00:34:43.144 "name": "BaseBdev2", 00:34:43.145 "aliases": [ 00:34:43.145 "c61d1b44-7808-45ca-9e77-81fbea3d435a" 00:34:43.145 ], 00:34:43.145 "product_name": "Malloc disk", 00:34:43.145 "block_size": 4096, 00:34:43.145 "num_blocks": 8192, 00:34:43.145 "uuid": "c61d1b44-7808-45ca-9e77-81fbea3d435a", 00:34:43.145 "assigned_rate_limits": { 00:34:43.145 "rw_ios_per_sec": 0, 00:34:43.145 "rw_mbytes_per_sec": 0, 00:34:43.145 "r_mbytes_per_sec": 0, 00:34:43.145 "w_mbytes_per_sec": 0 00:34:43.145 }, 00:34:43.145 "claimed": true, 00:34:43.145 "claim_type": "exclusive_write", 00:34:43.145 "zoned": false, 00:34:43.145 "supported_io_types": { 00:34:43.145 "read": true, 00:34:43.145 "write": true, 00:34:43.145 "unmap": true, 00:34:43.145 "write_zeroes": true, 00:34:43.145 "flush": true, 00:34:43.145 "reset": true, 00:34:43.145 "compare": false, 00:34:43.145 "compare_and_write": false, 00:34:43.145 "abort": true, 00:34:43.145 "nvme_admin": false, 00:34:43.145 "nvme_io": false 00:34:43.145 }, 00:34:43.145 "memory_domains": [ 00:34:43.145 { 00:34:43.145 "dma_device_id": "system", 00:34:43.145 "dma_device_type": 1 00:34:43.145 }, 00:34:43.145 { 00:34:43.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:43.145 "dma_device_type": 2 00:34:43.145 } 00:34:43.145 ], 00:34:43.145 "driver_specific": {} 00:34:43.145 } 00:34:43.145 ] 00:34:43.145 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # return 0 00:34:43.145 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:34:43.145 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:43.145 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:34:43.145 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:43.145 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:43.145 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:43.145 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:43.145 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:43.145 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:43.145 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:43.145 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:43.145 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:43.145 07:45:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:43.145 07:45:16 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:43.145 07:45:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:43.145 "name": "Existed_Raid", 00:34:43.145 "uuid": "cb04f6a0-84dc-40d7-a293-cc91fe093dda", 00:34:43.145 "strip_size_kb": 0, 00:34:43.145 "state": "online", 00:34:43.145 "raid_level": "raid1", 00:34:43.145 "superblock": true, 00:34:43.145 "num_base_bdevs": 2, 00:34:43.145 "num_base_bdevs_discovered": 2, 00:34:43.145 "num_base_bdevs_operational": 2, 00:34:43.145 "base_bdevs_list": [ 00:34:43.145 { 00:34:43.145 "name": "BaseBdev1", 00:34:43.145 "uuid": "b76d9c66-6e82-48b1-a969-dc66dac0ef96", 00:34:43.145 "is_configured": true, 00:34:43.145 "data_offset": 256, 00:34:43.145 "data_size": 7936 00:34:43.145 }, 00:34:43.145 { 00:34:43.145 "name": "BaseBdev2", 00:34:43.145 "uuid": "c61d1b44-7808-45ca-9e77-81fbea3d435a", 00:34:43.145 "is_configured": true, 00:34:43.145 "data_offset": 256, 00:34:43.145 "data_size": 7936 00:34:43.145 } 00:34:43.145 ] 00:34:43.145 }' 00:34:43.145 07:45:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:43.145 07:45:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:43.713 07:45:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:34:43.713 07:45:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:34:43.713 07:45:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:43.713 07:45:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:43.713 07:45:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:43.713 07:45:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:34:43.713 07:45:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:34:43.713 07:45:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:43.972 [2024-07-12 07:45:17.802342] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:43.972 07:45:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:43.972 "name": "Existed_Raid", 00:34:43.972 "aliases": [ 00:34:43.972 "cb04f6a0-84dc-40d7-a293-cc91fe093dda" 00:34:43.972 ], 00:34:43.972 "product_name": "Raid Volume", 00:34:43.972 "block_size": 4096, 00:34:43.972 "num_blocks": 7936, 00:34:43.972 "uuid": "cb04f6a0-84dc-40d7-a293-cc91fe093dda", 00:34:43.972 "assigned_rate_limits": { 00:34:43.972 "rw_ios_per_sec": 0, 00:34:43.972 "rw_mbytes_per_sec": 0, 00:34:43.972 "r_mbytes_per_sec": 0, 00:34:43.972 "w_mbytes_per_sec": 0 00:34:43.972 }, 00:34:43.972 "claimed": false, 00:34:43.972 "zoned": false, 00:34:43.972 "supported_io_types": { 00:34:43.972 "read": true, 00:34:43.972 "write": true, 00:34:43.972 "unmap": false, 00:34:43.972 "write_zeroes": true, 00:34:43.972 "flush": false, 00:34:43.972 "reset": true, 00:34:43.972 "compare": false, 00:34:43.972 "compare_and_write": false, 00:34:43.972 "abort": false, 00:34:43.972 "nvme_admin": false, 00:34:43.972 "nvme_io": false 00:34:43.972 }, 00:34:43.972 "memory_domains": [ 00:34:43.972 { 
00:34:43.972 "dma_device_id": "system", 00:34:43.972 "dma_device_type": 1 00:34:43.972 }, 00:34:43.972 { 00:34:43.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:43.972 "dma_device_type": 2 00:34:43.972 }, 00:34:43.972 { 00:34:43.972 "dma_device_id": "system", 00:34:43.972 "dma_device_type": 1 00:34:43.972 }, 00:34:43.972 { 00:34:43.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:43.972 "dma_device_type": 2 00:34:43.972 } 00:34:43.972 ], 00:34:43.972 "driver_specific": { 00:34:43.972 "raid": { 00:34:43.972 "uuid": "cb04f6a0-84dc-40d7-a293-cc91fe093dda", 00:34:43.972 "strip_size_kb": 0, 00:34:43.972 "state": "online", 00:34:43.972 "raid_level": "raid1", 00:34:43.972 "superblock": true, 00:34:43.972 "num_base_bdevs": 2, 00:34:43.972 "num_base_bdevs_discovered": 2, 00:34:43.972 "num_base_bdevs_operational": 2, 00:34:43.972 "base_bdevs_list": [ 00:34:43.972 { 00:34:43.972 "name": "BaseBdev1", 00:34:43.972 "uuid": "b76d9c66-6e82-48b1-a969-dc66dac0ef96", 00:34:43.972 "is_configured": true, 00:34:43.972 "data_offset": 256, 00:34:43.972 "data_size": 7936 00:34:43.972 }, 00:34:43.972 { 00:34:43.972 "name": "BaseBdev2", 00:34:43.972 "uuid": "c61d1b44-7808-45ca-9e77-81fbea3d435a", 00:34:43.972 "is_configured": true, 00:34:43.972 "data_offset": 256, 00:34:43.972 "data_size": 7936 00:34:43.972 } 00:34:43.972 ] 00:34:43.972 } 00:34:43.972 } 00:34:43.972 }' 00:34:43.972 07:45:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:44.232 07:45:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:34:44.232 BaseBdev2' 00:34:44.232 07:45:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:44.232 07:45:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:34:44.232 07:45:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:44.232 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:44.232 "name": "BaseBdev1", 00:34:44.232 "aliases": [ 00:34:44.232 "b76d9c66-6e82-48b1-a969-dc66dac0ef96" 00:34:44.232 ], 00:34:44.232 "product_name": "Malloc disk", 00:34:44.232 "block_size": 4096, 00:34:44.232 "num_blocks": 8192, 00:34:44.232 "uuid": "b76d9c66-6e82-48b1-a969-dc66dac0ef96", 00:34:44.232 "assigned_rate_limits": { 00:34:44.232 "rw_ios_per_sec": 0, 00:34:44.232 "rw_mbytes_per_sec": 0, 00:34:44.232 "r_mbytes_per_sec": 0, 00:34:44.232 "w_mbytes_per_sec": 0 00:34:44.232 }, 00:34:44.232 "claimed": true, 00:34:44.232 "claim_type": "exclusive_write", 00:34:44.232 "zoned": false, 00:34:44.232 "supported_io_types": { 00:34:44.232 "read": true, 00:34:44.232 "write": true, 00:34:44.232 "unmap": true, 00:34:44.232 "write_zeroes": true, 00:34:44.232 "flush": true, 00:34:44.232 "reset": true, 00:34:44.232 "compare": false, 00:34:44.232 "compare_and_write": false, 00:34:44.232 "abort": true, 00:34:44.232 "nvme_admin": false, 00:34:44.232 "nvme_io": false 00:34:44.232 }, 00:34:44.232 "memory_domains": [ 00:34:44.232 { 00:34:44.232 "dma_device_id": "system", 00:34:44.232 "dma_device_type": 1 00:34:44.232 }, 00:34:44.232 { 00:34:44.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:44.232 "dma_device_type": 2 00:34:44.232 } 00:34:44.232 ], 00:34:44.232 "driver_specific": {} 00:34:44.232 }' 00:34:44.232 
07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:44.232 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:44.491 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:34:44.491 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:44.491 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:44.491 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:44.491 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:44.491 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:44.491 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:44.491 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:44.787 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:44.787 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:44.787 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:44.787 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:34:44.787 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:45.071 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:45.071 "name": "BaseBdev2", 00:34:45.071 "aliases": [ 00:34:45.071 "c61d1b44-7808-45ca-9e77-81fbea3d435a" 00:34:45.071 ], 00:34:45.071 "product_name": "Malloc disk", 00:34:45.071 "block_size": 4096, 00:34:45.071 "num_blocks": 8192, 00:34:45.071 "uuid": "c61d1b44-7808-45ca-9e77-81fbea3d435a", 00:34:45.071 "assigned_rate_limits": { 00:34:45.071 "rw_ios_per_sec": 0, 00:34:45.071 "rw_mbytes_per_sec": 0, 00:34:45.071 "r_mbytes_per_sec": 0, 00:34:45.071 "w_mbytes_per_sec": 0 00:34:45.071 }, 00:34:45.071 "claimed": true, 00:34:45.071 "claim_type": "exclusive_write", 00:34:45.071 "zoned": false, 00:34:45.071 "supported_io_types": { 00:34:45.071 "read": true, 00:34:45.071 "write": true, 00:34:45.071 "unmap": true, 00:34:45.071 "write_zeroes": true, 00:34:45.071 "flush": true, 00:34:45.071 "reset": true, 00:34:45.071 "compare": false, 00:34:45.071 "compare_and_write": false, 00:34:45.071 "abort": true, 00:34:45.071 "nvme_admin": false, 00:34:45.071 "nvme_io": false 00:34:45.071 }, 00:34:45.071 "memory_domains": [ 00:34:45.071 { 00:34:45.071 "dma_device_id": "system", 00:34:45.071 "dma_device_type": 1 00:34:45.071 }, 00:34:45.071 { 00:34:45.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:45.071 "dma_device_type": 2 00:34:45.071 } 00:34:45.071 ], 00:34:45.071 "driver_specific": {} 00:34:45.071 }' 00:34:45.071 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:45.071 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:45.071 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:34:45.071 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 
-- # jq .md_size 00:34:45.071 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:45.071 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:45.071 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:45.071 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:45.329 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:45.329 07:45:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:45.329 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:45.329 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:45.329 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:45.587 [2024-07-12 07:45:19.318489] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:45.587 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:34:45.588 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:34:45.588 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:34:45.588 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:34:45.588 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:34:45.588 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:34:45.588 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:45.588 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:45.588 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:45.588 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:45.588 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:45.588 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:45.588 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:45.588 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:45.588 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:45.588 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:45.588 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:45.846 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:45.846 "name": "Existed_Raid", 00:34:45.846 "uuid": "cb04f6a0-84dc-40d7-a293-cc91fe093dda", 00:34:45.846 "strip_size_kb": 0, 00:34:45.846 "state": "online", 
00:34:45.846 "raid_level": "raid1", 00:34:45.846 "superblock": true, 00:34:45.846 "num_base_bdevs": 2, 00:34:45.846 "num_base_bdevs_discovered": 1, 00:34:45.846 "num_base_bdevs_operational": 1, 00:34:45.846 "base_bdevs_list": [ 00:34:45.846 { 00:34:45.846 "name": null, 00:34:45.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:45.846 "is_configured": false, 00:34:45.846 "data_offset": 256, 00:34:45.846 "data_size": 7936 00:34:45.846 }, 00:34:45.846 { 00:34:45.846 "name": "BaseBdev2", 00:34:45.846 "uuid": "c61d1b44-7808-45ca-9e77-81fbea3d435a", 00:34:45.846 "is_configured": true, 00:34:45.846 "data_offset": 256, 00:34:45.846 "data_size": 7936 00:34:45.846 } 00:34:45.846 ] 00:34:45.846 }' 00:34:45.846 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:45.846 07:45:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:46.412 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:34:46.412 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:46.412 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:46.412 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:34:46.412 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:34:46.412 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:46.412 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:34:46.671 [2024-07-12 07:45:20.545897] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:46.671 [2024-07-12 07:45:20.546215] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:46.929 [2024-07-12 07:45:20.567967] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:46.929 [2024-07-12 07:45:20.568238] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:46.929 [2024-07-12 07:45:20.568318] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:34:46.929 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:34:46.929 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:46.929 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:46.929 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:34:46.929 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:34:46.929 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:34:46.929 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:34:46.929 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 168040 00:34:46.929 07:45:20 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@946 -- # '[' -z 168040 ']' 00:34:46.929 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # kill -0 168040 00:34:46.929 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@951 -- # uname 00:34:46.929 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:46.929 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 168040 00:34:46.929 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:46.929 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:46.929 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 168040' 00:34:46.929 killing process with pid 168040 00:34:46.929 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@965 -- # kill 168040 00:34:46.929 [2024-07-12 07:45:20.801395] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:46.929 07:45:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # wait 168040 00:34:46.929 [2024-07-12 07:45:20.801571] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:47.497 07:45:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:34:47.497 00:34:47.497 real 0m9.232s 00:34:47.497 user 0m16.594s 00:34:47.497 sys 0m1.669s 00:34:47.497 07:45:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:47.497 07:45:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:47.497 ************************************ 00:34:47.497 END TEST raid_state_function_test_sb_4k 00:34:47.497 ************************************ 00:34:47.497 07:45:21 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:34:47.497 07:45:21 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:34:47.497 07:45:21 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:47.497 07:45:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:47.497 ************************************ 00:34:47.497 START TEST raid_superblock_test_4k 00:34:47.497 ************************************ 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:34:47.497 07:45:21 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=168384 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 168384 /var/tmp/spdk-raid.sock 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@827 -- # '[' -z 168384 ']' 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:47.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:47.497 07:45:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:34:47.497 [2024-07-12 07:45:21.363834] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
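Behind the harness plumbing traced above, the @410-412 startup amounts to launching bdev_svc in the background with bdev_raid debug logging and waiting until its JSON-RPC socket answers. A hedged sketch; the readiness loop below, probing with the stock rpc_get_methods method, is our simplification of the suite's waitforlisten helper, not its actual code:

    # Sketch: start the app under test and wait for its RPC socket to respond.
    "$SPDK_DIR/test/app/bdev_svc/bdev_svc" -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$raid_pid" || exit 1    # give up if the app died during startup
        sleep 0.1
    done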
00:34:47.497 [2024-07-12 07:45:21.364337] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168384 ] 00:34:47.756 [2024-07-12 07:45:21.519563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.756 [2024-07-12 07:45:21.572028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.756 [2024-07-12 07:45:21.618756] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:48.694 07:45:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:48.694 07:45:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # return 0 00:34:48.694 07:45:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:34:48.694 07:45:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:34:48.694 07:45:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:34:48.694 07:45:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:34:48.694 07:45:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:34:48.694 07:45:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:48.694 07:45:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:34:48.694 07:45:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:48.694 07:45:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:34:48.694 malloc1 00:34:48.694 07:45:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:48.953 [2024-07-12 07:45:22.596710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:48.953 [2024-07-12 07:45:22.597006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:48.953 [2024-07-12 07:45:22.597075] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:34:48.953 [2024-07-12 07:45:22.597192] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:48.953 [2024-07-12 07:45:22.599644] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:48.953 [2024-07-12 07:45:22.599816] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:48.953 pt1 00:34:48.953 07:45:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:34:48.953 07:45:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:34:48.953 07:45:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:34:48.953 07:45:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:34:48.953 07:45:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:34:48.953 07:45:22 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:48.953 07:45:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:34:48.953 07:45:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:48.953 07:45:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:34:48.953 malloc2 00:34:48.953 07:45:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:49.213 [2024-07-12 07:45:23.013194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:49.213 [2024-07-12 07:45:23.013404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:49.213 [2024-07-12 07:45:23.013470] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:34:49.213 [2024-07-12 07:45:23.013580] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:49.213 [2024-07-12 07:45:23.015849] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:49.213 [2024-07-12 07:45:23.016003] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:49.213 pt2 00:34:49.213 07:45:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:34:49.213 07:45:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:34:49.213 07:45:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:34:49.472 [2024-07-12 07:45:23.277318] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:49.472 [2024-07-12 07:45:23.279404] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:49.472 [2024-07-12 07:45:23.279708] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:34:49.472 [2024-07-12 07:45:23.279806] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:49.472 [2024-07-12 07:45:23.279959] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:34:49.472 [2024-07-12 07:45:23.280460] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:34:49.472 [2024-07-12 07:45:23.280571] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:34:49.472 [2024-07-12 07:45:23.280783] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:49.472 07:45:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:49.472 07:45:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:49.472 07:45:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:49.472 07:45:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:49.472 07:45:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
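Condensed, the stack assembled in the trace above is: two 32 MiB malloc bdevs with 4 KiB blocks, each wrapped in a passthru bdev, combined into a raid1 whose -s flag writes an on-disk superblock. A sketch of the same calls, with $SPDK_DIR assumed as in the earlier sketch:

    # Sketch: build the malloc -> passthru -> raid1 stack used by this test.
    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2; do
        $rpc bdev_malloc_create 32 4096 -b "malloc$i"    # 32 MiB, 4 KiB blocks
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    $rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s    # -s: with superblock

32 MiB at a 4 KiB block size gives the 8192 num_blocks reported for each base bdev; the raid volume itself exposes 7936 blocks because each leg reserves the superblock region ahead of the data, which is also why the dumps show data_offset 256.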
00:34:49.472 07:45:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:49.472 07:45:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:49.472 07:45:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:49.472 07:45:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:49.472 07:45:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:49.472 07:45:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:49.472 07:45:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:49.731 07:45:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:49.731 "name": "raid_bdev1", 00:34:49.731 "uuid": "1e216093-d22e-4001-b868-d92e01870312", 00:34:49.731 "strip_size_kb": 0, 00:34:49.731 "state": "online", 00:34:49.731 "raid_level": "raid1", 00:34:49.731 "superblock": true, 00:34:49.731 "num_base_bdevs": 2, 00:34:49.731 "num_base_bdevs_discovered": 2, 00:34:49.731 "num_base_bdevs_operational": 2, 00:34:49.731 "base_bdevs_list": [ 00:34:49.731 { 00:34:49.731 "name": "pt1", 00:34:49.731 "uuid": "b1a901da-06aa-548f-a08f-a899a16b6963", 00:34:49.731 "is_configured": true, 00:34:49.731 "data_offset": 256, 00:34:49.731 "data_size": 7936 00:34:49.731 }, 00:34:49.731 { 00:34:49.731 "name": "pt2", 00:34:49.731 "uuid": "69abd40c-8498-5ccd-9ca6-e99a52cbb760", 00:34:49.731 "is_configured": true, 00:34:49.731 "data_offset": 256, 00:34:49.731 "data_size": 7936 00:34:49.731 } 00:34:49.731 ] 00:34:49.731 }' 00:34:49.731 07:45:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:49.731 07:45:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:34:50.297 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:34:50.297 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:34:50.297 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:50.297 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:50.297 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:50.297 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:34:50.297 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:50.297 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:50.555 [2024-07-12 07:45:24.253613] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:50.555 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:50.555 "name": "raid_bdev1", 00:34:50.555 "aliases": [ 00:34:50.555 "1e216093-d22e-4001-b868-d92e01870312" 00:34:50.555 ], 00:34:50.555 "product_name": "Raid Volume", 00:34:50.555 "block_size": 4096, 00:34:50.555 "num_blocks": 7936, 00:34:50.555 "uuid": "1e216093-d22e-4001-b868-d92e01870312", 00:34:50.555 "assigned_rate_limits": { 00:34:50.555 
"rw_ios_per_sec": 0, 00:34:50.555 "rw_mbytes_per_sec": 0, 00:34:50.555 "r_mbytes_per_sec": 0, 00:34:50.555 "w_mbytes_per_sec": 0 00:34:50.555 }, 00:34:50.555 "claimed": false, 00:34:50.555 "zoned": false, 00:34:50.555 "supported_io_types": { 00:34:50.555 "read": true, 00:34:50.555 "write": true, 00:34:50.555 "unmap": false, 00:34:50.555 "write_zeroes": true, 00:34:50.555 "flush": false, 00:34:50.555 "reset": true, 00:34:50.555 "compare": false, 00:34:50.555 "compare_and_write": false, 00:34:50.555 "abort": false, 00:34:50.555 "nvme_admin": false, 00:34:50.555 "nvme_io": false 00:34:50.555 }, 00:34:50.555 "memory_domains": [ 00:34:50.555 { 00:34:50.555 "dma_device_id": "system", 00:34:50.555 "dma_device_type": 1 00:34:50.555 }, 00:34:50.555 { 00:34:50.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:50.555 "dma_device_type": 2 00:34:50.555 }, 00:34:50.555 { 00:34:50.555 "dma_device_id": "system", 00:34:50.555 "dma_device_type": 1 00:34:50.555 }, 00:34:50.555 { 00:34:50.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:50.555 "dma_device_type": 2 00:34:50.555 } 00:34:50.555 ], 00:34:50.555 "driver_specific": { 00:34:50.555 "raid": { 00:34:50.555 "uuid": "1e216093-d22e-4001-b868-d92e01870312", 00:34:50.555 "strip_size_kb": 0, 00:34:50.555 "state": "online", 00:34:50.555 "raid_level": "raid1", 00:34:50.555 "superblock": true, 00:34:50.555 "num_base_bdevs": 2, 00:34:50.555 "num_base_bdevs_discovered": 2, 00:34:50.555 "num_base_bdevs_operational": 2, 00:34:50.555 "base_bdevs_list": [ 00:34:50.555 { 00:34:50.555 "name": "pt1", 00:34:50.555 "uuid": "b1a901da-06aa-548f-a08f-a899a16b6963", 00:34:50.556 "is_configured": true, 00:34:50.556 "data_offset": 256, 00:34:50.556 "data_size": 7936 00:34:50.556 }, 00:34:50.556 { 00:34:50.556 "name": "pt2", 00:34:50.556 "uuid": "69abd40c-8498-5ccd-9ca6-e99a52cbb760", 00:34:50.556 "is_configured": true, 00:34:50.556 "data_offset": 256, 00:34:50.556 "data_size": 7936 00:34:50.556 } 00:34:50.556 ] 00:34:50.556 } 00:34:50.556 } 00:34:50.556 }' 00:34:50.556 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:50.556 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:34:50.556 pt2' 00:34:50.556 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:50.556 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:34:50.556 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:50.814 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:50.814 "name": "pt1", 00:34:50.814 "aliases": [ 00:34:50.814 "b1a901da-06aa-548f-a08f-a899a16b6963" 00:34:50.814 ], 00:34:50.814 "product_name": "passthru", 00:34:50.814 "block_size": 4096, 00:34:50.814 "num_blocks": 8192, 00:34:50.814 "uuid": "b1a901da-06aa-548f-a08f-a899a16b6963", 00:34:50.814 "assigned_rate_limits": { 00:34:50.814 "rw_ios_per_sec": 0, 00:34:50.814 "rw_mbytes_per_sec": 0, 00:34:50.814 "r_mbytes_per_sec": 0, 00:34:50.814 "w_mbytes_per_sec": 0 00:34:50.814 }, 00:34:50.814 "claimed": true, 00:34:50.814 "claim_type": "exclusive_write", 00:34:50.814 "zoned": false, 00:34:50.814 "supported_io_types": { 00:34:50.814 "read": true, 00:34:50.814 "write": true, 00:34:50.814 "unmap": true, 00:34:50.814 "write_zeroes": true, 
00:34:50.814 "flush": true, 00:34:50.814 "reset": true, 00:34:50.814 "compare": false, 00:34:50.814 "compare_and_write": false, 00:34:50.814 "abort": true, 00:34:50.814 "nvme_admin": false, 00:34:50.814 "nvme_io": false 00:34:50.814 }, 00:34:50.814 "memory_domains": [ 00:34:50.814 { 00:34:50.814 "dma_device_id": "system", 00:34:50.814 "dma_device_type": 1 00:34:50.814 }, 00:34:50.814 { 00:34:50.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:50.814 "dma_device_type": 2 00:34:50.814 } 00:34:50.814 ], 00:34:50.814 "driver_specific": { 00:34:50.814 "passthru": { 00:34:50.814 "name": "pt1", 00:34:50.814 "base_bdev_name": "malloc1" 00:34:50.814 } 00:34:50.814 } 00:34:50.814 }' 00:34:50.814 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:50.814 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:50.814 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:34:50.814 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:50.814 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:50.814 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:50.814 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:50.814 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:51.073 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:51.073 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:51.073 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:51.073 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:51.073 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:51.073 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:51.073 07:45:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:34:51.332 07:45:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:51.332 "name": "pt2", 00:34:51.332 "aliases": [ 00:34:51.332 "69abd40c-8498-5ccd-9ca6-e99a52cbb760" 00:34:51.332 ], 00:34:51.332 "product_name": "passthru", 00:34:51.332 "block_size": 4096, 00:34:51.332 "num_blocks": 8192, 00:34:51.332 "uuid": "69abd40c-8498-5ccd-9ca6-e99a52cbb760", 00:34:51.332 "assigned_rate_limits": { 00:34:51.332 "rw_ios_per_sec": 0, 00:34:51.332 "rw_mbytes_per_sec": 0, 00:34:51.332 "r_mbytes_per_sec": 0, 00:34:51.332 "w_mbytes_per_sec": 0 00:34:51.332 }, 00:34:51.332 "claimed": true, 00:34:51.332 "claim_type": "exclusive_write", 00:34:51.332 "zoned": false, 00:34:51.332 "supported_io_types": { 00:34:51.332 "read": true, 00:34:51.332 "write": true, 00:34:51.332 "unmap": true, 00:34:51.332 "write_zeroes": true, 00:34:51.332 "flush": true, 00:34:51.332 "reset": true, 00:34:51.332 "compare": false, 00:34:51.332 "compare_and_write": false, 00:34:51.332 "abort": true, 00:34:51.332 "nvme_admin": false, 00:34:51.332 "nvme_io": false 00:34:51.332 }, 00:34:51.332 "memory_domains": [ 00:34:51.332 { 00:34:51.332 "dma_device_id": "system", 00:34:51.332 "dma_device_type": 1 00:34:51.332 }, 00:34:51.332 { 00:34:51.332 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:51.332 "dma_device_type": 2 00:34:51.332 } 00:34:51.332 ], 00:34:51.332 "driver_specific": { 00:34:51.332 "passthru": { 00:34:51.332 "name": "pt2", 00:34:51.332 "base_bdev_name": "malloc2" 00:34:51.332 } 00:34:51.332 } 00:34:51.332 }' 00:34:51.332 07:45:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:51.332 07:45:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:51.332 07:45:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:34:51.332 07:45:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:51.332 07:45:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:51.591 07:45:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:51.591 07:45:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:51.591 07:45:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:51.591 07:45:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:51.591 07:45:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:51.591 07:45:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:51.591 07:45:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:51.591 07:45:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:51.591 07:45:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:34:51.850 [2024-07-12 07:45:25.613801] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:51.850 07:45:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=1e216093-d22e-4001-b868-d92e01870312 00:34:51.850 07:45:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z 1e216093-d22e-4001-b868-d92e01870312 ']' 00:34:51.850 07:45:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:52.109 [2024-07-12 07:45:25.785670] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:52.109 [2024-07-12 07:45:25.785775] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:52.109 [2024-07-12 07:45:25.785973] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:52.109 [2024-07-12 07:45:25.786099] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:52.109 [2024-07-12 07:45:25.786161] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:34:52.109 07:45:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:52.109 07:45:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:34:52.369 07:45:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:34:52.369 07:45:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:34:52.369 07:45:26 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:34:52.369 07:45:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:34:52.629 07:45:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:34:52.629 07:45:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:52.629 07:45:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:34:52.629 07:45:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:34:52.888 07:45:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:34:52.888 07:45:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:34:52.888 07:45:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:34:52.888 07:45:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:34:52.888 07:45:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:52.888 07:45:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:52.888 07:45:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:52.888 07:45:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:52.888 07:45:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:52.888 07:45:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:52.888 07:45:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:52.889 07:45:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:52.889 07:45:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:34:53.148 [2024-07-12 07:45:26.862166] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:34:53.148 [2024-07-12 07:45:26.864237] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:34:53.148 [2024-07-12 07:45:26.864397] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:34:53.148 [2024-07-12 07:45:26.865006] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:34:53.148 [2024-07-12 07:45:26.865245] bdev_raid.c:2356:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:34:53.148 [2024-07-12 07:45:26.865360] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:34:53.148 request: 00:34:53.148 { 00:34:53.148 "name": "raid_bdev1", 00:34:53.148 "raid_level": "raid1", 00:34:53.148 "base_bdevs": [ 00:34:53.148 "malloc1", 00:34:53.148 "malloc2" 00:34:53.148 ], 00:34:53.148 "superblock": false, 00:34:53.148 "method": "bdev_raid_create", 00:34:53.148 "req_id": 1 00:34:53.148 } 00:34:53.148 Got JSON-RPC error response 00:34:53.148 response: 00:34:53.148 { 00:34:53.148 "code": -17, 00:34:53.148 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:34:53.148 } 00:34:53.148 07:45:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:34:53.148 07:45:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:53.148 07:45:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:53.148 07:45:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:53.148 07:45:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:34:53.148 07:45:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:53.407 07:45:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:34:53.407 07:45:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:34:53.407 07:45:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:53.407 [2024-07-12 07:45:27.282210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:53.407 [2024-07-12 07:45:27.282515] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:53.407 [2024-07-12 07:45:27.282736] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:34:53.407 [2024-07-12 07:45:27.282943] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:53.407 [2024-07-12 07:45:27.285222] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:53.407 [2024-07-12 07:45:27.285495] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:53.407 [2024-07-12 07:45:27.285753] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:34:53.407 [2024-07-12 07:45:27.285914] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:53.407 pt1 00:34:53.667 07:45:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:34:53.667 07:45:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:53.667 07:45:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:53.667 07:45:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:53.667 07:45:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:53.667 07:45:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
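The NOT wrapper exercised at @456 above simply inverts an exit status: this bdev_raid_create call must fail with -17 (File exists) because malloc1 and malloc2 already carry the superblock that was written through pt1 and pt2. An inline equivalent of that negative check (NOT itself lives in autotest_common.sh; this condensed form is ours):

    # Sketch: creating a raid directly on base bdevs that already hold a
    # foreign raid superblock has to be rejected by the RPC.
    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    if $rpc bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
        echo "ERROR: bdev_raid_create unexpectedly succeeded" >&2
        exit 1
    fi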
00:34:53.667 07:45:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:53.667 07:45:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:53.667 07:45:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:53.667 07:45:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:53.667 07:45:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:53.667 07:45:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:53.667 07:45:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:53.667 "name": "raid_bdev1", 00:34:53.668 "uuid": "1e216093-d22e-4001-b868-d92e01870312", 00:34:53.668 "strip_size_kb": 0, 00:34:53.668 "state": "configuring", 00:34:53.668 "raid_level": "raid1", 00:34:53.668 "superblock": true, 00:34:53.668 "num_base_bdevs": 2, 00:34:53.668 "num_base_bdevs_discovered": 1, 00:34:53.668 "num_base_bdevs_operational": 2, 00:34:53.668 "base_bdevs_list": [ 00:34:53.668 { 00:34:53.668 "name": "pt1", 00:34:53.668 "uuid": "b1a901da-06aa-548f-a08f-a899a16b6963", 00:34:53.668 "is_configured": true, 00:34:53.668 "data_offset": 256, 00:34:53.668 "data_size": 7936 00:34:53.668 }, 00:34:53.668 { 00:34:53.668 "name": null, 00:34:53.668 "uuid": "69abd40c-8498-5ccd-9ca6-e99a52cbb760", 00:34:53.668 "is_configured": false, 00:34:53.668 "data_offset": 256, 00:34:53.668 "data_size": 7936 00:34:53.668 } 00:34:53.668 ] 00:34:53.668 }' 00:34:53.668 07:45:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:53.668 07:45:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:34:54.236 07:45:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:34:54.236 07:45:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:34:54.236 07:45:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:34:54.236 07:45:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:54.496 [2024-07-12 07:45:28.238378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:54.496 [2024-07-12 07:45:28.238808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:54.496 [2024-07-12 07:45:28.239027] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:34:54.496 [2024-07-12 07:45:28.239257] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:54.496 [2024-07-12 07:45:28.239742] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:54.496 [2024-07-12 07:45:28.239971] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:54.496 [2024-07-12 07:45:28.240224] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:54.496 [2024-07-12 07:45:28.240343] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:54.496 [2024-07-12 07:45:28.240472] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 
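verify_raid_bdev_state (@116-128) is the workhorse assertion of both tests: fetch all raid bdevs, select one by name, and compare its state and base-bdev counts against expectations. A compact sketch of the 'configuring' check performed above, from before pt2 was re-created ($SPDK_DIR assumed as before):

    # Sketch: the state-assertion pattern behind verify_raid_bdev_state.
    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    tmp=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(echo "$tmp" | jq -r .state) == configuring ]]
    [[ $(echo "$tmp" | jq -r .num_base_bdevs_discovered) == 1 ]]
    [[ $(echo "$tmp" | jq -r .num_base_bdevs_operational) == 2 ]]

Now that the superblock on pt2 has been recognized, the @482 check below runs the same probe but expects state online with num_base_bdevs_discovered 2.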
00:34:54.496 [2024-07-12 07:45:28.240586] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:54.496 [2024-07-12 07:45:28.240730] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:34:54.496 [2024-07-12 07:45:28.241092] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:34:54.496 [2024-07-12 07:45:28.241195] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:34:54.496 [2024-07-12 07:45:28.241356] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:54.496 pt2 00:34:54.496 07:45:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:34:54.496 07:45:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:34:54.496 07:45:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:54.496 07:45:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:54.496 07:45:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:54.496 07:45:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:54.496 07:45:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:54.496 07:45:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:54.496 07:45:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:54.496 07:45:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:54.496 07:45:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:54.496 07:45:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:54.496 07:45:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:54.496 07:45:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:54.755 07:45:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:54.755 "name": "raid_bdev1", 00:34:54.755 "uuid": "1e216093-d22e-4001-b868-d92e01870312", 00:34:54.755 "strip_size_kb": 0, 00:34:54.755 "state": "online", 00:34:54.756 "raid_level": "raid1", 00:34:54.756 "superblock": true, 00:34:54.756 "num_base_bdevs": 2, 00:34:54.756 "num_base_bdevs_discovered": 2, 00:34:54.756 "num_base_bdevs_operational": 2, 00:34:54.756 "base_bdevs_list": [ 00:34:54.756 { 00:34:54.756 "name": "pt1", 00:34:54.756 "uuid": "b1a901da-06aa-548f-a08f-a899a16b6963", 00:34:54.756 "is_configured": true, 00:34:54.756 "data_offset": 256, 00:34:54.756 "data_size": 7936 00:34:54.756 }, 00:34:54.756 { 00:34:54.756 "name": "pt2", 00:34:54.756 "uuid": "69abd40c-8498-5ccd-9ca6-e99a52cbb760", 00:34:54.756 "is_configured": true, 00:34:54.756 "data_offset": 256, 00:34:54.756 "data_size": 7936 00:34:54.756 } 00:34:54.756 ] 00:34:54.756 }' 00:34:54.756 07:45:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:54.756 07:45:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:34:55.325 07:45:29 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:34:55.325 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:34:55.325 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:55.325 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:55.325 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:55.325 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:34:55.325 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:55.325 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:55.585 [2024-07-12 07:45:29.386717] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:55.585 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:55.585 "name": "raid_bdev1", 00:34:55.585 "aliases": [ 00:34:55.585 "1e216093-d22e-4001-b868-d92e01870312" 00:34:55.585 ], 00:34:55.585 "product_name": "Raid Volume", 00:34:55.585 "block_size": 4096, 00:34:55.585 "num_blocks": 7936, 00:34:55.585 "uuid": "1e216093-d22e-4001-b868-d92e01870312", 00:34:55.585 "assigned_rate_limits": { 00:34:55.585 "rw_ios_per_sec": 0, 00:34:55.585 "rw_mbytes_per_sec": 0, 00:34:55.585 "r_mbytes_per_sec": 0, 00:34:55.585 "w_mbytes_per_sec": 0 00:34:55.585 }, 00:34:55.585 "claimed": false, 00:34:55.585 "zoned": false, 00:34:55.585 "supported_io_types": { 00:34:55.585 "read": true, 00:34:55.585 "write": true, 00:34:55.585 "unmap": false, 00:34:55.585 "write_zeroes": true, 00:34:55.585 "flush": false, 00:34:55.585 "reset": true, 00:34:55.585 "compare": false, 00:34:55.585 "compare_and_write": false, 00:34:55.585 "abort": false, 00:34:55.585 "nvme_admin": false, 00:34:55.585 "nvme_io": false 00:34:55.585 }, 00:34:55.585 "memory_domains": [ 00:34:55.585 { 00:34:55.585 "dma_device_id": "system", 00:34:55.585 "dma_device_type": 1 00:34:55.585 }, 00:34:55.585 { 00:34:55.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:55.585 "dma_device_type": 2 00:34:55.585 }, 00:34:55.585 { 00:34:55.585 "dma_device_id": "system", 00:34:55.585 "dma_device_type": 1 00:34:55.585 }, 00:34:55.585 { 00:34:55.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:55.585 "dma_device_type": 2 00:34:55.585 } 00:34:55.585 ], 00:34:55.585 "driver_specific": { 00:34:55.585 "raid": { 00:34:55.585 "uuid": "1e216093-d22e-4001-b868-d92e01870312", 00:34:55.585 "strip_size_kb": 0, 00:34:55.585 "state": "online", 00:34:55.585 "raid_level": "raid1", 00:34:55.585 "superblock": true, 00:34:55.585 "num_base_bdevs": 2, 00:34:55.585 "num_base_bdevs_discovered": 2, 00:34:55.585 "num_base_bdevs_operational": 2, 00:34:55.585 "base_bdevs_list": [ 00:34:55.585 { 00:34:55.585 "name": "pt1", 00:34:55.585 "uuid": "b1a901da-06aa-548f-a08f-a899a16b6963", 00:34:55.585 "is_configured": true, 00:34:55.585 "data_offset": 256, 00:34:55.585 "data_size": 7936 00:34:55.585 }, 00:34:55.585 { 00:34:55.585 "name": "pt2", 00:34:55.585 "uuid": "69abd40c-8498-5ccd-9ca6-e99a52cbb760", 00:34:55.585 "is_configured": true, 00:34:55.585 "data_offset": 256, 00:34:55.585 "data_size": 7936 00:34:55.585 } 00:34:55.585 ] 00:34:55.585 } 00:34:55.585 } 00:34:55.585 }' 00:34:55.585 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # 
jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:55.585 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:34:55.585 pt2' 00:34:55.585 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:55.585 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:55.585 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:34:55.845 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:55.845 "name": "pt1", 00:34:55.845 "aliases": [ 00:34:55.845 "b1a901da-06aa-548f-a08f-a899a16b6963" 00:34:55.845 ], 00:34:55.845 "product_name": "passthru", 00:34:55.845 "block_size": 4096, 00:34:55.845 "num_blocks": 8192, 00:34:55.845 "uuid": "b1a901da-06aa-548f-a08f-a899a16b6963", 00:34:55.845 "assigned_rate_limits": { 00:34:55.845 "rw_ios_per_sec": 0, 00:34:55.845 "rw_mbytes_per_sec": 0, 00:34:55.845 "r_mbytes_per_sec": 0, 00:34:55.845 "w_mbytes_per_sec": 0 00:34:55.845 }, 00:34:55.845 "claimed": true, 00:34:55.845 "claim_type": "exclusive_write", 00:34:55.845 "zoned": false, 00:34:55.845 "supported_io_types": { 00:34:55.845 "read": true, 00:34:55.845 "write": true, 00:34:55.845 "unmap": true, 00:34:55.845 "write_zeroes": true, 00:34:55.845 "flush": true, 00:34:55.845 "reset": true, 00:34:55.845 "compare": false, 00:34:55.845 "compare_and_write": false, 00:34:55.845 "abort": true, 00:34:55.845 "nvme_admin": false, 00:34:55.845 "nvme_io": false 00:34:55.845 }, 00:34:55.845 "memory_domains": [ 00:34:55.845 { 00:34:55.845 "dma_device_id": "system", 00:34:55.845 "dma_device_type": 1 00:34:55.845 }, 00:34:55.845 { 00:34:55.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:55.845 "dma_device_type": 2 00:34:55.845 } 00:34:55.845 ], 00:34:55.845 "driver_specific": { 00:34:55.845 "passthru": { 00:34:55.845 "name": "pt1", 00:34:55.845 "base_bdev_name": "malloc1" 00:34:55.845 } 00:34:55.845 } 00:34:55.845 }' 00:34:55.845 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:55.845 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:55.845 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:34:55.845 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:56.105 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:56.105 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:56.105 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:56.105 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:56.105 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:56.105 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:56.105 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:56.105 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:56.105 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:56.105 07:45:29 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:34:56.105 07:45:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:56.364 07:45:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:56.364 "name": "pt2", 00:34:56.364 "aliases": [ 00:34:56.364 "69abd40c-8498-5ccd-9ca6-e99a52cbb760" 00:34:56.364 ], 00:34:56.364 "product_name": "passthru", 00:34:56.364 "block_size": 4096, 00:34:56.364 "num_blocks": 8192, 00:34:56.364 "uuid": "69abd40c-8498-5ccd-9ca6-e99a52cbb760", 00:34:56.364 "assigned_rate_limits": { 00:34:56.364 "rw_ios_per_sec": 0, 00:34:56.364 "rw_mbytes_per_sec": 0, 00:34:56.364 "r_mbytes_per_sec": 0, 00:34:56.364 "w_mbytes_per_sec": 0 00:34:56.364 }, 00:34:56.364 "claimed": true, 00:34:56.364 "claim_type": "exclusive_write", 00:34:56.364 "zoned": false, 00:34:56.364 "supported_io_types": { 00:34:56.364 "read": true, 00:34:56.364 "write": true, 00:34:56.364 "unmap": true, 00:34:56.364 "write_zeroes": true, 00:34:56.364 "flush": true, 00:34:56.364 "reset": true, 00:34:56.364 "compare": false, 00:34:56.364 "compare_and_write": false, 00:34:56.364 "abort": true, 00:34:56.364 "nvme_admin": false, 00:34:56.364 "nvme_io": false 00:34:56.364 }, 00:34:56.364 "memory_domains": [ 00:34:56.364 { 00:34:56.364 "dma_device_id": "system", 00:34:56.364 "dma_device_type": 1 00:34:56.364 }, 00:34:56.364 { 00:34:56.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:56.364 "dma_device_type": 2 00:34:56.364 } 00:34:56.364 ], 00:34:56.364 "driver_specific": { 00:34:56.364 "passthru": { 00:34:56.364 "name": "pt2", 00:34:56.364 "base_bdev_name": "malloc2" 00:34:56.364 } 00:34:56.364 } 00:34:56.364 }' 00:34:56.364 07:45:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:56.364 07:45:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:56.364 07:45:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:34:56.364 07:45:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:56.624 07:45:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:56.624 07:45:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:56.624 07:45:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:56.624 07:45:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:56.624 07:45:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:56.624 07:45:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:56.624 07:45:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:56.624 07:45:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:56.624 07:45:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:56.624 07:45:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:34:56.883 [2024-07-12 07:45:30.746949] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:57.143 07:45:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' 1e216093-d22e-4001-b868-d92e01870312 '!=' 
1e216093-d22e-4001-b868-d92e01870312 ']' 00:34:57.143 07:45:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:34:57.143 07:45:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:34:57.143 07:45:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:34:57.143 07:45:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:34:57.143 [2024-07-12 07:45:30.986884] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:34:57.143 07:45:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:57.143 07:45:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:57.143 07:45:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:57.143 07:45:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:57.143 07:45:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:57.143 07:45:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:57.143 07:45:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:57.143 07:45:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:57.143 07:45:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:57.143 07:45:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:57.143 07:45:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:57.143 07:45:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:57.402 07:45:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:57.402 "name": "raid_bdev1", 00:34:57.402 "uuid": "1e216093-d22e-4001-b868-d92e01870312", 00:34:57.402 "strip_size_kb": 0, 00:34:57.402 "state": "online", 00:34:57.402 "raid_level": "raid1", 00:34:57.402 "superblock": true, 00:34:57.402 "num_base_bdevs": 2, 00:34:57.402 "num_base_bdevs_discovered": 1, 00:34:57.402 "num_base_bdevs_operational": 1, 00:34:57.402 "base_bdevs_list": [ 00:34:57.402 { 00:34:57.402 "name": null, 00:34:57.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:57.402 "is_configured": false, 00:34:57.402 "data_offset": 256, 00:34:57.402 "data_size": 7936 00:34:57.402 }, 00:34:57.402 { 00:34:57.402 "name": "pt2", 00:34:57.402 "uuid": "69abd40c-8498-5ccd-9ca6-e99a52cbb760", 00:34:57.402 "is_configured": true, 00:34:57.402 "data_offset": 256, 00:34:57.402 "data_size": 7936 00:34:57.402 } 00:34:57.402 ] 00:34:57.402 }' 00:34:57.402 07:45:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:57.402 07:45:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:34:57.971 07:45:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:58.229 [2024-07-12 07:45:32.063026] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:58.229 [2024-07-12 
07:45:32.063146] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:58.229 [2024-07-12 07:45:32.063325] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:58.229 [2024-07-12 07:45:32.063426] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:58.229 [2024-07-12 07:45:32.063497] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:34:58.229 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:58.229 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:34:58.488 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:34:58.488 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:34:58.488 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:34:58.488 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:34:58.488 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:58.746 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:34:58.746 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:34:58.746 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:34:58.746 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:34:58.746 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:34:58.746 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:59.004 [2024-07-12 07:45:32.687123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:59.004 [2024-07-12 07:45:32.687643] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:59.004 [2024-07-12 07:45:32.687897] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:34:59.004 [2024-07-12 07:45:32.688130] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:59.004 [2024-07-12 07:45:32.690480] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:59.004 [2024-07-12 07:45:32.690732] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:59.005 [2024-07-12 07:45:32.690976] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:59.005 [2024-07-12 07:45:32.691113] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:59.005 [2024-07-12 07:45:32.691210] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:34:59.005 [2024-07-12 07:45:32.691432] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:59.005 [2024-07-12 07:45:32.691521] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:34:59.005 [2024-07-12 07:45:32.691856] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:34:59.005 [2024-07-12 07:45:32.691962] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:34:59.005 [2024-07-12 07:45:32.692147] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:59.005 pt2 00:34:59.005 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:59.005 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:59.005 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:59.005 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:59.005 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:59.005 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:59.005 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:59.005 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:59.005 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:59.005 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:59.005 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:59.005 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:59.264 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:59.264 "name": "raid_bdev1", 00:34:59.264 "uuid": "1e216093-d22e-4001-b868-d92e01870312", 00:34:59.264 "strip_size_kb": 0, 00:34:59.264 "state": "online", 00:34:59.264 "raid_level": "raid1", 00:34:59.264 "superblock": true, 00:34:59.264 "num_base_bdevs": 2, 00:34:59.264 "num_base_bdevs_discovered": 1, 00:34:59.264 "num_base_bdevs_operational": 1, 00:34:59.264 "base_bdevs_list": [ 00:34:59.264 { 00:34:59.264 "name": null, 00:34:59.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:59.264 "is_configured": false, 00:34:59.264 "data_offset": 256, 00:34:59.264 "data_size": 7936 00:34:59.264 }, 00:34:59.264 { 00:34:59.264 "name": "pt2", 00:34:59.264 "uuid": "69abd40c-8498-5ccd-9ca6-e99a52cbb760", 00:34:59.264 "is_configured": true, 00:34:59.264 "data_offset": 256, 00:34:59.264 "data_size": 7936 00:34:59.264 } 00:34:59.264 ] 00:34:59.264 }' 00:34:59.264 07:45:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:59.264 07:45:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:34:59.831 07:45:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:00.090 [2024-07-12 07:45:33.727455] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:00.090 [2024-07-12 07:45:33.727596] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:00.090 [2024-07-12 07:45:33.727738] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:00.090 [2024-07-12 
07:45:33.727795] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:00.090 [2024-07-12 07:45:33.727823] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:35:00.090 07:45:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:35:00.090 07:45:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:00.349 07:45:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:35:00.349 07:45:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:35:00.349 07:45:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:35:00.349 07:45:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:00.349 [2024-07-12 07:45:34.163522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:00.349 [2024-07-12 07:45:34.163710] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:00.349 [2024-07-12 07:45:34.163773] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:35:00.349 [2024-07-12 07:45:34.163856] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:00.349 [2024-07-12 07:45:34.166060] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:00.349 [2024-07-12 07:45:34.166211] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:00.349 [2024-07-12 07:45:34.166381] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:00.349 [2024-07-12 07:45:34.166431] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:00.349 [2024-07-12 07:45:34.166647] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:35:00.349 [2024-07-12 07:45:34.166688] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:00.349 [2024-07-12 07:45:34.166777] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:35:00.349 [2024-07-12 07:45:34.166848] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:00.349 [2024-07-12 07:45:34.166938] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:35:00.349 [2024-07-12 07:45:34.167030] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:00.349 [2024-07-12 07:45:34.167177] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:35:00.349 [2024-07-12 07:45:34.167515] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:35:00.349 [2024-07-12 07:45:34.167608] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:35:00.349 [2024-07-12 07:45:34.167805] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:00.349 pt1 00:35:00.349 07:45:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:35:00.349 
07:45:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:00.349 07:45:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:00.349 07:45:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:00.349 07:45:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:00.349 07:45:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:00.349 07:45:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:00.349 07:45:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:00.349 07:45:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:00.349 07:45:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:00.349 07:45:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:00.350 07:45:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:00.350 07:45:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:00.608 07:45:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:00.608 "name": "raid_bdev1", 00:35:00.608 "uuid": "1e216093-d22e-4001-b868-d92e01870312", 00:35:00.609 "strip_size_kb": 0, 00:35:00.609 "state": "online", 00:35:00.609 "raid_level": "raid1", 00:35:00.609 "superblock": true, 00:35:00.609 "num_base_bdevs": 2, 00:35:00.609 "num_base_bdevs_discovered": 1, 00:35:00.609 "num_base_bdevs_operational": 1, 00:35:00.609 "base_bdevs_list": [ 00:35:00.609 { 00:35:00.609 "name": null, 00:35:00.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:00.609 "is_configured": false, 00:35:00.609 "data_offset": 256, 00:35:00.609 "data_size": 7936 00:35:00.609 }, 00:35:00.609 { 00:35:00.609 "name": "pt2", 00:35:00.609 "uuid": "69abd40c-8498-5ccd-9ca6-e99a52cbb760", 00:35:00.609 "is_configured": true, 00:35:00.609 "data_offset": 256, 00:35:00.609 "data_size": 7936 00:35:00.609 } 00:35:00.609 ] 00:35:00.609 }' 00:35:00.609 07:45:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:00.609 07:45:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:01.177 07:45:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:35:01.177 07:45:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:35:01.436 07:45:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:35:01.436 07:45:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:01.436 07:45:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:35:01.436 [2024-07-12 07:45:35.306027] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:01.696 07:45:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 
1e216093-d22e-4001-b868-d92e01870312 '!=' 1e216093-d22e-4001-b868-d92e01870312 ']' 00:35:01.696 07:45:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 168384 00:35:01.696 07:45:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@946 -- # '[' -z 168384 ']' 00:35:01.696 07:45:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # kill -0 168384 00:35:01.696 07:45:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@951 -- # uname 00:35:01.696 07:45:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:01.696 07:45:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 168384 00:35:01.696 07:45:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:01.696 07:45:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:01.696 07:45:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 168384' 00:35:01.696 killing process with pid 168384 00:35:01.696 07:45:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@965 -- # kill 168384 00:35:01.696 07:45:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # wait 168384 00:35:01.696 [2024-07-12 07:45:35.351554] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:01.696 [2024-07-12 07:45:35.351643] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:01.696 [2024-07-12 07:45:35.351803] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:01.696 [2024-07-12 07:45:35.351922] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:35:01.696 [2024-07-12 07:45:35.393935] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:01.955 07:45:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:35:01.955 00:35:01.955 real 0m14.511s 00:35:01.955 user 0m26.097s 00:35:01.955 sys 0m2.545s 00:35:01.955 07:45:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:01.955 07:45:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:01.955 ************************************ 00:35:01.955 END TEST raid_superblock_test_4k 00:35:01.955 ************************************ 00:35:02.214 07:45:35 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' true = true ']' 00:35:02.214 07:45:35 bdev_raid -- bdev/bdev_raid.sh@901 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:35:02.214 07:45:35 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:35:02.214 07:45:35 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:02.214 07:45:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:02.214 ************************************ 00:35:02.214 START TEST raid_rebuild_test_sb_4k 00:35:02.214 ************************************ 00:35:02.214 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false true 00:35:02.214 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:35:02.214 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:35:02.214 07:45:35 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:35:02.214 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:35:02.214 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local verify=true 00:35:02.214 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:35:02.214 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:02.214 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:35:02.214 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:02.214 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:02.214 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:35:02.214 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:02.214 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:02.214 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:35:02.214 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:35:02.214 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:35:02.215 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local strip_size 00:35:02.215 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local create_arg 00:35:02.215 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:35:02.215 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local data_offset 00:35:02.215 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:35:02.215 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:35:02.215 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:35:02.215 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:35:02.215 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # raid_pid=168894 00:35:02.215 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # waitforlisten 168894 /var/tmp/spdk-raid.sock 00:35:02.215 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:35:02.215 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@827 -- # '[' -z 168894 ']' 00:35:02.215 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:02.215 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:02.215 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:02.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
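The trace above shows the harness launching bdevperf against a private RPC socket (-r) in wait-for-start mode (-z), then parking in waitforlisten until the target answers on that socket. A minimal sketch of the same launch-and-wait pattern; the spdk_get_version readiness probe and the 0.1s poll interval are assumptions here, the real waitforlisten helper in autotest_common.sh may probe differently:

  # launch bdevperf on a dedicated socket, flags exactly as traced in this log
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # block until the app services RPCs before the test issues real commands
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      spdk_get_version >/dev/null 2>&1; do
    sleep 0.1
  done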
00:35:02.215 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:02.215 07:45:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:02.215 [2024-07-12 07:45:35.954940] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:02.215 [2024-07-12 07:45:35.955383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168894 ] 00:35:02.215 I/O size of 3145728 is greater than zero copy threshold (65536). 00:35:02.215 Zero copy mechanism will not be used. 00:35:02.474 [2024-07-12 07:45:36.110023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.474 [2024-07-12 07:45:36.150948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.474 [2024-07-12 07:45:36.192118] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:03.043 07:45:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:03.043 07:45:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # return 0 00:35:03.043 07:45:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:03.043 07:45:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:35:03.302 BaseBdev1_malloc 00:35:03.302 07:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:03.559 [2024-07-12 07:45:37.217217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:03.559 [2024-07-12 07:45:37.217513] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:03.559 [2024-07-12 07:45:37.217584] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:35:03.559 [2024-07-12 07:45:37.217721] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:03.559 [2024-07-12 07:45:37.220252] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:03.559 [2024-07-12 07:45:37.220434] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:03.559 BaseBdev1 00:35:03.559 07:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:03.559 07:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:35:03.817 BaseBdev2_malloc 00:35:03.817 07:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:35:03.817 [2024-07-12 07:45:37.677819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:03.817 [2024-07-12 07:45:37.678013] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:03.817 [2024-07-12 07:45:37.678077] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 
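Both 4k base devices are built the same way in the trace above: a 32 MiB malloc bdev with a 4096-byte block size, wrapped in a passthru bdev so it has a claimable named identity (the vbdev_passthru NOTICE lines record each registration). Condensed from the traced RPC calls; the rpc() one-liner is shorthand introduced here, not a helper from the test script:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  # 32 MiB backing store with 4 KiB blocks, then the passthru wrapper on top
  rpc bdev_malloc_create 32 4096 -b BaseBdev1_malloc
  rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
  rpc bdev_malloc_create 32 4096 -b BaseBdev2_malloc
  rpc bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2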
00:35:03.817 [2024-07-12 07:45:37.678213] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:03.817 [2024-07-12 07:45:37.680399] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:03.817 [2024-07-12 07:45:37.680559] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:03.817 BaseBdev2 00:35:03.817 07:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:35:04.075 spare_malloc 00:35:04.333 07:45:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:35:04.333 spare_delay 00:35:04.333 07:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:04.593 [2024-07-12 07:45:38.293672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:04.593 [2024-07-12 07:45:38.293861] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:04.593 [2024-07-12 07:45:38.293924] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:35:04.593 [2024-07-12 07:45:38.294053] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:04.593 [2024-07-12 07:45:38.296363] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:04.593 [2024-07-12 07:45:38.296536] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:04.593 spare 00:35:04.593 07:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:35:04.593 [2024-07-12 07:45:38.465752] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:04.593 [2024-07-12 07:45:38.467793] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:04.593 [2024-07-12 07:45:38.468088] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:35:04.593 [2024-07-12 07:45:38.468183] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:04.593 [2024-07-12 07:45:38.468353] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:35:04.593 [2024-07-12 07:45:38.468809] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:35:04.593 [2024-07-12 07:45:38.468909] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:35:04.593 [2024-07-12 07:45:38.469130] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:04.852 07:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:04.852 07:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:04.852 07:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:04.853 07:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
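With the spare stack (spare_malloc wrapped by spare_delay, then the spare passthru) and both base bdevs registered, the mirror is assembled with an on-disk superblock and verify_raid_bdev_state begins, which is what the run of local declarations that follows belongs to. A condensed sketch of the create-and-verify step; pulling only .state through jq is a simplification assumed here, the verify helper actually checks several fields from the same JSON:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  # -s writes a superblock onto the base bdevs; raid1 mirrors the two members
  rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
  # expect "online" with 2 of 2 base bdevs discovered and operational
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'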
00:35:04.853 07:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:04.853 07:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:04.853 07:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:04.853 07:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:04.853 07:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:04.853 07:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:04.853 07:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:04.853 07:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:05.112 07:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:05.112 "name": "raid_bdev1", 00:35:05.112 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:05.112 "strip_size_kb": 0, 00:35:05.112 "state": "online", 00:35:05.112 "raid_level": "raid1", 00:35:05.112 "superblock": true, 00:35:05.112 "num_base_bdevs": 2, 00:35:05.112 "num_base_bdevs_discovered": 2, 00:35:05.112 "num_base_bdevs_operational": 2, 00:35:05.112 "base_bdevs_list": [ 00:35:05.112 { 00:35:05.112 "name": "BaseBdev1", 00:35:05.112 "uuid": "05a32771-359c-5314-a5c7-1355cfc52f38", 00:35:05.112 "is_configured": true, 00:35:05.112 "data_offset": 256, 00:35:05.112 "data_size": 7936 00:35:05.112 }, 00:35:05.112 { 00:35:05.112 "name": "BaseBdev2", 00:35:05.112 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:05.112 "is_configured": true, 00:35:05.112 "data_offset": 256, 00:35:05.112 "data_size": 7936 00:35:05.112 } 00:35:05.112 ] 00:35:05.112 }' 00:35:05.112 07:45:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:05.112 07:45:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:05.680 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:05.680 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:35:05.680 [2024-07-12 07:45:39.462055] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:05.680 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:35:05.680 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:35:05.680 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:05.939 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:35:05.939 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:35:05.939 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:35:05.939 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:35:05.939 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 
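nbd_start_disks exports raid_bdev1 through the kernel NBD driver so ordinary block tools can drive I/O against it. The sizes line up: the volume reports num_blocks 7936 at block_size 4096, and 7936 * 4096 = 32,505,856 bytes, exactly 31 MiB, which matches the byte count the dd transfer below reports. A minimal sketch of the export step, assuming the nbd kernel module is already loaded:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  # map the raid bdev to /dev/nbd0 over the target's RPC socket
  rpc nbd_start_disk raid_bdev1 /dev/nbd0
  # the helper then polls for the device before use, as the traced grep shows
  grep -q -w nbd0 /proc/partitions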
00:35:05.939 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:05.939 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:35:05.939 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:05.939 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:35:05.939 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:05.939 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:35:05.939 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:05.939 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:05.939 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:35:06.199 [2024-07-12 07:45:39.885994] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:35:06.199 /dev/nbd0 00:35:06.199 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:06.199 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:06.199 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:35:06.199 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@865 -- # local i 00:35:06.199 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:35:06.199 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:35:06.199 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:35:06.199 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # break 00:35:06.199 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:35:06.199 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:35:06.199 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:06.199 1+0 records in 00:35:06.199 1+0 records out 00:35:06.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387022 s, 10.6 MB/s 00:35:06.199 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:06.199 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # size=4096 00:35:06.199 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:06.199 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:35:06.199 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # return 0 00:35:06.199 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:06.199 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:06.199 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:35:06.199 07:45:39 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:35:06.199 07:45:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:35:06.767 7936+0 records in 00:35:06.767 7936+0 records out 00:35:06.767 32505856 bytes (33 MB, 31 MiB) copied, 0.668147 s, 48.7 MB/s 00:35:06.767 07:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:35:06.767 07:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:06.767 07:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:06.767 07:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:06.767 07:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:35:06.767 07:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:06.767 07:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:35:07.026 [2024-07-12 07:45:40.803515] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:07.026 07:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:07.026 07:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:07.026 07:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:07.026 07:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:07.026 07:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:07.026 07:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:07.026 07:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:35:07.026 07:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:35:07.026 07:45:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:35:07.286 [2024-07-12 07:45:41.039074] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:07.286 07:45:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:07.286 07:45:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:07.286 07:45:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:07.286 07:45:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:07.286 07:45:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:07.286 07:45:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:07.286 07:45:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:07.286 07:45:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:07.286 07:45:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:07.286 07:45:41 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@124 -- # local tmp 00:35:07.286 07:45:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:07.286 07:45:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:07.546 07:45:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:07.546 "name": "raid_bdev1", 00:35:07.546 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:07.546 "strip_size_kb": 0, 00:35:07.546 "state": "online", 00:35:07.546 "raid_level": "raid1", 00:35:07.546 "superblock": true, 00:35:07.546 "num_base_bdevs": 2, 00:35:07.546 "num_base_bdevs_discovered": 1, 00:35:07.546 "num_base_bdevs_operational": 1, 00:35:07.546 "base_bdevs_list": [ 00:35:07.546 { 00:35:07.546 "name": null, 00:35:07.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:07.546 "is_configured": false, 00:35:07.546 "data_offset": 256, 00:35:07.546 "data_size": 7936 00:35:07.546 }, 00:35:07.546 { 00:35:07.546 "name": "BaseBdev2", 00:35:07.546 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:07.546 "is_configured": true, 00:35:07.546 "data_offset": 256, 00:35:07.546 "data_size": 7936 00:35:07.546 } 00:35:07.546 ] 00:35:07.546 }' 00:35:07.546 07:45:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:07.546 07:45:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:08.114 07:45:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:08.114 [2024-07-12 07:45:41.987231] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:08.114 [2024-07-12 07:45:41.991327] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c7c0 00:35:08.114 [2024-07-12 07:45:41.993484] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:08.373 07:45:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # sleep 1 00:35:09.309 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:09.309 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:09.309 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:09.309 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:09.309 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:09.309 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:09.309 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:09.567 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:09.567 "name": "raid_bdev1", 00:35:09.567 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:09.567 "strip_size_kb": 0, 00:35:09.567 "state": "online", 00:35:09.567 "raid_level": "raid1", 00:35:09.567 "superblock": true, 00:35:09.567 "num_base_bdevs": 2, 00:35:09.567 "num_base_bdevs_discovered": 2, 00:35:09.567 "num_base_bdevs_operational": 2, 
00:35:09.567 "process": { 00:35:09.567 "type": "rebuild", 00:35:09.567 "target": "spare", 00:35:09.567 "progress": { 00:35:09.567 "blocks": 3072, 00:35:09.567 "percent": 38 00:35:09.567 } 00:35:09.567 }, 00:35:09.567 "base_bdevs_list": [ 00:35:09.567 { 00:35:09.567 "name": "spare", 00:35:09.567 "uuid": "73fc4b2a-1170-5762-bf1d-8241e52d0be5", 00:35:09.567 "is_configured": true, 00:35:09.567 "data_offset": 256, 00:35:09.567 "data_size": 7936 00:35:09.567 }, 00:35:09.567 { 00:35:09.567 "name": "BaseBdev2", 00:35:09.567 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:09.567 "is_configured": true, 00:35:09.567 "data_offset": 256, 00:35:09.567 "data_size": 7936 00:35:09.567 } 00:35:09.567 ] 00:35:09.567 }' 00:35:09.567 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:09.567 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:09.567 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:09.567 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:09.567 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:35:09.825 [2024-07-12 07:45:43.496870] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:09.825 [2024-07-12 07:45:43.501860] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:09.825 [2024-07-12 07:45:43.502033] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:09.825 [2024-07-12 07:45:43.502078] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:09.825 [2024-07-12 07:45:43.502154] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:09.825 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:09.825 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:09.825 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:09.825 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:09.825 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:09.825 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:09.825 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:09.825 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:09.825 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:09.825 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:09.825 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:09.825 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:10.084 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:35:10.084 "name": "raid_bdev1", 00:35:10.084 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:10.084 "strip_size_kb": 0, 00:35:10.084 "state": "online", 00:35:10.084 "raid_level": "raid1", 00:35:10.084 "superblock": true, 00:35:10.084 "num_base_bdevs": 2, 00:35:10.084 "num_base_bdevs_discovered": 1, 00:35:10.084 "num_base_bdevs_operational": 1, 00:35:10.084 "base_bdevs_list": [ 00:35:10.084 { 00:35:10.084 "name": null, 00:35:10.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:10.084 "is_configured": false, 00:35:10.084 "data_offset": 256, 00:35:10.084 "data_size": 7936 00:35:10.084 }, 00:35:10.084 { 00:35:10.084 "name": "BaseBdev2", 00:35:10.084 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:10.084 "is_configured": true, 00:35:10.084 "data_offset": 256, 00:35:10.084 "data_size": 7936 00:35:10.084 } 00:35:10.084 ] 00:35:10.084 }' 00:35:10.084 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:10.084 07:45:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:10.654 07:45:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:10.654 07:45:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:10.654 07:45:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:10.654 07:45:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:10.654 07:45:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:10.654 07:45:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:10.654 07:45:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:10.913 07:45:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:10.913 "name": "raid_bdev1", 00:35:10.913 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:10.913 "strip_size_kb": 0, 00:35:10.913 "state": "online", 00:35:10.913 "raid_level": "raid1", 00:35:10.913 "superblock": true, 00:35:10.913 "num_base_bdevs": 2, 00:35:10.913 "num_base_bdevs_discovered": 1, 00:35:10.913 "num_base_bdevs_operational": 1, 00:35:10.913 "base_bdevs_list": [ 00:35:10.913 { 00:35:10.913 "name": null, 00:35:10.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:10.913 "is_configured": false, 00:35:10.913 "data_offset": 256, 00:35:10.913 "data_size": 7936 00:35:10.913 }, 00:35:10.913 { 00:35:10.913 "name": "BaseBdev2", 00:35:10.913 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:10.913 "is_configured": true, 00:35:10.913 "data_offset": 256, 00:35:10.913 "data_size": 7936 00:35:10.913 } 00:35:10.913 ] 00:35:10.913 }' 00:35:10.913 07:45:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:10.913 07:45:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:10.913 07:45:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:10.913 07:45:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:10.913 07:45:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev 
raid_bdev1 spare 00:35:11.186 [2024-07-12 07:45:44.905867] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:11.187 [2024-07-12 07:45:44.908267] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:35:11.187 [2024-07-12 07:45:44.910340] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:11.187 07:45:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:35:12.162 07:45:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:12.162 07:45:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:12.162 07:45:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:12.162 07:45:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:12.162 07:45:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:12.162 07:45:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:12.162 07:45:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:12.421 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:12.421 "name": "raid_bdev1", 00:35:12.421 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:12.421 "strip_size_kb": 0, 00:35:12.421 "state": "online", 00:35:12.421 "raid_level": "raid1", 00:35:12.421 "superblock": true, 00:35:12.421 "num_base_bdevs": 2, 00:35:12.421 "num_base_bdevs_discovered": 2, 00:35:12.421 "num_base_bdevs_operational": 2, 00:35:12.421 "process": { 00:35:12.421 "type": "rebuild", 00:35:12.421 "target": "spare", 00:35:12.421 "progress": { 00:35:12.421 "blocks": 3072, 00:35:12.421 "percent": 38 00:35:12.421 } 00:35:12.421 }, 00:35:12.421 "base_bdevs_list": [ 00:35:12.421 { 00:35:12.421 "name": "spare", 00:35:12.421 "uuid": "73fc4b2a-1170-5762-bf1d-8241e52d0be5", 00:35:12.421 "is_configured": true, 00:35:12.421 "data_offset": 256, 00:35:12.421 "data_size": 7936 00:35:12.421 }, 00:35:12.421 { 00:35:12.421 "name": "BaseBdev2", 00:35:12.421 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:12.421 "is_configured": true, 00:35:12.421 "data_offset": 256, 00:35:12.421 "data_size": 7936 00:35:12.421 } 00:35:12.421 ] 00:35:12.421 }' 00:35:12.421 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:12.421 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:12.421 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:12.421 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:12.421 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:35:12.422 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:35:12.422 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:35:12.422 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:35:12.422 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 
raid1 = raid1 ']' 00:35:12.422 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:35:12.422 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@705 -- # local timeout=1257 00:35:12.422 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:12.422 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:12.422 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:12.422 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:12.422 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:12.422 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:12.422 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:12.422 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:12.681 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:12.681 "name": "raid_bdev1", 00:35:12.681 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:12.681 "strip_size_kb": 0, 00:35:12.681 "state": "online", 00:35:12.681 "raid_level": "raid1", 00:35:12.681 "superblock": true, 00:35:12.681 "num_base_bdevs": 2, 00:35:12.681 "num_base_bdevs_discovered": 2, 00:35:12.681 "num_base_bdevs_operational": 2, 00:35:12.681 "process": { 00:35:12.681 "type": "rebuild", 00:35:12.681 "target": "spare", 00:35:12.681 "progress": { 00:35:12.681 "blocks": 3840, 00:35:12.681 "percent": 48 00:35:12.681 } 00:35:12.681 }, 00:35:12.682 "base_bdevs_list": [ 00:35:12.682 { 00:35:12.682 "name": "spare", 00:35:12.682 "uuid": "73fc4b2a-1170-5762-bf1d-8241e52d0be5", 00:35:12.682 "is_configured": true, 00:35:12.682 "data_offset": 256, 00:35:12.682 "data_size": 7936 00:35:12.682 }, 00:35:12.682 { 00:35:12.682 "name": "BaseBdev2", 00:35:12.682 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:12.682 "is_configured": true, 00:35:12.682 "data_offset": 256, 00:35:12.682 "data_size": 7936 00:35:12.682 } 00:35:12.682 ] 00:35:12.682 }' 00:35:12.682 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:12.682 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:12.682 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:12.682 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:12.682 07:45:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:14.062 07:45:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:14.062 07:45:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:14.062 07:45:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:14.062 07:45:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:14.062 07:45:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 
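[Note on the "bdev_raid.sh: line 665: [: =: unary operator expected" message above: the test expands an empty variable inside a single-bracket test, so after expansion `[` sees `[ = false ]` with nothing to the left of `=`. The failing test simply evaluates as false here, so the run proceeds to the rebuild checks. A minimal sketch of the failure mode and the usual fixes — `flag` is a hypothetical stand-in, not the SPDK script's actual variable:]

    flag=""                          # empty, as in the failing run above
    # [ $flag = false ]              # unquoted: expands to `[ = false ]` and errors
    if [ "$flag" = false ]; then     # quoted: the empty string compares safely
        echo "flag is false"
    fi
    if [[ $flag == false ]]; then    # [[ ]] does not word-split, so no quoting needed
        echo "flag is false"
    fi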
00:35:14.062 07:45:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:14.062 07:45:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:14.062 07:45:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:14.062 07:45:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:14.062 "name": "raid_bdev1", 00:35:14.062 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:14.062 "strip_size_kb": 0, 00:35:14.062 "state": "online", 00:35:14.062 "raid_level": "raid1", 00:35:14.062 "superblock": true, 00:35:14.062 "num_base_bdevs": 2, 00:35:14.062 "num_base_bdevs_discovered": 2, 00:35:14.062 "num_base_bdevs_operational": 2, 00:35:14.062 "process": { 00:35:14.062 "type": "rebuild", 00:35:14.062 "target": "spare", 00:35:14.062 "progress": { 00:35:14.062 "blocks": 7168, 00:35:14.062 "percent": 90 00:35:14.062 } 00:35:14.062 }, 00:35:14.062 "base_bdevs_list": [ 00:35:14.062 { 00:35:14.062 "name": "spare", 00:35:14.062 "uuid": "73fc4b2a-1170-5762-bf1d-8241e52d0be5", 00:35:14.062 "is_configured": true, 00:35:14.062 "data_offset": 256, 00:35:14.062 "data_size": 7936 00:35:14.062 }, 00:35:14.062 { 00:35:14.062 "name": "BaseBdev2", 00:35:14.062 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:14.062 "is_configured": true, 00:35:14.062 "data_offset": 256, 00:35:14.062 "data_size": 7936 00:35:14.062 } 00:35:14.062 ] 00:35:14.062 }' 00:35:14.062 07:45:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:14.062 07:45:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:14.062 07:45:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:14.062 07:45:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:14.062 07:45:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:14.322 [2024-07-12 07:45:48.025349] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:35:14.322 [2024-07-12 07:45:48.025586] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:35:14.322 [2024-07-12 07:45:48.025781] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:15.260 07:45:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:15.260 07:45:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:15.260 07:45:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:15.260 07:45:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:15.260 07:45:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:15.260 07:45:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:15.260 07:45:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:15.260 07:45:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:15.520 
07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:15.520 "name": "raid_bdev1", 00:35:15.520 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:15.520 "strip_size_kb": 0, 00:35:15.520 "state": "online", 00:35:15.520 "raid_level": "raid1", 00:35:15.520 "superblock": true, 00:35:15.520 "num_base_bdevs": 2, 00:35:15.520 "num_base_bdevs_discovered": 2, 00:35:15.520 "num_base_bdevs_operational": 2, 00:35:15.520 "base_bdevs_list": [ 00:35:15.520 { 00:35:15.520 "name": "spare", 00:35:15.520 "uuid": "73fc4b2a-1170-5762-bf1d-8241e52d0be5", 00:35:15.520 "is_configured": true, 00:35:15.520 "data_offset": 256, 00:35:15.520 "data_size": 7936 00:35:15.520 }, 00:35:15.520 { 00:35:15.520 "name": "BaseBdev2", 00:35:15.520 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:15.520 "is_configured": true, 00:35:15.520 "data_offset": 256, 00:35:15.520 "data_size": 7936 00:35:15.520 } 00:35:15.520 ] 00:35:15.520 }' 00:35:15.520 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:15.520 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:35:15.520 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:15.520 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:35:15.520 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # break 00:35:15.520 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:15.520 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:15.520 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:15.520 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:15.520 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:15.520 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:15.520 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:15.780 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:15.780 "name": "raid_bdev1", 00:35:15.780 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:15.780 "strip_size_kb": 0, 00:35:15.780 "state": "online", 00:35:15.780 "raid_level": "raid1", 00:35:15.780 "superblock": true, 00:35:15.780 "num_base_bdevs": 2, 00:35:15.780 "num_base_bdevs_discovered": 2, 00:35:15.780 "num_base_bdevs_operational": 2, 00:35:15.780 "base_bdevs_list": [ 00:35:15.780 { 00:35:15.780 "name": "spare", 00:35:15.780 "uuid": "73fc4b2a-1170-5762-bf1d-8241e52d0be5", 00:35:15.780 "is_configured": true, 00:35:15.780 "data_offset": 256, 00:35:15.780 "data_size": 7936 00:35:15.780 }, 00:35:15.780 { 00:35:15.780 "name": "BaseBdev2", 00:35:15.780 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:15.780 "is_configured": true, 00:35:15.780 "data_offset": 256, 00:35:15.780 "data_size": 7936 00:35:15.780 } 00:35:15.780 ] 00:35:15.780 }' 00:35:15.780 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:15.780 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:15.780 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:15.780 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:15.780 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:15.780 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:15.780 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:15.780 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:15.780 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:15.780 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:15.780 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:15.780 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:15.780 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:15.780 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:15.780 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:15.780 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:16.040 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:16.040 "name": "raid_bdev1", 00:35:16.040 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:16.040 "strip_size_kb": 0, 00:35:16.040 "state": "online", 00:35:16.040 "raid_level": "raid1", 00:35:16.040 "superblock": true, 00:35:16.040 "num_base_bdevs": 2, 00:35:16.040 "num_base_bdevs_discovered": 2, 00:35:16.040 "num_base_bdevs_operational": 2, 00:35:16.040 "base_bdevs_list": [ 00:35:16.040 { 00:35:16.040 "name": "spare", 00:35:16.040 "uuid": "73fc4b2a-1170-5762-bf1d-8241e52d0be5", 00:35:16.040 "is_configured": true, 00:35:16.040 "data_offset": 256, 00:35:16.040 "data_size": 7936 00:35:16.040 }, 00:35:16.040 { 00:35:16.040 "name": "BaseBdev2", 00:35:16.040 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:16.040 "is_configured": true, 00:35:16.040 "data_offset": 256, 00:35:16.040 "data_size": 7936 00:35:16.040 } 00:35:16.040 ] 00:35:16.040 }' 00:35:16.040 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:16.040 07:45:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:16.609 07:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:16.869 [2024-07-12 07:45:50.714727] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:16.869 [2024-07-12 07:45:50.714855] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:16.869 [2024-07-12 07:45:50.715086] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:16.869 [2024-07-12 07:45:50.715184] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:35:16.869 [2024-07-12 07:45:50.715355] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:35:16.869 07:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:16.869 07:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # jq length 00:35:17.128 07:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:35:17.129 07:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:35:17.129 07:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:35:17.129 07:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:35:17.129 07:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:17.129 07:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:35:17.129 07:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:17.129 07:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:35:17.129 07:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:17.129 07:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:35:17.129 07:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:17.129 07:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:17.129 07:45:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:35:17.388 /dev/nbd0 00:35:17.388 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:17.388 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:17.388 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:35:17.388 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@865 -- # local i 00:35:17.388 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:35:17.388 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:35:17.388 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:35:17.388 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # break 00:35:17.388 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:35:17.388 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:35:17.388 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:17.388 1+0 records in 00:35:17.388 1+0 records out 00:35:17.388 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333066 s, 12.3 MB/s 00:35:17.388 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:17.388 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # size=4096 00:35:17.388 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:17.388 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:35:17.388 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # return 0 00:35:17.388 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:17.388 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:17.388 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:35:17.647 /dev/nbd1 00:35:17.647 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:17.647 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:17.647 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:35:17.647 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@865 -- # local i 00:35:17.647 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:35:17.647 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:35:17.647 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:35:17.647 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # break 00:35:17.647 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:35:17.647 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:35:17.647 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:17.647 1+0 records in 00:35:17.647 1+0 records out 00:35:17.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457658 s, 8.9 MB/s 00:35:17.647 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:17.647 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # size=4096 00:35:17.647 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:17.647 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:35:17.647 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # return 0 00:35:17.647 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:17.647 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:17.647 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:35:17.906 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:35:17.906 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:17.906 07:45:51 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:35:17.906 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:17.906 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:35:17.906 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:17.906 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:35:18.164 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:18.164 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:18.164 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:18.165 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:18.165 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:18.165 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:18.165 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:35:18.165 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:35:18.165 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:18.165 07:45:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:35:18.165 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:18.165 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:18.165 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:18.165 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:18.165 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:18.165 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:18.165 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:35:18.165 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:35:18.165 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:35:18.165 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:35:18.732 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:18.732 [2024-07-12 07:45:52.465786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:18.732 [2024-07-12 07:45:52.466010] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:18.732 [2024-07-12 07:45:52.466077] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:35:18.732 [2024-07-12 07:45:52.466177] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:18.732 [2024-07-12 07:45:52.468505] 
vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:18.732 [2024-07-12 07:45:52.468675] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:18.732 [2024-07-12 07:45:52.468886] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:35:18.732 [2024-07-12 07:45:52.469077] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:18.732 [2024-07-12 07:45:52.469324] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:18.732 spare 00:35:18.732 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:18.732 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:18.732 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:18.732 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:18.732 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:18.732 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:18.732 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:18.732 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:18.732 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:18.732 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:18.732 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:18.732 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:18.732 [2024-07-12 07:45:52.569455] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:35:18.732 [2024-07-12 07:45:52.569556] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:18.732 [2024-07-12 07:45:52.569674] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:35:18.732 [2024-07-12 07:45:52.570087] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:35:18.732 [2024-07-12 07:45:52.570181] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:35:18.732 [2024-07-12 07:45:52.570385] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:18.991 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:18.991 "name": "raid_bdev1", 00:35:18.991 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:18.991 "strip_size_kb": 0, 00:35:18.991 "state": "online", 00:35:18.991 "raid_level": "raid1", 00:35:18.991 "superblock": true, 00:35:18.991 "num_base_bdevs": 2, 00:35:18.991 "num_base_bdevs_discovered": 2, 00:35:18.991 "num_base_bdevs_operational": 2, 00:35:18.991 "base_bdevs_list": [ 00:35:18.991 { 00:35:18.991 "name": "spare", 00:35:18.991 "uuid": "73fc4b2a-1170-5762-bf1d-8241e52d0be5", 00:35:18.991 "is_configured": true, 00:35:18.991 "data_offset": 256, 00:35:18.991 "data_size": 7936 00:35:18.991 }, 00:35:18.991 { 
00:35:18.991 "name": "BaseBdev2", 00:35:18.991 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:18.991 "is_configured": true, 00:35:18.991 "data_offset": 256, 00:35:18.991 "data_size": 7936 00:35:18.991 } 00:35:18.991 ] 00:35:18.991 }' 00:35:18.991 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:18.991 07:45:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:19.565 07:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:19.565 07:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:19.565 07:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:19.565 07:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:19.565 07:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:19.565 07:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:19.565 07:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:19.824 07:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:19.824 "name": "raid_bdev1", 00:35:19.824 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:19.824 "strip_size_kb": 0, 00:35:19.824 "state": "online", 00:35:19.824 "raid_level": "raid1", 00:35:19.824 "superblock": true, 00:35:19.824 "num_base_bdevs": 2, 00:35:19.824 "num_base_bdevs_discovered": 2, 00:35:19.824 "num_base_bdevs_operational": 2, 00:35:19.824 "base_bdevs_list": [ 00:35:19.824 { 00:35:19.824 "name": "spare", 00:35:19.824 "uuid": "73fc4b2a-1170-5762-bf1d-8241e52d0be5", 00:35:19.824 "is_configured": true, 00:35:19.824 "data_offset": 256, 00:35:19.824 "data_size": 7936 00:35:19.824 }, 00:35:19.824 { 00:35:19.824 "name": "BaseBdev2", 00:35:19.824 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:19.824 "is_configured": true, 00:35:19.824 "data_offset": 256, 00:35:19.824 "data_size": 7936 00:35:19.824 } 00:35:19.824 ] 00:35:19.824 }' 00:35:19.824 07:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:19.824 07:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:19.824 07:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:19.824 07:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:19.824 07:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:19.824 07:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:35:20.082 07:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:35:20.082 07:45:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:35:20.340 [2024-07-12 07:45:53.993671] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:20.340 07:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:20.340 07:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:20.340 07:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:20.340 07:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:20.340 07:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:20.340 07:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:20.340 07:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:20.341 07:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:20.341 07:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:20.341 07:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:20.341 07:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:20.341 07:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:20.598 07:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:20.598 "name": "raid_bdev1", 00:35:20.598 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:20.598 "strip_size_kb": 0, 00:35:20.598 "state": "online", 00:35:20.598 "raid_level": "raid1", 00:35:20.598 "superblock": true, 00:35:20.598 "num_base_bdevs": 2, 00:35:20.598 "num_base_bdevs_discovered": 1, 00:35:20.598 "num_base_bdevs_operational": 1, 00:35:20.598 "base_bdevs_list": [ 00:35:20.598 { 00:35:20.598 "name": null, 00:35:20.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:20.598 "is_configured": false, 00:35:20.598 "data_offset": 256, 00:35:20.598 "data_size": 7936 00:35:20.598 }, 00:35:20.598 { 00:35:20.598 "name": "BaseBdev2", 00:35:20.598 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:20.598 "is_configured": true, 00:35:20.598 "data_offset": 256, 00:35:20.598 "data_size": 7936 00:35:20.598 } 00:35:20.598 ] 00:35:20.598 }' 00:35:20.598 07:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:20.598 07:45:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:21.165 07:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:21.165 [2024-07-12 07:45:54.905840] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:21.165 [2024-07-12 07:45:54.906061] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:35:21.165 [2024-07-12 07:45:54.906164] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
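[The "Re-adding bdev spare to raid bdev raid_bdev1" notice above is the superblock path: when the previously removed base bdev comes back, examine finds a raid superblock whose seq_number (4) is older than the live raid bdev's (5), re-adds the bdev, and kicks off the rebuild logged next. A condensed sketch of the RPC sequence driving it, using the script and socket paths from this run — shell variable names are illustrative, not the verbatim test:]

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_remove_base_bdev spare           # degrade the array (1 of 2 left)
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare   # re-attach the removed bdev
    # examine_sb: superblock seq_number on spare (4) < raid_bdev1 (5),
    # so spare is re-added and "Started rebuild on raid bdev raid_bdev1" follows
    $rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"'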
00:35:21.165 [2024-07-12 07:45:54.906255] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:21.165 [2024-07-12 07:45:54.910291] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb4f0 00:35:21.165 [2024-07-12 07:45:54.912318] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:21.165 07:45:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # sleep 1 00:35:22.101 07:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:22.101 07:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:22.101 07:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:22.101 07:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:22.101 07:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:22.101 07:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:22.102 07:45:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:22.362 07:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:22.362 "name": "raid_bdev1", 00:35:22.362 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:22.362 "strip_size_kb": 0, 00:35:22.362 "state": "online", 00:35:22.362 "raid_level": "raid1", 00:35:22.362 "superblock": true, 00:35:22.362 "num_base_bdevs": 2, 00:35:22.362 "num_base_bdevs_discovered": 2, 00:35:22.362 "num_base_bdevs_operational": 2, 00:35:22.362 "process": { 00:35:22.362 "type": "rebuild", 00:35:22.362 "target": "spare", 00:35:22.362 "progress": { 00:35:22.362 "blocks": 3072, 00:35:22.362 "percent": 38 00:35:22.362 } 00:35:22.362 }, 00:35:22.362 "base_bdevs_list": [ 00:35:22.362 { 00:35:22.362 "name": "spare", 00:35:22.362 "uuid": "73fc4b2a-1170-5762-bf1d-8241e52d0be5", 00:35:22.362 "is_configured": true, 00:35:22.362 "data_offset": 256, 00:35:22.362 "data_size": 7936 00:35:22.362 }, 00:35:22.362 { 00:35:22.362 "name": "BaseBdev2", 00:35:22.362 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:22.362 "is_configured": true, 00:35:22.362 "data_offset": 256, 00:35:22.362 "data_size": 7936 00:35:22.362 } 00:35:22.362 ] 00:35:22.362 }' 00:35:22.362 07:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:22.362 07:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:22.362 07:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:22.622 07:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:22.622 07:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:35:22.622 [2024-07-12 07:45:56.443614] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:22.882 [2024-07-12 07:45:56.519982] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:22.882 [2024-07-12 07:45:56.520153] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:35:22.882 [2024-07-12 07:45:56.520197] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:22.882 [2024-07-12 07:45:56.520275] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:22.882 07:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:22.882 07:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:22.882 07:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:22.882 07:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:22.882 07:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:22.882 07:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:22.882 07:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:22.882 07:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:22.882 07:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:22.882 07:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:22.882 07:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:22.882 07:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:22.882 07:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:22.882 "name": "raid_bdev1", 00:35:22.882 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:22.882 "strip_size_kb": 0, 00:35:22.882 "state": "online", 00:35:22.882 "raid_level": "raid1", 00:35:22.882 "superblock": true, 00:35:22.882 "num_base_bdevs": 2, 00:35:22.882 "num_base_bdevs_discovered": 1, 00:35:22.882 "num_base_bdevs_operational": 1, 00:35:22.882 "base_bdevs_list": [ 00:35:22.882 { 00:35:22.882 "name": null, 00:35:22.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:22.882 "is_configured": false, 00:35:22.882 "data_offset": 256, 00:35:22.882 "data_size": 7936 00:35:22.882 }, 00:35:22.882 { 00:35:22.882 "name": "BaseBdev2", 00:35:22.882 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:22.882 "is_configured": true, 00:35:22.882 "data_offset": 256, 00:35:22.882 "data_size": 7936 00:35:22.882 } 00:35:22.882 ] 00:35:22.882 }' 00:35:22.882 07:45:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:22.882 07:45:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:23.450 07:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:23.710 [2024-07-12 07:45:57.491834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:23.710 [2024-07-12 07:45:57.492027] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:23.710 [2024-07-12 07:45:57.492087] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:35:23.710 [2024-07-12 07:45:57.492194] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:23.710 [2024-07-12 07:45:57.492623] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:23.710 [2024-07-12 07:45:57.492773] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:23.710 [2024-07-12 07:45:57.492932] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:35:23.710 [2024-07-12 07:45:57.493017] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:35:23.710 [2024-07-12 07:45:57.493086] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:35:23.710 [2024-07-12 07:45:57.493159] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:23.710 [2024-07-12 07:45:57.495422] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb830 00:35:23.710 [2024-07-12 07:45:57.497488] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:23.710 spare 00:35:23.710 07:45:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # sleep 1 00:35:24.647 07:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:24.647 07:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:24.647 07:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:24.647 07:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:24.647 07:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:24.647 07:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:24.647 07:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:24.907 07:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:24.907 "name": "raid_bdev1", 00:35:24.908 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:24.908 "strip_size_kb": 0, 00:35:24.908 "state": "online", 00:35:24.908 "raid_level": "raid1", 00:35:24.908 "superblock": true, 00:35:24.908 "num_base_bdevs": 2, 00:35:24.908 "num_base_bdevs_discovered": 2, 00:35:24.908 "num_base_bdevs_operational": 2, 00:35:24.908 "process": { 00:35:24.908 "type": "rebuild", 00:35:24.908 "target": "spare", 00:35:24.908 "progress": { 00:35:24.908 "blocks": 3072, 00:35:24.908 "percent": 38 00:35:24.908 } 00:35:24.908 }, 00:35:24.908 "base_bdevs_list": [ 00:35:24.908 { 00:35:24.908 "name": "spare", 00:35:24.908 "uuid": "73fc4b2a-1170-5762-bf1d-8241e52d0be5", 00:35:24.908 "is_configured": true, 00:35:24.908 "data_offset": 256, 00:35:24.908 "data_size": 7936 00:35:24.908 }, 00:35:24.908 { 00:35:24.908 "name": "BaseBdev2", 00:35:24.908 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:24.908 "is_configured": true, 00:35:24.908 "data_offset": 256, 00:35:24.908 "data_size": 7936 00:35:24.908 } 00:35:24.908 ] 00:35:24.908 }' 00:35:24.908 07:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:25.167 07:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
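[The repeated `[[ rebuild == \r\e\b\u\i\l\d ]]` lines in this trace are an xtrace artifact: inside `[[ ]]` the right-hand side is a pattern, so bash's trace backslash-escapes each character to show it is matched literally. The underlying check is a small jq/bash idiom; a minimal sketch, assuming the same RPC socket as this run and illustrative variable names:]

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    ptype=$(jq -r '.process.type // "none"' <<< "$info")     # "none" once rebuild ends
    target=$(jq -r '.process.target // "none"' <<< "$info")  # which bdev is rebuilt
    [[ $ptype == rebuild && $target == spare ]] && echo "rebuild targeting spare"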
00:35:25.167 07:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:25.167 07:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:25.167 07:45:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:35:25.426 [2024-07-12 07:45:59.087163] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:25.426 [2024-07-12 07:45:59.105195] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:25.426 [2024-07-12 07:45:59.105381] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:25.426 [2024-07-12 07:45:59.105427] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:25.426 [2024-07-12 07:45:59.105524] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:25.426 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:25.426 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:25.426 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:25.426 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:25.426 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:25.426 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:25.426 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:25.426 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:25.426 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:25.426 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:25.426 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:25.426 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:25.685 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:25.685 "name": "raid_bdev1", 00:35:25.685 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:25.685 "strip_size_kb": 0, 00:35:25.685 "state": "online", 00:35:25.685 "raid_level": "raid1", 00:35:25.685 "superblock": true, 00:35:25.685 "num_base_bdevs": 2, 00:35:25.685 "num_base_bdevs_discovered": 1, 00:35:25.685 "num_base_bdevs_operational": 1, 00:35:25.685 "base_bdevs_list": [ 00:35:25.685 { 00:35:25.685 "name": null, 00:35:25.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:25.685 "is_configured": false, 00:35:25.685 "data_offset": 256, 00:35:25.685 "data_size": 7936 00:35:25.685 }, 00:35:25.685 { 00:35:25.685 "name": "BaseBdev2", 00:35:25.685 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:25.685 "is_configured": true, 00:35:25.685 "data_offset": 256, 00:35:25.685 "data_size": 7936 00:35:25.685 } 00:35:25.685 ] 00:35:25.685 }' 00:35:25.685 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:35:25.685 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:26.253 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:26.254 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:26.254 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:26.254 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:26.254 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:26.254 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:26.254 07:45:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:26.513 07:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:26.513 "name": "raid_bdev1", 00:35:26.513 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:26.513 "strip_size_kb": 0, 00:35:26.513 "state": "online", 00:35:26.513 "raid_level": "raid1", 00:35:26.513 "superblock": true, 00:35:26.513 "num_base_bdevs": 2, 00:35:26.513 "num_base_bdevs_discovered": 1, 00:35:26.513 "num_base_bdevs_operational": 1, 00:35:26.513 "base_bdevs_list": [ 00:35:26.513 { 00:35:26.513 "name": null, 00:35:26.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:26.513 "is_configured": false, 00:35:26.513 "data_offset": 256, 00:35:26.513 "data_size": 7936 00:35:26.513 }, 00:35:26.513 { 00:35:26.513 "name": "BaseBdev2", 00:35:26.513 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:26.513 "is_configured": true, 00:35:26.513 "data_offset": 256, 00:35:26.513 "data_size": 7936 00:35:26.513 } 00:35:26.513 ] 00:35:26.513 }' 00:35:26.513 07:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:26.513 07:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:26.513 07:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:26.513 07:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:26.513 07:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:35:26.772 07:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:27.032 [2024-07-12 07:46:00.692813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:27.032 [2024-07-12 07:46:00.693008] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:27.032 [2024-07-12 07:46:00.693089] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:35:27.032 [2024-07-12 07:46:00.693180] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:27.032 [2024-07-12 07:46:00.693607] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:27.032 [2024-07-12 07:46:00.693745] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:35:27.032 [2024-07-12 07:46:00.693902] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:35:27.032 [2024-07-12 07:46:00.694003] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:35:27.032 [2024-07-12 07:46:00.694071] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:35:27.032 BaseBdev1 00:35:27.032 07:46:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # sleep 1 00:35:27.968 07:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:27.968 07:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:27.968 07:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:27.968 07:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:27.968 07:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:27.968 07:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:27.968 07:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:27.968 07:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:27.968 07:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:27.968 07:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:27.968 07:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:27.968 07:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:28.226 07:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:28.226 "name": "raid_bdev1", 00:35:28.226 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:28.226 "strip_size_kb": 0, 00:35:28.226 "state": "online", 00:35:28.226 "raid_level": "raid1", 00:35:28.226 "superblock": true, 00:35:28.226 "num_base_bdevs": 2, 00:35:28.226 "num_base_bdevs_discovered": 1, 00:35:28.226 "num_base_bdevs_operational": 1, 00:35:28.226 "base_bdevs_list": [ 00:35:28.226 { 00:35:28.226 "name": null, 00:35:28.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:28.226 "is_configured": false, 00:35:28.226 "data_offset": 256, 00:35:28.226 "data_size": 7936 00:35:28.226 }, 00:35:28.226 { 00:35:28.226 "name": "BaseBdev2", 00:35:28.226 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:28.226 "is_configured": true, 00:35:28.226 "data_offset": 256, 00:35:28.226 "data_size": 7936 00:35:28.226 } 00:35:28.226 ] 00:35:28.226 }' 00:35:28.226 07:46:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:28.226 07:46:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:28.793 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:28.793 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:28.793 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:35:28.793 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:28.793 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:28.793 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:28.793 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:29.050 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:29.050 "name": "raid_bdev1", 00:35:29.050 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:29.050 "strip_size_kb": 0, 00:35:29.050 "state": "online", 00:35:29.050 "raid_level": "raid1", 00:35:29.050 "superblock": true, 00:35:29.050 "num_base_bdevs": 2, 00:35:29.050 "num_base_bdevs_discovered": 1, 00:35:29.050 "num_base_bdevs_operational": 1, 00:35:29.050 "base_bdevs_list": [ 00:35:29.050 { 00:35:29.050 "name": null, 00:35:29.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:29.050 "is_configured": false, 00:35:29.050 "data_offset": 256, 00:35:29.050 "data_size": 7936 00:35:29.050 }, 00:35:29.050 { 00:35:29.050 "name": "BaseBdev2", 00:35:29.050 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:29.050 "is_configured": true, 00:35:29.050 "data_offset": 256, 00:35:29.050 "data_size": 7936 00:35:29.050 } 00:35:29.050 ] 00:35:29.050 }' 00:35:29.050 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:29.050 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:29.050 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:29.050 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:29.050 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:29.050 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@648 -- # local es=0 00:35:29.050 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:29.050 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:29.050 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:29.050 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:29.050 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:29.050 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:29.050 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:29.050 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:29.050 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:35:29.050 07:46:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:29.327 [2024-07-12 07:46:03.041201] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:29.327 [2024-07-12 07:46:03.041305] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:35:29.327 [2024-07-12 07:46:03.041316] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:35:29.327 request: 00:35:29.327 { 00:35:29.327 "raid_bdev": "raid_bdev1", 00:35:29.327 "base_bdev": "BaseBdev1", 00:35:29.327 "method": "bdev_raid_add_base_bdev", 00:35:29.327 "req_id": 1 00:35:29.327 } 00:35:29.327 Got JSON-RPC error response 00:35:29.327 response: 00:35:29.327 { 00:35:29.327 "code": -22, 00:35:29.327 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:35:29.327 } 00:35:29.327 07:46:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # es=1 00:35:29.327 07:46:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:29.327 07:46:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:29.327 07:46:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:29.327 07:46:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # sleep 1 00:35:30.264 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:30.264 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:30.264 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:30.264 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:30.264 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:30.264 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:30.264 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:30.264 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:30.264 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:30.264 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:30.264 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:30.264 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:30.523 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:30.523 "name": "raid_bdev1", 00:35:30.523 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:30.523 "strip_size_kb": 0, 00:35:30.523 "state": "online", 00:35:30.523 "raid_level": "raid1", 00:35:30.523 "superblock": true, 00:35:30.523 "num_base_bdevs": 2, 00:35:30.523 "num_base_bdevs_discovered": 1, 00:35:30.523 "num_base_bdevs_operational": 1, 00:35:30.523 
"base_bdevs_list": [ 00:35:30.523 { 00:35:30.523 "name": null, 00:35:30.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:30.523 "is_configured": false, 00:35:30.523 "data_offset": 256, 00:35:30.523 "data_size": 7936 00:35:30.523 }, 00:35:30.523 { 00:35:30.523 "name": "BaseBdev2", 00:35:30.524 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:30.524 "is_configured": true, 00:35:30.524 "data_offset": 256, 00:35:30.524 "data_size": 7936 00:35:30.524 } 00:35:30.524 ] 00:35:30.524 }' 00:35:30.524 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:30.524 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:31.091 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:31.091 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:31.091 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:31.091 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:31.091 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:31.091 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:31.091 07:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:31.349 07:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:31.349 "name": "raid_bdev1", 00:35:31.349 "uuid": "5b99f337-8125-40c9-abd5-b0af0ee74c89", 00:35:31.349 "strip_size_kb": 0, 00:35:31.349 "state": "online", 00:35:31.349 "raid_level": "raid1", 00:35:31.349 "superblock": true, 00:35:31.349 "num_base_bdevs": 2, 00:35:31.349 "num_base_bdevs_discovered": 1, 00:35:31.349 "num_base_bdevs_operational": 1, 00:35:31.349 "base_bdevs_list": [ 00:35:31.349 { 00:35:31.350 "name": null, 00:35:31.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:31.350 "is_configured": false, 00:35:31.350 "data_offset": 256, 00:35:31.350 "data_size": 7936 00:35:31.350 }, 00:35:31.350 { 00:35:31.350 "name": "BaseBdev2", 00:35:31.350 "uuid": "249238f3-a304-5d5f-bb59-8bf73b228767", 00:35:31.350 "is_configured": true, 00:35:31.350 "data_offset": 256, 00:35:31.350 "data_size": 7936 00:35:31.350 } 00:35:31.350 ] 00:35:31.350 }' 00:35:31.350 07:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:31.350 07:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:31.350 07:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:31.350 07:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:31.350 07:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # killprocess 168894 00:35:31.350 07:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@946 -- # '[' -z 168894 ']' 00:35:31.350 07:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # kill -0 168894 00:35:31.350 07:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@951 -- # uname 00:35:31.609 07:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
00:35:31.609 07:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 168894
00:35:31.609 07:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:35:31.609 07:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:35:31.609 07:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 168894'
killing process with pid 168894
07:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@965 -- # kill 168894
Received shutdown signal, test time was about 60.000000 seconds
00:35:31.609 00
00:35:31.609 Latency(us)
00:35:31.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:31.609 ===================================================================================================================
00:35:31.609 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:35:31.609 [2024-07-12 07:46:05.259588] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:35:31.609 [2024-07-12 07:46:05.259672] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:35:31.609 [2024-07-12 07:46:05.259705] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:35:31.609 [2024-07-12 07:46:05.259713] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline
00:35:31.609 07:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # wait 168894
00:35:31.609 [2024-07-12 07:46:05.288305] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:35:31.869 07:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # return 0
00:35:31.869
00:35:31.869 real 0m29.672s
00:35:31.869 user 0m46.239s
00:35:31.869 sys 0m4.520s
00:35:31.869 07:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1122 -- # xtrace_disable
00:35:31.869 07:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:35:31.869 ************************************
00:35:31.869 END TEST raid_rebuild_test_sb_4k
00:35:31.869 ************************************
00:35:31.869 07:46:05 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32'
00:35:31.869 07:46:05 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true
00:35:31.869 07:46:05 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']'
00:35:31.869 07:46:05 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable
00:35:31.869 07:46:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:35:31.869 ************************************
00:35:31.869 START TEST raid_state_function_test_sb_md_separate
00:35:31.869 ************************************
00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true
00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1
00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2
00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true
00:35:31.869 07:46:05
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=169750 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 169750' 00:35:31.869 Process raid pid: 169750 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 169750 /var/tmp/spdk-raid.sock 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@827 -- # '[' -z 169750 ']' 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:31.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
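For readers following the trace, the pattern exercised above is visible verbatim in the log: start bdev_svc on a private socket, then interrogate it over JSON-RPC and filter the result with jq. A minimal sketch of that query pattern, assuming an SPDK app is already listening on /var/tmp/spdk-raid.sock (the rpc/sock variable names below are illustrative, not part of the test scripts):

    # Paths as they appear in this log; rpc.py and the bdev_raid_get_bdevs
    # method ship with the SPDK repo.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Fetch all RAID bdevs, then pick one record out by name with jq,
    # exactly as verify_raid_bdev_state does in the trace above.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

    # Compare a couple of fields against the expected values.
    [[ $(jq -r '.state' <<< "$info") == "online" ]] || echo "unexpected state"
    [[ $(jq -r '.raid_level' <<< "$info") == "raid1" ]] || echo "unexpected raid_level"
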
00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:31.869 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:31.869 [2024-07-12 07:46:05.673778] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:31.869 [2024-07-12 07:46:05.673942] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:32.128 [2024-07-12 07:46:05.814269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.128 [2024-07-12 07:46:05.867064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:32.128 [2024-07-12 07:46:05.914153] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:32.128 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:32.128 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # return 0 00:35:32.128 07:46:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:32.387 [2024-07-12 07:46:06.195320] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:32.387 [2024-07-12 07:46:06.195389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:32.387 [2024-07-12 07:46:06.195399] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:32.387 [2024-07-12 07:46:06.195415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:32.387 07:46:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:32.387 07:46:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:32.387 07:46:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:32.387 07:46:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:32.387 07:46:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:32.387 07:46:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:32.387 07:46:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:32.387 07:46:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:32.387 07:46:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:32.387 07:46:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:32.387 07:46:06 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:32.387 07:46:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:32.646 07:46:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:32.646 "name": "Existed_Raid", 00:35:32.646 "uuid": "4f1b4b0c-2321-4f5e-a768-94613c393d2b", 00:35:32.646 "strip_size_kb": 0, 00:35:32.646 "state": "configuring", 00:35:32.646 "raid_level": "raid1", 00:35:32.646 "superblock": true, 00:35:32.646 "num_base_bdevs": 2, 00:35:32.646 "num_base_bdevs_discovered": 0, 00:35:32.646 "num_base_bdevs_operational": 2, 00:35:32.646 "base_bdevs_list": [ 00:35:32.646 { 00:35:32.646 "name": "BaseBdev1", 00:35:32.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:32.646 "is_configured": false, 00:35:32.646 "data_offset": 0, 00:35:32.646 "data_size": 0 00:35:32.646 }, 00:35:32.646 { 00:35:32.646 "name": "BaseBdev2", 00:35:32.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:32.646 "is_configured": false, 00:35:32.646 "data_offset": 0, 00:35:32.646 "data_size": 0 00:35:32.646 } 00:35:32.646 ] 00:35:32.646 }' 00:35:32.646 07:46:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:32.646 07:46:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:33.214 07:46:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:33.473 [2024-07-12 07:46:07.271332] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:33.473 [2024-07-12 07:46:07.271368] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:35:33.473 07:46:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:33.733 [2024-07-12 07:46:07.519370] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:33.733 [2024-07-12 07:46:07.519424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:33.733 [2024-07-12 07:46:07.519433] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:33.733 [2024-07-12 07:46:07.519467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:33.733 07:46:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:35:33.992 [2024-07-12 07:46:07.800915] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:33.992 BaseBdev1 00:35:33.992 07:46:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:35:33.992 07:46:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:35:33.992 07:46:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:35:33.992 07:46:07 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local i 00:35:33.992 07:46:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:35:33.992 07:46:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:35:33.992 07:46:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:34.251 07:46:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:34.510 [ 00:35:34.510 { 00:35:34.510 "name": "BaseBdev1", 00:35:34.510 "aliases": [ 00:35:34.510 "36fad2bc-00d3-47fa-97f7-26cc7f235a90" 00:35:34.510 ], 00:35:34.510 "product_name": "Malloc disk", 00:35:34.510 "block_size": 4096, 00:35:34.510 "num_blocks": 8192, 00:35:34.510 "uuid": "36fad2bc-00d3-47fa-97f7-26cc7f235a90", 00:35:34.510 "md_size": 32, 00:35:34.510 "md_interleave": false, 00:35:34.510 "dif_type": 0, 00:35:34.510 "assigned_rate_limits": { 00:35:34.510 "rw_ios_per_sec": 0, 00:35:34.510 "rw_mbytes_per_sec": 0, 00:35:34.510 "r_mbytes_per_sec": 0, 00:35:34.510 "w_mbytes_per_sec": 0 00:35:34.510 }, 00:35:34.510 "claimed": true, 00:35:34.510 "claim_type": "exclusive_write", 00:35:34.510 "zoned": false, 00:35:34.510 "supported_io_types": { 00:35:34.510 "read": true, 00:35:34.510 "write": true, 00:35:34.510 "unmap": true, 00:35:34.510 "write_zeroes": true, 00:35:34.510 "flush": true, 00:35:34.510 "reset": true, 00:35:34.510 "compare": false, 00:35:34.510 "compare_and_write": false, 00:35:34.510 "abort": true, 00:35:34.510 "nvme_admin": false, 00:35:34.510 "nvme_io": false 00:35:34.510 }, 00:35:34.510 "memory_domains": [ 00:35:34.510 { 00:35:34.510 "dma_device_id": "system", 00:35:34.510 "dma_device_type": 1 00:35:34.510 }, 00:35:34.511 { 00:35:34.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:34.511 "dma_device_type": 2 00:35:34.511 } 00:35:34.511 ], 00:35:34.511 "driver_specific": {} 00:35:34.511 } 00:35:34.511 ] 00:35:34.511 07:46:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # return 0 00:35:34.511 07:46:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:34.511 07:46:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:34.511 07:46:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:34.511 07:46:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:34.511 07:46:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:34.511 07:46:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:34.511 07:46:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:34.511 07:46:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:34.511 07:46:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:34.511 07:46:08 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:34.511 07:46:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:34.511 07:46:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:34.770 07:46:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:34.770 "name": "Existed_Raid", 00:35:34.770 "uuid": "191b6a2d-59da-475c-a537-e033f2849cf2", 00:35:34.770 "strip_size_kb": 0, 00:35:34.770 "state": "configuring", 00:35:34.770 "raid_level": "raid1", 00:35:34.770 "superblock": true, 00:35:34.770 "num_base_bdevs": 2, 00:35:34.770 "num_base_bdevs_discovered": 1, 00:35:34.770 "num_base_bdevs_operational": 2, 00:35:34.770 "base_bdevs_list": [ 00:35:34.770 { 00:35:34.770 "name": "BaseBdev1", 00:35:34.770 "uuid": "36fad2bc-00d3-47fa-97f7-26cc7f235a90", 00:35:34.770 "is_configured": true, 00:35:34.770 "data_offset": 256, 00:35:34.770 "data_size": 7936 00:35:34.770 }, 00:35:34.770 { 00:35:34.770 "name": "BaseBdev2", 00:35:34.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:34.770 "is_configured": false, 00:35:34.770 "data_offset": 0, 00:35:34.770 "data_size": 0 00:35:34.770 } 00:35:34.770 ] 00:35:34.770 }' 00:35:34.770 07:46:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:34.770 07:46:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:35.338 07:46:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:35.598 [2024-07-12 07:46:09.237167] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:35.598 [2024-07-12 07:46:09.237210] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:35:35.598 07:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:35.598 [2024-07-12 07:46:09.409250] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:35.598 [2024-07-12 07:46:09.411157] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:35.598 [2024-07-12 07:46:09.411204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:35.598 07:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:35:35.598 07:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:35:35.598 07:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:35.598 07:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:35.598 07:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:35.598 07:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:35:35.598 07:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:35.598 07:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:35.598 07:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:35.598 07:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:35.598 07:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:35.598 07:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:35.598 07:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:35.598 07:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:35.858 07:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:35.858 "name": "Existed_Raid", 00:35:35.858 "uuid": "eec3e0b9-2d49-4b06-b1ea-5d4347ae57d0", 00:35:35.859 "strip_size_kb": 0, 00:35:35.859 "state": "configuring", 00:35:35.859 "raid_level": "raid1", 00:35:35.859 "superblock": true, 00:35:35.859 "num_base_bdevs": 2, 00:35:35.859 "num_base_bdevs_discovered": 1, 00:35:35.859 "num_base_bdevs_operational": 2, 00:35:35.859 "base_bdevs_list": [ 00:35:35.859 { 00:35:35.859 "name": "BaseBdev1", 00:35:35.859 "uuid": "36fad2bc-00d3-47fa-97f7-26cc7f235a90", 00:35:35.859 "is_configured": true, 00:35:35.859 "data_offset": 256, 00:35:35.859 "data_size": 7936 00:35:35.859 }, 00:35:35.859 { 00:35:35.859 "name": "BaseBdev2", 00:35:35.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:35.859 "is_configured": false, 00:35:35.859 "data_offset": 0, 00:35:35.859 "data_size": 0 00:35:35.859 } 00:35:35.859 ] 00:35:35.859 }' 00:35:35.859 07:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:35.859 07:46:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:36.426 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:35:36.684 [2024-07-12 07:46:10.413730] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:36.684 [2024-07-12 07:46:10.413880] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:35:36.684 [2024-07-12 07:46:10.413892] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:36.684 [2024-07-12 07:46:10.414043] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:35:36.684 [2024-07-12 07:46:10.414180] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:35:36.684 [2024-07-12 07:46:10.414190] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:35:36.684 [2024-07-12 07:46:10.414287] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:36.684 BaseBdev2 00:35:36.684 07:46:10 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:35:36.684 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:35:36.684 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:35:36.684 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local i 00:35:36.684 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:35:36.684 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:35:36.684 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:36.943 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:36.943 [ 00:35:36.943 { 00:35:36.943 "name": "BaseBdev2", 00:35:36.943 "aliases": [ 00:35:36.943 "7a0e49a8-f0e2-4c75-af54-bec1d9522e1d" 00:35:36.943 ], 00:35:36.943 "product_name": "Malloc disk", 00:35:36.943 "block_size": 4096, 00:35:36.943 "num_blocks": 8192, 00:35:36.943 "uuid": "7a0e49a8-f0e2-4c75-af54-bec1d9522e1d", 00:35:36.943 "md_size": 32, 00:35:36.943 "md_interleave": false, 00:35:36.943 "dif_type": 0, 00:35:36.943 "assigned_rate_limits": { 00:35:36.943 "rw_ios_per_sec": 0, 00:35:36.943 "rw_mbytes_per_sec": 0, 00:35:36.943 "r_mbytes_per_sec": 0, 00:35:36.943 "w_mbytes_per_sec": 0 00:35:36.943 }, 00:35:36.943 "claimed": true, 00:35:36.943 "claim_type": "exclusive_write", 00:35:36.943 "zoned": false, 00:35:36.943 "supported_io_types": { 00:35:36.943 "read": true, 00:35:36.943 "write": true, 00:35:36.943 "unmap": true, 00:35:36.943 "write_zeroes": true, 00:35:36.943 "flush": true, 00:35:36.943 "reset": true, 00:35:36.943 "compare": false, 00:35:36.943 "compare_and_write": false, 00:35:36.943 "abort": true, 00:35:36.943 "nvme_admin": false, 00:35:36.943 "nvme_io": false 00:35:36.943 }, 00:35:36.943 "memory_domains": [ 00:35:36.943 { 00:35:36.943 "dma_device_id": "system", 00:35:36.943 "dma_device_type": 1 00:35:36.943 }, 00:35:36.943 { 00:35:36.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:36.943 "dma_device_type": 2 00:35:36.943 } 00:35:36.943 ], 00:35:36.943 "driver_specific": {} 00:35:36.943 } 00:35:36.943 ] 00:35:36.943 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # return 0 00:35:36.943 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:35:36.943 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:35:36.943 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:35:36.943 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:36.943 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:36.943 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:36.943 07:46:10 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:36.943 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:36.943 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:36.943 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:36.943 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:36.943 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:36.943 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:36.943 07:46:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:37.202 07:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:37.202 "name": "Existed_Raid", 00:35:37.202 "uuid": "eec3e0b9-2d49-4b06-b1ea-5d4347ae57d0", 00:35:37.202 "strip_size_kb": 0, 00:35:37.202 "state": "online", 00:35:37.202 "raid_level": "raid1", 00:35:37.202 "superblock": true, 00:35:37.202 "num_base_bdevs": 2, 00:35:37.202 "num_base_bdevs_discovered": 2, 00:35:37.202 "num_base_bdevs_operational": 2, 00:35:37.202 "base_bdevs_list": [ 00:35:37.202 { 00:35:37.202 "name": "BaseBdev1", 00:35:37.202 "uuid": "36fad2bc-00d3-47fa-97f7-26cc7f235a90", 00:35:37.202 "is_configured": true, 00:35:37.202 "data_offset": 256, 00:35:37.202 "data_size": 7936 00:35:37.202 }, 00:35:37.202 { 00:35:37.202 "name": "BaseBdev2", 00:35:37.202 "uuid": "7a0e49a8-f0e2-4c75-af54-bec1d9522e1d", 00:35:37.202 "is_configured": true, 00:35:37.202 "data_offset": 256, 00:35:37.202 "data_size": 7936 00:35:37.202 } 00:35:37.202 ] 00:35:37.202 }' 00:35:37.202 07:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:37.202 07:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:37.769 07:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:35:37.769 07:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:35:37.769 07:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:37.769 07:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:37.769 07:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:37.769 07:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:35:37.769 07:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:35:37.769 07:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:38.028 [2024-07-12 07:46:11.810163] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:38.028 07:46:11 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:38.028 "name": "Existed_Raid", 00:35:38.028 "aliases": [ 00:35:38.028 "eec3e0b9-2d49-4b06-b1ea-5d4347ae57d0" 00:35:38.028 ], 00:35:38.028 "product_name": "Raid Volume", 00:35:38.028 "block_size": 4096, 00:35:38.028 "num_blocks": 7936, 00:35:38.028 "uuid": "eec3e0b9-2d49-4b06-b1ea-5d4347ae57d0", 00:35:38.028 "md_size": 32, 00:35:38.028 "md_interleave": false, 00:35:38.028 "dif_type": 0, 00:35:38.028 "assigned_rate_limits": { 00:35:38.028 "rw_ios_per_sec": 0, 00:35:38.028 "rw_mbytes_per_sec": 0, 00:35:38.028 "r_mbytes_per_sec": 0, 00:35:38.028 "w_mbytes_per_sec": 0 00:35:38.028 }, 00:35:38.028 "claimed": false, 00:35:38.028 "zoned": false, 00:35:38.028 "supported_io_types": { 00:35:38.028 "read": true, 00:35:38.028 "write": true, 00:35:38.028 "unmap": false, 00:35:38.028 "write_zeroes": true, 00:35:38.028 "flush": false, 00:35:38.028 "reset": true, 00:35:38.028 "compare": false, 00:35:38.028 "compare_and_write": false, 00:35:38.028 "abort": false, 00:35:38.028 "nvme_admin": false, 00:35:38.028 "nvme_io": false 00:35:38.028 }, 00:35:38.028 "memory_domains": [ 00:35:38.028 { 00:35:38.028 "dma_device_id": "system", 00:35:38.028 "dma_device_type": 1 00:35:38.028 }, 00:35:38.028 { 00:35:38.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:38.028 "dma_device_type": 2 00:35:38.028 }, 00:35:38.028 { 00:35:38.028 "dma_device_id": "system", 00:35:38.028 "dma_device_type": 1 00:35:38.028 }, 00:35:38.028 { 00:35:38.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:38.028 "dma_device_type": 2 00:35:38.028 } 00:35:38.028 ], 00:35:38.028 "driver_specific": { 00:35:38.028 "raid": { 00:35:38.028 "uuid": "eec3e0b9-2d49-4b06-b1ea-5d4347ae57d0", 00:35:38.028 "strip_size_kb": 0, 00:35:38.028 "state": "online", 00:35:38.028 "raid_level": "raid1", 00:35:38.028 "superblock": true, 00:35:38.028 "num_base_bdevs": 2, 00:35:38.028 "num_base_bdevs_discovered": 2, 00:35:38.028 "num_base_bdevs_operational": 2, 00:35:38.028 "base_bdevs_list": [ 00:35:38.028 { 00:35:38.028 "name": "BaseBdev1", 00:35:38.028 "uuid": "36fad2bc-00d3-47fa-97f7-26cc7f235a90", 00:35:38.028 "is_configured": true, 00:35:38.028 "data_offset": 256, 00:35:38.028 "data_size": 7936 00:35:38.028 }, 00:35:38.028 { 00:35:38.028 "name": "BaseBdev2", 00:35:38.028 "uuid": "7a0e49a8-f0e2-4c75-af54-bec1d9522e1d", 00:35:38.028 "is_configured": true, 00:35:38.028 "data_offset": 256, 00:35:38.028 "data_size": 7936 00:35:38.028 } 00:35:38.028 ] 00:35:38.028 } 00:35:38.028 } 00:35:38.028 }' 00:35:38.028 07:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:38.028 07:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:35:38.028 BaseBdev2' 00:35:38.028 07:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:38.028 07:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:35:38.028 07:46:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:38.296 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:38.296 "name": "BaseBdev1", 00:35:38.296 "aliases": [ 00:35:38.296 
"36fad2bc-00d3-47fa-97f7-26cc7f235a90" 00:35:38.296 ], 00:35:38.296 "product_name": "Malloc disk", 00:35:38.296 "block_size": 4096, 00:35:38.296 "num_blocks": 8192, 00:35:38.296 "uuid": "36fad2bc-00d3-47fa-97f7-26cc7f235a90", 00:35:38.296 "md_size": 32, 00:35:38.296 "md_interleave": false, 00:35:38.296 "dif_type": 0, 00:35:38.296 "assigned_rate_limits": { 00:35:38.296 "rw_ios_per_sec": 0, 00:35:38.296 "rw_mbytes_per_sec": 0, 00:35:38.296 "r_mbytes_per_sec": 0, 00:35:38.296 "w_mbytes_per_sec": 0 00:35:38.296 }, 00:35:38.296 "claimed": true, 00:35:38.296 "claim_type": "exclusive_write", 00:35:38.296 "zoned": false, 00:35:38.296 "supported_io_types": { 00:35:38.296 "read": true, 00:35:38.296 "write": true, 00:35:38.296 "unmap": true, 00:35:38.296 "write_zeroes": true, 00:35:38.296 "flush": true, 00:35:38.296 "reset": true, 00:35:38.296 "compare": false, 00:35:38.296 "compare_and_write": false, 00:35:38.296 "abort": true, 00:35:38.296 "nvme_admin": false, 00:35:38.296 "nvme_io": false 00:35:38.296 }, 00:35:38.296 "memory_domains": [ 00:35:38.296 { 00:35:38.296 "dma_device_id": "system", 00:35:38.296 "dma_device_type": 1 00:35:38.296 }, 00:35:38.296 { 00:35:38.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:38.296 "dma_device_type": 2 00:35:38.296 } 00:35:38.296 ], 00:35:38.296 "driver_specific": {} 00:35:38.296 }' 00:35:38.296 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:38.296 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:38.599 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:38.599 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:38.599 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:38.599 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:35:38.599 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:38.599 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:38.599 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:35:38.599 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:38.599 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:38.869 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:35:38.869 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:38.869 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:38.869 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:35:38.869 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:38.869 "name": "BaseBdev2", 00:35:38.869 "aliases": [ 00:35:38.869 "7a0e49a8-f0e2-4c75-af54-bec1d9522e1d" 00:35:38.869 ], 00:35:38.869 "product_name": "Malloc disk", 00:35:38.869 "block_size": 4096, 00:35:38.869 "num_blocks": 8192, 
00:35:38.869 "uuid": "7a0e49a8-f0e2-4c75-af54-bec1d9522e1d", 00:35:38.869 "md_size": 32, 00:35:38.869 "md_interleave": false, 00:35:38.869 "dif_type": 0, 00:35:38.869 "assigned_rate_limits": { 00:35:38.869 "rw_ios_per_sec": 0, 00:35:38.869 "rw_mbytes_per_sec": 0, 00:35:38.869 "r_mbytes_per_sec": 0, 00:35:38.869 "w_mbytes_per_sec": 0 00:35:38.869 }, 00:35:38.869 "claimed": true, 00:35:38.869 "claim_type": "exclusive_write", 00:35:38.869 "zoned": false, 00:35:38.869 "supported_io_types": { 00:35:38.869 "read": true, 00:35:38.869 "write": true, 00:35:38.869 "unmap": true, 00:35:38.869 "write_zeroes": true, 00:35:38.869 "flush": true, 00:35:38.869 "reset": true, 00:35:38.869 "compare": false, 00:35:38.869 "compare_and_write": false, 00:35:38.869 "abort": true, 00:35:38.869 "nvme_admin": false, 00:35:38.869 "nvme_io": false 00:35:38.869 }, 00:35:38.869 "memory_domains": [ 00:35:38.869 { 00:35:38.869 "dma_device_id": "system", 00:35:38.869 "dma_device_type": 1 00:35:38.869 }, 00:35:38.869 { 00:35:38.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:38.869 "dma_device_type": 2 00:35:38.869 } 00:35:38.869 ], 00:35:38.869 "driver_specific": {} 00:35:38.869 }' 00:35:39.139 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:39.139 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:39.139 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:39.139 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:39.139 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:39.139 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:35:39.139 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:39.139 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:39.139 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:35:39.139 07:46:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:39.397 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:39.397 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:35:39.397 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:35:39.397 [2024-07-12 07:46:13.233838] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:39.397 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:35:39.397 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:35:39.397 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:35:39.397 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:35:39.397 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:35:39.397 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:35:39.397 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:39.397 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:39.397 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:39.397 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:39.397 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:39.397 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:39.397 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:39.397 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:39.655 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:39.655 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:39.655 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:39.913 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:39.913 "name": "Existed_Raid", 00:35:39.913 "uuid": "eec3e0b9-2d49-4b06-b1ea-5d4347ae57d0", 00:35:39.913 "strip_size_kb": 0, 00:35:39.913 "state": "online", 00:35:39.913 "raid_level": "raid1", 00:35:39.913 "superblock": true, 00:35:39.913 "num_base_bdevs": 2, 00:35:39.913 "num_base_bdevs_discovered": 1, 00:35:39.913 "num_base_bdevs_operational": 1, 00:35:39.913 "base_bdevs_list": [ 00:35:39.913 { 00:35:39.913 "name": null, 00:35:39.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:39.913 "is_configured": false, 00:35:39.913 "data_offset": 256, 00:35:39.913 "data_size": 7936 00:35:39.913 }, 00:35:39.913 { 00:35:39.913 "name": "BaseBdev2", 00:35:39.913 "uuid": "7a0e49a8-f0e2-4c75-af54-bec1d9522e1d", 00:35:39.913 "is_configured": true, 00:35:39.914 "data_offset": 256, 00:35:39.914 "data_size": 7936 00:35:39.914 } 00:35:39.914 ] 00:35:39.914 }' 00:35:39.914 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:39.914 07:46:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:40.479 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:35:40.479 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:35:40.479 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:40.479 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:35:40.737 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:35:40.737 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:40.737 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:35:40.995 [2024-07-12 07:46:14.624454] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:40.995 [2024-07-12 07:46:14.624595] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:40.995 [2024-07-12 07:46:14.646963] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:40.995 [2024-07-12 07:46:14.647015] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:40.995 [2024-07-12 07:46:14.647025] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:35:40.995 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:35:40.995 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:35:40.995 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:35:40.995 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:40.995 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:35:40.995 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:35:40.995 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:35:40.995 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 169750 00:35:40.995 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@946 -- # '[' -z 169750 ']' 00:35:40.995 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # kill -0 169750 00:35:40.995 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@951 -- # uname 00:35:40.995 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:40.995 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 169750 00:35:40.995 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:40.995 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:40.995 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 169750' 00:35:40.995 killing process with pid 169750 00:35:40.995 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@965 -- # kill 169750 00:35:40.995 07:46:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # wait 169750 00:35:40.995 [2024-07-12 07:46:14.869858] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:40.995 [2024-07-12 07:46:14.869932] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:41.563 
************************************
00:35:41.563 END TEST raid_state_function_test_sb_md_separate
00:35:41.563 ************************************
00:35:41.563 07:46:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0
00:35:41.563
00:35:41.563 real 0m9.660s
00:35:41.563 user 0m17.276s
00:35:41.563 sys 0m1.806s
00:35:41.563 07:46:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable
00:35:41.563 07:46:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:35:41.563 07:46:15 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2
00:35:41.563 07:46:15 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']'
00:35:41.563 07:46:15 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable
00:35:41.563 07:46:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:35:41.563 ************************************
00:35:41.563 START TEST raid_superblock_test_md_separate
00:35:41.563 ************************************
00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2
00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1
00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2
00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=()
00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc
00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=()
00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt
00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=()
00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid
00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1
00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size
00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg
00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid
00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev
00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']'
00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0
00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=170107
00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 170107 /var/tmp/spdk-raid.sock
00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
07:46:15 bdev_raid.raid_superblock_test_md_separate --
common/autotest_common.sh@827 -- # '[' -z 170107 ']' 00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:41.563 07:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:41.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:41.564 07:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:41.564 07:46:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:41.564 [2024-07-12 07:46:15.428573] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:41.564 [2024-07-12 07:46:15.428836] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170107 ] 00:35:41.823 [2024-07-12 07:46:15.584115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:41.823 [2024-07-12 07:46:15.652237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:42.081 [2024-07-12 07:46:15.710644] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:42.649 07:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:42.649 07:46:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # return 0 00:35:42.649 07:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:35:42.650 07:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:42.650 07:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:35:42.650 07:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:35:42.650 07:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:35:42.650 07:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:42.650 07:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:35:42.650 07:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:42.650 07:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:35:42.909 malloc1 00:35:42.909 07:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:43.168 [2024-07-12 07:46:16.844430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:43.168 [2024-07-12 07:46:16.844525] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:43.168 [2024-07-12 
07:46:16.844560] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:35:43.168 [2024-07-12 07:46:16.844602] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:43.168 [2024-07-12 07:46:16.846838] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:43.168 [2024-07-12 07:46:16.846907] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:43.168 pt1 00:35:43.168 07:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:35:43.168 07:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:43.168 07:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:35:43.168 07:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:35:43.168 07:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:35:43.168 07:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:43.168 07:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:35:43.168 07:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:43.168 07:46:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:35:43.427 malloc2 00:35:43.427 07:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:43.687 [2024-07-12 07:46:17.361547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:43.687 [2024-07-12 07:46:17.361605] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:43.687 [2024-07-12 07:46:17.361640] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:35:43.687 [2024-07-12 07:46:17.361677] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:43.687 [2024-07-12 07:46:17.363667] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:43.687 [2024-07-12 07:46:17.363711] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:43.687 pt2 00:35:43.687 07:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:35:43.687 07:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:43.687 07:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:35:43.687 [2024-07-12 07:46:17.541629] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:43.687 [2024-07-12 07:46:17.543695] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:43.687 [2024-07-12 07:46:17.543874] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:35:43.687 [2024-07-12 07:46:17.543885] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:43.687 [2024-07-12 07:46:17.544004] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:35:43.687 [2024-07-12 07:46:17.544143] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:35:43.687 [2024-07-12 07:46:17.544151] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:35:43.687 [2024-07-12 07:46:17.544221] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:43.687 07:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:43.687 07:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:43.687 07:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:43.687 07:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:43.687 07:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:43.687 07:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:43.687 07:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:43.687 07:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:43.687 07:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:43.687 07:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:43.687 07:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:43.687 07:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:43.946 07:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:43.946 "name": "raid_bdev1", 00:35:43.946 "uuid": "843e9cf7-8cfe-4f09-afc3-3234706a9448", 00:35:43.946 "strip_size_kb": 0, 00:35:43.946 "state": "online", 00:35:43.946 "raid_level": "raid1", 00:35:43.946 "superblock": true, 00:35:43.946 "num_base_bdevs": 2, 00:35:43.946 "num_base_bdevs_discovered": 2, 00:35:43.946 "num_base_bdevs_operational": 2, 00:35:43.946 "base_bdevs_list": [ 00:35:43.946 { 00:35:43.946 "name": "pt1", 00:35:43.946 "uuid": "58ed884c-53f6-5a5b-b6c4-9a120beb017c", 00:35:43.946 "is_configured": true, 00:35:43.946 "data_offset": 256, 00:35:43.946 "data_size": 7936 00:35:43.946 }, 00:35:43.946 { 00:35:43.946 "name": "pt2", 00:35:43.946 "uuid": "5aeb7b4b-c02d-5534-9a7a-26ef12d2642b", 00:35:43.947 "is_configured": true, 00:35:43.947 "data_offset": 256, 00:35:43.947 "data_size": 7936 00:35:43.947 } 00:35:43.947 ] 00:35:43.947 }' 00:35:43.947 07:46:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:43.947 07:46:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:44.515 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:35:44.515 07:46:18 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:35:44.515 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:44.515 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:44.515 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:44.515 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:35:44.515 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:44.515 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:44.775 [2024-07-12 07:46:18.397927] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:44.775 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:44.775 "name": "raid_bdev1", 00:35:44.775 "aliases": [ 00:35:44.775 "843e9cf7-8cfe-4f09-afc3-3234706a9448" 00:35:44.775 ], 00:35:44.775 "product_name": "Raid Volume", 00:35:44.775 "block_size": 4096, 00:35:44.775 "num_blocks": 7936, 00:35:44.775 "uuid": "843e9cf7-8cfe-4f09-afc3-3234706a9448", 00:35:44.775 "md_size": 32, 00:35:44.775 "md_interleave": false, 00:35:44.775 "dif_type": 0, 00:35:44.775 "assigned_rate_limits": { 00:35:44.775 "rw_ios_per_sec": 0, 00:35:44.775 "rw_mbytes_per_sec": 0, 00:35:44.775 "r_mbytes_per_sec": 0, 00:35:44.775 "w_mbytes_per_sec": 0 00:35:44.775 }, 00:35:44.775 "claimed": false, 00:35:44.775 "zoned": false, 00:35:44.775 "supported_io_types": { 00:35:44.775 "read": true, 00:35:44.775 "write": true, 00:35:44.775 "unmap": false, 00:35:44.775 "write_zeroes": true, 00:35:44.775 "flush": false, 00:35:44.775 "reset": true, 00:35:44.775 "compare": false, 00:35:44.775 "compare_and_write": false, 00:35:44.775 "abort": false, 00:35:44.775 "nvme_admin": false, 00:35:44.775 "nvme_io": false 00:35:44.775 }, 00:35:44.775 "memory_domains": [ 00:35:44.775 { 00:35:44.775 "dma_device_id": "system", 00:35:44.775 "dma_device_type": 1 00:35:44.775 }, 00:35:44.775 { 00:35:44.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:44.775 "dma_device_type": 2 00:35:44.775 }, 00:35:44.775 { 00:35:44.775 "dma_device_id": "system", 00:35:44.775 "dma_device_type": 1 00:35:44.775 }, 00:35:44.775 { 00:35:44.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:44.775 "dma_device_type": 2 00:35:44.775 } 00:35:44.775 ], 00:35:44.775 "driver_specific": { 00:35:44.775 "raid": { 00:35:44.775 "uuid": "843e9cf7-8cfe-4f09-afc3-3234706a9448", 00:35:44.775 "strip_size_kb": 0, 00:35:44.775 "state": "online", 00:35:44.775 "raid_level": "raid1", 00:35:44.775 "superblock": true, 00:35:44.775 "num_base_bdevs": 2, 00:35:44.775 "num_base_bdevs_discovered": 2, 00:35:44.775 "num_base_bdevs_operational": 2, 00:35:44.775 "base_bdevs_list": [ 00:35:44.775 { 00:35:44.775 "name": "pt1", 00:35:44.775 "uuid": "58ed884c-53f6-5a5b-b6c4-9a120beb017c", 00:35:44.775 "is_configured": true, 00:35:44.775 "data_offset": 256, 00:35:44.775 "data_size": 7936 00:35:44.775 }, 00:35:44.775 { 00:35:44.775 "name": "pt2", 00:35:44.775 "uuid": "5aeb7b4b-c02d-5534-9a7a-26ef12d2642b", 00:35:44.775 "is_configured": true, 00:35:44.775 "data_offset": 256, 00:35:44.775 "data_size": 7936 00:35:44.775 } 00:35:44.775 ] 00:35:44.775 } 00:35:44.775 } 00:35:44.775 }' 00:35:44.775 07:46:18 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:44.775 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:35:44.775 pt2' 00:35:44.775 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:44.775 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:35:44.775 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:45.034 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:45.034 "name": "pt1", 00:35:45.035 "aliases": [ 00:35:45.035 "58ed884c-53f6-5a5b-b6c4-9a120beb017c" 00:35:45.035 ], 00:35:45.035 "product_name": "passthru", 00:35:45.035 "block_size": 4096, 00:35:45.035 "num_blocks": 8192, 00:35:45.035 "uuid": "58ed884c-53f6-5a5b-b6c4-9a120beb017c", 00:35:45.035 "md_size": 32, 00:35:45.035 "md_interleave": false, 00:35:45.035 "dif_type": 0, 00:35:45.035 "assigned_rate_limits": { 00:35:45.035 "rw_ios_per_sec": 0, 00:35:45.035 "rw_mbytes_per_sec": 0, 00:35:45.035 "r_mbytes_per_sec": 0, 00:35:45.035 "w_mbytes_per_sec": 0 00:35:45.035 }, 00:35:45.035 "claimed": true, 00:35:45.035 "claim_type": "exclusive_write", 00:35:45.035 "zoned": false, 00:35:45.035 "supported_io_types": { 00:35:45.035 "read": true, 00:35:45.035 "write": true, 00:35:45.035 "unmap": true, 00:35:45.035 "write_zeroes": true, 00:35:45.035 "flush": true, 00:35:45.035 "reset": true, 00:35:45.035 "compare": false, 00:35:45.035 "compare_and_write": false, 00:35:45.035 "abort": true, 00:35:45.035 "nvme_admin": false, 00:35:45.035 "nvme_io": false 00:35:45.035 }, 00:35:45.035 "memory_domains": [ 00:35:45.035 { 00:35:45.035 "dma_device_id": "system", 00:35:45.035 "dma_device_type": 1 00:35:45.035 }, 00:35:45.035 { 00:35:45.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:45.035 "dma_device_type": 2 00:35:45.035 } 00:35:45.035 ], 00:35:45.035 "driver_specific": { 00:35:45.035 "passthru": { 00:35:45.035 "name": "pt1", 00:35:45.035 "base_bdev_name": "malloc1" 00:35:45.035 } 00:35:45.035 } 00:35:45.035 }' 00:35:45.035 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:45.035 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:45.035 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:45.035 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:45.035 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:45.035 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:35:45.035 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:45.035 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:45.035 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:35:45.035 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:45.035 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:35:45.294 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:35:45.294 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:45.294 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:35:45.294 07:46:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:45.553 07:46:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:45.553 "name": "pt2", 00:35:45.553 "aliases": [ 00:35:45.553 "5aeb7b4b-c02d-5534-9a7a-26ef12d2642b" 00:35:45.553 ], 00:35:45.553 "product_name": "passthru", 00:35:45.553 "block_size": 4096, 00:35:45.553 "num_blocks": 8192, 00:35:45.553 "uuid": "5aeb7b4b-c02d-5534-9a7a-26ef12d2642b", 00:35:45.553 "md_size": 32, 00:35:45.553 "md_interleave": false, 00:35:45.553 "dif_type": 0, 00:35:45.553 "assigned_rate_limits": { 00:35:45.553 "rw_ios_per_sec": 0, 00:35:45.553 "rw_mbytes_per_sec": 0, 00:35:45.553 "r_mbytes_per_sec": 0, 00:35:45.553 "w_mbytes_per_sec": 0 00:35:45.553 }, 00:35:45.553 "claimed": true, 00:35:45.553 "claim_type": "exclusive_write", 00:35:45.554 "zoned": false, 00:35:45.554 "supported_io_types": { 00:35:45.554 "read": true, 00:35:45.554 "write": true, 00:35:45.554 "unmap": true, 00:35:45.554 "write_zeroes": true, 00:35:45.554 "flush": true, 00:35:45.554 "reset": true, 00:35:45.554 "compare": false, 00:35:45.554 "compare_and_write": false, 00:35:45.554 "abort": true, 00:35:45.554 "nvme_admin": false, 00:35:45.554 "nvme_io": false 00:35:45.554 }, 00:35:45.554 "memory_domains": [ 00:35:45.554 { 00:35:45.554 "dma_device_id": "system", 00:35:45.554 "dma_device_type": 1 00:35:45.554 }, 00:35:45.554 { 00:35:45.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:45.554 "dma_device_type": 2 00:35:45.554 } 00:35:45.554 ], 00:35:45.554 "driver_specific": { 00:35:45.554 "passthru": { 00:35:45.554 "name": "pt2", 00:35:45.554 "base_bdev_name": "malloc2" 00:35:45.554 } 00:35:45.554 } 00:35:45.554 }' 00:35:45.554 07:46:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:45.554 07:46:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:45.554 07:46:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:45.554 07:46:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:45.554 07:46:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:45.554 07:46:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:35:45.554 07:46:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:45.554 07:46:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:45.813 07:46:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:35:45.813 07:46:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:45.813 07:46:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:45.813 07:46:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:35:45.813 07:46:19 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:45.813 07:46:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:35:45.813 [2024-07-12 07:46:19.678160] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:45.813 07:46:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=843e9cf7-8cfe-4f09-afc3-3234706a9448 00:35:45.813 07:46:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 843e9cf7-8cfe-4f09-afc3-3234706a9448 ']' 00:35:46.072 07:46:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:46.072 [2024-07-12 07:46:19.949994] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:46.072 [2024-07-12 07:46:19.950016] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:46.072 [2024-07-12 07:46:19.950103] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:46.072 [2024-07-12 07:46:19.950169] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:46.072 [2024-07-12 07:46:19.950178] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:35:46.331 07:46:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:46.331 07:46:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:35:46.331 07:46:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:35:46.331 07:46:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:35:46.331 07:46:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:35:46.331 07:46:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:35:46.590 07:46:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:35:46.590 07:46:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:46.849 07:46:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:35:46.849 07:46:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:35:47.109 07:46:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:35:47.109 07:46:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:35:47.109 07:46:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:35:47.109 07:46:20 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:35:47.109 07:46:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:47.109 07:46:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:47.109 07:46:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:47.109 07:46:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:47.109 07:46:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:47.109 07:46:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:47.109 07:46:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:47.109 07:46:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:35:47.109 07:46:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:35:47.109 [2024-07-12 07:46:20.962140] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:35:47.109 [2024-07-12 07:46:20.964096] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:35:47.109 [2024-07-12 07:46:20.964161] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:35:47.109 [2024-07-12 07:46:20.964220] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:35:47.109 [2024-07-12 07:46:20.964248] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:47.109 [2024-07-12 07:46:20.964257] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:35:47.109 request: 00:35:47.109 { 00:35:47.109 "name": "raid_bdev1", 00:35:47.109 "raid_level": "raid1", 00:35:47.109 "base_bdevs": [ 00:35:47.109 "malloc1", 00:35:47.109 "malloc2" 00:35:47.109 ], 00:35:47.109 "superblock": false, 00:35:47.109 "method": "bdev_raid_create", 00:35:47.109 "req_id": 1 00:35:47.109 } 00:35:47.109 Got JSON-RPC error response 00:35:47.109 response: 00:35:47.109 { 00:35:47.109 "code": -17, 00:35:47.109 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:35:47.109 } 00:35:47.109 07:46:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:35:47.109 07:46:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:47.109 07:46:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:47.109 07:46:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:47.109 07:46:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:47.109 07:46:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:35:47.368 07:46:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:35:47.368 07:46:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:35:47.368 07:46:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:47.627 [2024-07-12 07:46:21.318175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:47.627 [2024-07-12 07:46:21.318239] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:47.628 [2024-07-12 07:46:21.318265] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:35:47.628 [2024-07-12 07:46:21.318291] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:47.628 [2024-07-12 07:46:21.320265] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:47.628 [2024-07-12 07:46:21.320314] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:47.628 [2024-07-12 07:46:21.320367] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:47.628 [2024-07-12 07:46:21.320421] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:47.628 pt1 00:35:47.628 07:46:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:35:47.628 07:46:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:47.628 07:46:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:47.628 07:46:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:47.628 07:46:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:47.628 07:46:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:47.628 07:46:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:47.628 07:46:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:47.628 07:46:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:47.628 07:46:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:47.628 07:46:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:47.628 07:46:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:47.887 07:46:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:47.887 "name": "raid_bdev1", 00:35:47.887 "uuid": "843e9cf7-8cfe-4f09-afc3-3234706a9448", 00:35:47.887 "strip_size_kb": 0, 00:35:47.887 "state": "configuring", 00:35:47.887 "raid_level": "raid1", 00:35:47.887 "superblock": 
true, 00:35:47.887 "num_base_bdevs": 2, 00:35:47.887 "num_base_bdevs_discovered": 1, 00:35:47.887 "num_base_bdevs_operational": 2, 00:35:47.887 "base_bdevs_list": [ 00:35:47.887 { 00:35:47.887 "name": "pt1", 00:35:47.887 "uuid": "58ed884c-53f6-5a5b-b6c4-9a120beb017c", 00:35:47.887 "is_configured": true, 00:35:47.887 "data_offset": 256, 00:35:47.887 "data_size": 7936 00:35:47.887 }, 00:35:47.887 { 00:35:47.887 "name": null, 00:35:47.887 "uuid": "5aeb7b4b-c02d-5534-9a7a-26ef12d2642b", 00:35:47.887 "is_configured": false, 00:35:47.887 "data_offset": 256, 00:35:47.887 "data_size": 7936 00:35:47.887 } 00:35:47.887 ] 00:35:47.887 }' 00:35:47.887 07:46:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:47.887 07:46:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:48.456 07:46:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:35:48.456 07:46:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:35:48.456 07:46:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:35:48.456 07:46:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:48.716 [2024-07-12 07:46:22.426393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:48.716 [2024-07-12 07:46:22.426475] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:48.716 [2024-07-12 07:46:22.426511] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:35:48.716 [2024-07-12 07:46:22.426537] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:48.716 [2024-07-12 07:46:22.426691] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:48.716 [2024-07-12 07:46:22.426719] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:48.716 [2024-07-12 07:46:22.426795] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:48.716 [2024-07-12 07:46:22.426812] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:48.716 [2024-07-12 07:46:22.426892] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:35:48.716 [2024-07-12 07:46:22.426902] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:48.716 [2024-07-12 07:46:22.426964] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:35:48.716 [2024-07-12 07:46:22.427038] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:35:48.716 [2024-07-12 07:46:22.427046] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:35:48.716 [2024-07-12 07:46:22.427099] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:48.716 pt2 00:35:48.716 07:46:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:35:48.716 07:46:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:35:48.716 07:46:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 2 00:35:48.716 07:46:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:48.716 07:46:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:48.716 07:46:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:48.716 07:46:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:48.716 07:46:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:48.716 07:46:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:48.716 07:46:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:48.716 07:46:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:48.716 07:46:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:48.716 07:46:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:48.716 07:46:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:48.975 07:46:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:48.975 "name": "raid_bdev1", 00:35:48.975 "uuid": "843e9cf7-8cfe-4f09-afc3-3234706a9448", 00:35:48.975 "strip_size_kb": 0, 00:35:48.975 "state": "online", 00:35:48.975 "raid_level": "raid1", 00:35:48.975 "superblock": true, 00:35:48.975 "num_base_bdevs": 2, 00:35:48.975 "num_base_bdevs_discovered": 2, 00:35:48.975 "num_base_bdevs_operational": 2, 00:35:48.975 "base_bdevs_list": [ 00:35:48.975 { 00:35:48.975 "name": "pt1", 00:35:48.976 "uuid": "58ed884c-53f6-5a5b-b6c4-9a120beb017c", 00:35:48.976 "is_configured": true, 00:35:48.976 "data_offset": 256, 00:35:48.976 "data_size": 7936 00:35:48.976 }, 00:35:48.976 { 00:35:48.976 "name": "pt2", 00:35:48.976 "uuid": "5aeb7b4b-c02d-5534-9a7a-26ef12d2642b", 00:35:48.976 "is_configured": true, 00:35:48.976 "data_offset": 256, 00:35:48.976 "data_size": 7936 00:35:48.976 } 00:35:48.976 ] 00:35:48.976 }' 00:35:48.976 07:46:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:48.976 07:46:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:49.554 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:35:49.554 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:35:49.554 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:49.554 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:49.554 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:49.554 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:35:49.554 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:49.554 07:46:23 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:49.554 [2024-07-12 07:46:23.382738] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:49.554 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:49.554 "name": "raid_bdev1", 00:35:49.554 "aliases": [ 00:35:49.554 "843e9cf7-8cfe-4f09-afc3-3234706a9448" 00:35:49.554 ], 00:35:49.554 "product_name": "Raid Volume", 00:35:49.554 "block_size": 4096, 00:35:49.554 "num_blocks": 7936, 00:35:49.554 "uuid": "843e9cf7-8cfe-4f09-afc3-3234706a9448", 00:35:49.554 "md_size": 32, 00:35:49.554 "md_interleave": false, 00:35:49.554 "dif_type": 0, 00:35:49.554 "assigned_rate_limits": { 00:35:49.554 "rw_ios_per_sec": 0, 00:35:49.554 "rw_mbytes_per_sec": 0, 00:35:49.554 "r_mbytes_per_sec": 0, 00:35:49.554 "w_mbytes_per_sec": 0 00:35:49.554 }, 00:35:49.554 "claimed": false, 00:35:49.554 "zoned": false, 00:35:49.554 "supported_io_types": { 00:35:49.554 "read": true, 00:35:49.554 "write": true, 00:35:49.554 "unmap": false, 00:35:49.554 "write_zeroes": true, 00:35:49.554 "flush": false, 00:35:49.554 "reset": true, 00:35:49.554 "compare": false, 00:35:49.554 "compare_and_write": false, 00:35:49.554 "abort": false, 00:35:49.554 "nvme_admin": false, 00:35:49.554 "nvme_io": false 00:35:49.554 }, 00:35:49.554 "memory_domains": [ 00:35:49.554 { 00:35:49.554 "dma_device_id": "system", 00:35:49.554 "dma_device_type": 1 00:35:49.554 }, 00:35:49.554 { 00:35:49.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:49.554 "dma_device_type": 2 00:35:49.554 }, 00:35:49.554 { 00:35:49.554 "dma_device_id": "system", 00:35:49.554 "dma_device_type": 1 00:35:49.554 }, 00:35:49.554 { 00:35:49.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:49.554 "dma_device_type": 2 00:35:49.554 } 00:35:49.554 ], 00:35:49.554 "driver_specific": { 00:35:49.554 "raid": { 00:35:49.554 "uuid": "843e9cf7-8cfe-4f09-afc3-3234706a9448", 00:35:49.554 "strip_size_kb": 0, 00:35:49.554 "state": "online", 00:35:49.554 "raid_level": "raid1", 00:35:49.554 "superblock": true, 00:35:49.554 "num_base_bdevs": 2, 00:35:49.554 "num_base_bdevs_discovered": 2, 00:35:49.554 "num_base_bdevs_operational": 2, 00:35:49.554 "base_bdevs_list": [ 00:35:49.554 { 00:35:49.554 "name": "pt1", 00:35:49.554 "uuid": "58ed884c-53f6-5a5b-b6c4-9a120beb017c", 00:35:49.554 "is_configured": true, 00:35:49.554 "data_offset": 256, 00:35:49.554 "data_size": 7936 00:35:49.554 }, 00:35:49.554 { 00:35:49.554 "name": "pt2", 00:35:49.554 "uuid": "5aeb7b4b-c02d-5534-9a7a-26ef12d2642b", 00:35:49.554 "is_configured": true, 00:35:49.554 "data_offset": 256, 00:35:49.554 "data_size": 7936 00:35:49.554 } 00:35:49.554 ] 00:35:49.554 } 00:35:49.554 } 00:35:49.554 }' 00:35:49.554 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:49.812 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:35:49.812 pt2' 00:35:49.812 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:49.812 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:35:49.812 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:49.812 07:46:23 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:49.812 "name": "pt1", 00:35:49.812 "aliases": [ 00:35:49.812 "58ed884c-53f6-5a5b-b6c4-9a120beb017c" 00:35:49.812 ], 00:35:49.812 "product_name": "passthru", 00:35:49.812 "block_size": 4096, 00:35:49.812 "num_blocks": 8192, 00:35:49.813 "uuid": "58ed884c-53f6-5a5b-b6c4-9a120beb017c", 00:35:49.813 "md_size": 32, 00:35:49.813 "md_interleave": false, 00:35:49.813 "dif_type": 0, 00:35:49.813 "assigned_rate_limits": { 00:35:49.813 "rw_ios_per_sec": 0, 00:35:49.813 "rw_mbytes_per_sec": 0, 00:35:49.813 "r_mbytes_per_sec": 0, 00:35:49.813 "w_mbytes_per_sec": 0 00:35:49.813 }, 00:35:49.813 "claimed": true, 00:35:49.813 "claim_type": "exclusive_write", 00:35:49.813 "zoned": false, 00:35:49.813 "supported_io_types": { 00:35:49.813 "read": true, 00:35:49.813 "write": true, 00:35:49.813 "unmap": true, 00:35:49.813 "write_zeroes": true, 00:35:49.813 "flush": true, 00:35:49.813 "reset": true, 00:35:49.813 "compare": false, 00:35:49.813 "compare_and_write": false, 00:35:49.813 "abort": true, 00:35:49.813 "nvme_admin": false, 00:35:49.813 "nvme_io": false 00:35:49.813 }, 00:35:49.813 "memory_domains": [ 00:35:49.813 { 00:35:49.813 "dma_device_id": "system", 00:35:49.813 "dma_device_type": 1 00:35:49.813 }, 00:35:49.813 { 00:35:49.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:49.813 "dma_device_type": 2 00:35:49.813 } 00:35:49.813 ], 00:35:49.813 "driver_specific": { 00:35:49.813 "passthru": { 00:35:49.813 "name": "pt1", 00:35:49.813 "base_bdev_name": "malloc1" 00:35:49.813 } 00:35:49.813 } 00:35:49.813 }' 00:35:49.813 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:49.813 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:50.071 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:50.071 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:50.071 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:50.071 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:35:50.071 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:50.071 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:50.071 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:35:50.071 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:50.071 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:50.329 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:35:50.329 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:50.329 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:35:50.329 07:46:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:50.588 07:46:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:50.588 "name": "pt2", 00:35:50.588 "aliases": [ 00:35:50.588 
"5aeb7b4b-c02d-5534-9a7a-26ef12d2642b" 00:35:50.588 ], 00:35:50.588 "product_name": "passthru", 00:35:50.588 "block_size": 4096, 00:35:50.588 "num_blocks": 8192, 00:35:50.588 "uuid": "5aeb7b4b-c02d-5534-9a7a-26ef12d2642b", 00:35:50.588 "md_size": 32, 00:35:50.588 "md_interleave": false, 00:35:50.588 "dif_type": 0, 00:35:50.588 "assigned_rate_limits": { 00:35:50.588 "rw_ios_per_sec": 0, 00:35:50.588 "rw_mbytes_per_sec": 0, 00:35:50.588 "r_mbytes_per_sec": 0, 00:35:50.588 "w_mbytes_per_sec": 0 00:35:50.588 }, 00:35:50.588 "claimed": true, 00:35:50.588 "claim_type": "exclusive_write", 00:35:50.588 "zoned": false, 00:35:50.588 "supported_io_types": { 00:35:50.588 "read": true, 00:35:50.588 "write": true, 00:35:50.588 "unmap": true, 00:35:50.588 "write_zeroes": true, 00:35:50.588 "flush": true, 00:35:50.588 "reset": true, 00:35:50.588 "compare": false, 00:35:50.588 "compare_and_write": false, 00:35:50.588 "abort": true, 00:35:50.588 "nvme_admin": false, 00:35:50.588 "nvme_io": false 00:35:50.588 }, 00:35:50.588 "memory_domains": [ 00:35:50.588 { 00:35:50.588 "dma_device_id": "system", 00:35:50.588 "dma_device_type": 1 00:35:50.588 }, 00:35:50.588 { 00:35:50.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:50.588 "dma_device_type": 2 00:35:50.588 } 00:35:50.588 ], 00:35:50.588 "driver_specific": { 00:35:50.588 "passthru": { 00:35:50.588 "name": "pt2", 00:35:50.588 "base_bdev_name": "malloc2" 00:35:50.588 } 00:35:50.588 } 00:35:50.588 }' 00:35:50.588 07:46:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:50.588 07:46:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:50.588 07:46:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:50.588 07:46:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:50.588 07:46:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:50.588 07:46:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:35:50.588 07:46:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:50.847 07:46:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:50.847 07:46:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:35:50.847 07:46:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:50.847 07:46:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:50.847 07:46:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:35:50.847 07:46:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:50.847 07:46:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:35:51.107 [2024-07-12 07:46:24.882986] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:51.107 07:46:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 843e9cf7-8cfe-4f09-afc3-3234706a9448 '!=' 843e9cf7-8cfe-4f09-afc3-3234706a9448 ']' 00:35:51.107 07:46:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:35:51.107 07:46:24 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:35:51.107 07:46:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:35:51.107 07:46:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:35:51.366 [2024-07-12 07:46:25.138886] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:35:51.366 07:46:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:51.366 07:46:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:51.366 07:46:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:51.366 07:46:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:51.366 07:46:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:51.366 07:46:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:51.366 07:46:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:51.366 07:46:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:51.366 07:46:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:51.366 07:46:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:51.366 07:46:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:51.366 07:46:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:51.625 07:46:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:51.625 "name": "raid_bdev1", 00:35:51.625 "uuid": "843e9cf7-8cfe-4f09-afc3-3234706a9448", 00:35:51.625 "strip_size_kb": 0, 00:35:51.625 "state": "online", 00:35:51.625 "raid_level": "raid1", 00:35:51.625 "superblock": true, 00:35:51.625 "num_base_bdevs": 2, 00:35:51.625 "num_base_bdevs_discovered": 1, 00:35:51.625 "num_base_bdevs_operational": 1, 00:35:51.625 "base_bdevs_list": [ 00:35:51.625 { 00:35:51.625 "name": null, 00:35:51.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:51.625 "is_configured": false, 00:35:51.625 "data_offset": 256, 00:35:51.625 "data_size": 7936 00:35:51.625 }, 00:35:51.625 { 00:35:51.625 "name": "pt2", 00:35:51.625 "uuid": "5aeb7b4b-c02d-5534-9a7a-26ef12d2642b", 00:35:51.625 "is_configured": true, 00:35:51.625 "data_offset": 256, 00:35:51.625 "data_size": 7936 00:35:51.625 } 00:35:51.625 ] 00:35:51.625 }' 00:35:51.625 07:46:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:51.626 07:46:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:52.193 07:46:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:52.193 [2024-07-12 07:46:26.055016] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:52.193 
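# --- annotation (not part of the captured log) -----------------------------
# Removing pt1 above exercises hot removal from a raid1 volume: because raid1
# carries redundancy (has_redundancy returned 0), raid_bdev1 stays "online"
# but degraded, with num_base_bdevs_discovered dropping to 1 and the missing
# slot reported as "name": null. A hedged re-check of that state, reusing the
# socket and names from this run:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
$rpc -s $sock bdev_passthru_delete pt1
$rpc -s $sock bdev_raid_get_bdevs all \
  | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)"'
# expected output for the degraded array: online 1
# ----------------------------------------------------------------------------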
[2024-07-12 07:46:26.055040] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:52.193 [2024-07-12 07:46:26.055086] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:52.193 [2024-07-12 07:46:26.055122] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:52.193 [2024-07-12 07:46:26.055130] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:35:52.193 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:35:52.193 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:52.452 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:35:52.452 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:35:52.452 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:35:52.452 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:35:52.452 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:52.712 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:35:52.712 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:35:52.712 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:35:52.712 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:35:52.712 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:35:52.712 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:52.971 [2024-07-12 07:46:26.739113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:52.971 [2024-07-12 07:46:26.739183] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:52.971 [2024-07-12 07:46:26.739210] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:35:52.971 [2024-07-12 07:46:26.739246] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:52.971 [2024-07-12 07:46:26.741357] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:52.971 [2024-07-12 07:46:26.741420] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:52.971 [2024-07-12 07:46:26.741477] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:52.971 [2024-07-12 07:46:26.741504] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:52.971 [2024-07-12 07:46:26.741552] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:35:52.971 [2024-07-12 07:46:26.741559] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:52.971 [2024-07-12 07:46:26.741616] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:35:52.971 [2024-07-12 07:46:26.741694] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:35:52.971 [2024-07-12 07:46:26.741703] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:35:52.971 [2024-07-12 07:46:26.741746] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:52.971 pt2 00:35:52.971 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:52.971 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:52.971 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:52.971 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:52.971 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:52.971 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:52.971 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:52.971 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:52.971 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:52.971 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:52.971 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:52.971 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:53.231 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:53.232 "name": "raid_bdev1", 00:35:53.232 "uuid": "843e9cf7-8cfe-4f09-afc3-3234706a9448", 00:35:53.232 "strip_size_kb": 0, 00:35:53.232 "state": "online", 00:35:53.232 "raid_level": "raid1", 00:35:53.232 "superblock": true, 00:35:53.232 "num_base_bdevs": 2, 00:35:53.232 "num_base_bdevs_discovered": 1, 00:35:53.232 "num_base_bdevs_operational": 1, 00:35:53.232 "base_bdevs_list": [ 00:35:53.232 { 00:35:53.232 "name": null, 00:35:53.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:53.232 "is_configured": false, 00:35:53.232 "data_offset": 256, 00:35:53.232 "data_size": 7936 00:35:53.232 }, 00:35:53.232 { 00:35:53.232 "name": "pt2", 00:35:53.232 "uuid": "5aeb7b4b-c02d-5534-9a7a-26ef12d2642b", 00:35:53.232 "is_configured": true, 00:35:53.232 "data_offset": 256, 00:35:53.232 "data_size": 7936 00:35:53.232 } 00:35:53.232 ] 00:35:53.232 }' 00:35:53.232 07:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:53.232 07:46:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:53.800 07:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:53.800 [2024-07-12 07:46:27.583244] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:53.800 
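# --- annotation (not part of the captured log) -----------------------------
# The sequence above demonstrates reassembly from the on-disk superblock:
# after raid_bdev1 and both passthru bdevs were deleted, re-registering pt2
# alone let the raid module's examine path find the superblock and bring
# raid_bdev1 back online in degraded form (1 of 2 base bdevs), with no new
# bdev_raid_create call. A hedged sketch, assuming the superblock written
# earlier in this run is still present on malloc2:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
$rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
$rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'
# expected: online (degraded, num_base_bdevs_discovered == 1)
# ----------------------------------------------------------------------------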
[2024-07-12 07:46:27.583265] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:53.800 [2024-07-12 07:46:27.583315] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:53.800 [2024-07-12 07:46:27.583344] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:53.800 [2024-07-12 07:46:27.583352] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:35:53.800 07:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:53.800 07:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:35:54.059 07:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:35:54.059 07:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:35:54.059 07:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:35:54.059 07:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:54.318 [2024-07-12 07:46:28.011299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:54.318 [2024-07-12 07:46:28.011371] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:54.318 [2024-07-12 07:46:28.011400] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:35:54.318 [2024-07-12 07:46:28.011417] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:54.318 [2024-07-12 07:46:28.013564] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:54.318 [2024-07-12 07:46:28.013604] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:54.318 [2024-07-12 07:46:28.013658] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:54.318 [2024-07-12 07:46:28.013678] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:54.318 [2024-07-12 07:46:28.013781] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:35:54.318 [2024-07-12 07:46:28.013789] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:54.318 [2024-07-12 07:46:28.013811] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:35:54.318 [2024-07-12 07:46:28.013880] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:54.318 [2024-07-12 07:46:28.013956] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:35:54.318 [2024-07-12 07:46:28.013965] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:54.318 [2024-07-12 07:46:28.014014] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:35:54.318 [2024-07-12 07:46:28.014071] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:35:54.318 [2024-07-12 07:46:28.014079] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:35:54.318 [2024-07-12 07:46:28.014127] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:54.318 pt1 00:35:54.318 07:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:35:54.318 07:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:54.318 07:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:54.318 07:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:54.318 07:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:54.318 07:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:54.318 07:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:54.318 07:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:54.318 07:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:54.318 07:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:54.318 07:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:54.318 07:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:54.318 07:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:54.576 07:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:54.576 "name": "raid_bdev1", 00:35:54.576 "uuid": "843e9cf7-8cfe-4f09-afc3-3234706a9448", 00:35:54.576 "strip_size_kb": 0, 00:35:54.576 "state": "online", 00:35:54.576 "raid_level": "raid1", 00:35:54.576 "superblock": true, 00:35:54.576 "num_base_bdevs": 2, 00:35:54.576 "num_base_bdevs_discovered": 1, 00:35:54.576 "num_base_bdevs_operational": 1, 00:35:54.576 "base_bdevs_list": [ 00:35:54.576 { 00:35:54.576 "name": null, 00:35:54.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:54.576 "is_configured": false, 00:35:54.576 "data_offset": 256, 00:35:54.576 "data_size": 7936 00:35:54.576 }, 00:35:54.576 { 00:35:54.576 "name": "pt2", 00:35:54.576 "uuid": "5aeb7b4b-c02d-5534-9a7a-26ef12d2642b", 00:35:54.576 "is_configured": true, 00:35:54.576 "data_offset": 256, 00:35:54.576 "data_size": 7936 00:35:54.576 } 00:35:54.576 ] 00:35:54.576 }' 00:35:54.576 07:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:54.576 07:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:55.143 07:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:35:55.143 07:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:35:55.143 07:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:35:55.143 07:46:28 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:55.143 07:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:35:55.401 [2024-07-12 07:46:29.083618] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:55.401 07:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 843e9cf7-8cfe-4f09-afc3-3234706a9448 '!=' 843e9cf7-8cfe-4f09-afc3-3234706a9448 ']' 00:35:55.402 07:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 170107 00:35:55.402 07:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@946 -- # '[' -z 170107 ']' 00:35:55.402 07:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # kill -0 170107 00:35:55.402 07:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@951 -- # uname 00:35:55.402 07:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:55.402 07:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 170107 00:35:55.402 07:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:55.402 07:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:55.402 07:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 170107' 00:35:55.402 killing process with pid 170107 00:35:55.402 07:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@965 -- # kill 170107 00:35:55.402 [2024-07-12 07:46:29.134964] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:55.402 [2024-07-12 07:46:29.135018] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:55.402 [2024-07-12 07:46:29.135047] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:55.402 [2024-07-12 07:46:29.135054] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:35:55.402 07:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # wait 170107 00:35:55.402 [2024-07-12 07:46:29.158565] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:55.660 07:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:35:55.660 00:35:55.661 real 0m14.043s 00:35:55.661 user 0m25.381s 00:35:55.661 sys 0m2.629s 00:35:55.661 07:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:55.661 07:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:55.661 ************************************ 00:35:55.661 END TEST raid_superblock_test_md_separate 00:35:55.661 ************************************ 00:35:55.661 07:46:29 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' true = true ']' 00:35:55.661 07:46:29 bdev_raid -- bdev/bdev_raid.sh@908 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:35:55.661 07:46:29 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:35:55.661 07:46:29 bdev_raid -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:35:55.661 07:46:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:55.661 ************************************ 00:35:55.661 START TEST raid_rebuild_test_sb_md_separate 00:35:55.661 ************************************ 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false true 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local verify=true 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local strip_size 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local create_arg 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local data_offset 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # raid_pid=170608 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # waitforlisten 170608 /var/tmp/spdk-raid.sock 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@827 -- # '[' 
-z 170608 ']' 00:35:55.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:55.661 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:55.919 [2024-07-12 07:46:29.543165] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:55.919 [2024-07-12 07:46:29.543357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170608 ] 00:35:55.919 I/O size of 3145728 is greater than zero copy threshold (65536). 00:35:55.919 Zero copy mechanism will not be used. 00:35:55.919 [2024-07-12 07:46:29.686579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:55.919 [2024-07-12 07:46:29.733751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:55.919 [2024-07-12 07:46:29.777828] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:56.176 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:56.176 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # return 0 00:35:56.176 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:56.177 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:35:56.177 BaseBdev1_malloc 00:35:56.177 07:46:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:56.435 [2024-07-12 07:46:30.159545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:56.435 [2024-07-12 07:46:30.159632] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:56.435 [2024-07-12 07:46:30.159671] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:35:56.435 [2024-07-12 07:46:30.159718] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:56.435 [2024-07-12 07:46:30.161763] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:56.435 [2024-07-12 07:46:30.161812] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:56.435 BaseBdev1 00:35:56.435 07:46:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:56.435 07:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:35:56.695 BaseBdev2_malloc 00:35:56.695 07:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:35:56.954 [2024-07-12 07:46:30.605621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:56.954 [2024-07-12 07:46:30.605680] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:56.954 [2024-07-12 07:46:30.605712] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:35:56.954 [2024-07-12 07:46:30.605747] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:56.954 [2024-07-12 07:46:30.607714] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:56.954 [2024-07-12 07:46:30.607761] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:56.954 BaseBdev2 00:35:56.954 07:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:35:57.212 spare_malloc 00:35:57.212 07:46:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:35:57.212 spare_delay 00:35:57.212 07:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:57.470 [2024-07-12 07:46:31.279297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:57.470 [2024-07-12 07:46:31.279380] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:57.470 [2024-07-12 07:46:31.279439] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:35:57.470 [2024-07-12 07:46:31.279483] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:57.470 [2024-07-12 07:46:31.282016] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:57.470 [2024-07-12 07:46:31.282097] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:57.470 spare 00:35:57.470 07:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:35:57.728 [2024-07-12 07:46:31.499426] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:57.728 [2024-07-12 07:46:31.501942] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:57.728 [2024-07-12 07:46:31.502167] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:35:57.728 [2024-07-12 07:46:31.502180] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:57.728 
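
The fixture setup traced above layers malloc disks under passthru bdevs (with an extra delay bdev under the spare) before assembling the raid1 array. A condensed sketch of the same RPC sequence, using the rpc.py path, socket, and arguments from the trace; the loop and its ordering are illustrative:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Two base bdevs: 32 MiB malloc disks with 4096-byte blocks and 32-byte
    # metadata, each wrapped in a passthru bdev, as in the traced test.
    for i in 1 2; do
        $rpc bdev_malloc_create 32 4096 -m 32 -b "BaseBdev${i}_malloc"
        $rpc bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done

    # Spare: malloc -> delay -> passthru, so rebuild I/O to it can be slowed.
    $rpc bdev_malloc_create 32 4096 -m 32 -b spare_malloc
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare

    # Assemble the raid1 array; -s enables the on-disk superblock
    # (create_arg+=' -s' in the trace, since superblock=true).
    $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
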
[2024-07-12 07:46:31.502341] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:35:57.728 [2024-07-12 07:46:31.502492] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:35:57.728 [2024-07-12 07:46:31.502512] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:35:57.728 [2024-07-12 07:46:31.502622] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:57.728 07:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:57.728 07:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:57.728 07:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:57.728 07:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:57.728 07:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:57.728 07:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:57.728 07:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:57.728 07:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:57.728 07:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:57.728 07:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:57.728 07:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:57.728 07:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:57.987 07:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:57.987 "name": "raid_bdev1", 00:35:57.987 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:35:57.987 "strip_size_kb": 0, 00:35:57.987 "state": "online", 00:35:57.987 "raid_level": "raid1", 00:35:57.988 "superblock": true, 00:35:57.988 "num_base_bdevs": 2, 00:35:57.988 "num_base_bdevs_discovered": 2, 00:35:57.988 "num_base_bdevs_operational": 2, 00:35:57.988 "base_bdevs_list": [ 00:35:57.988 { 00:35:57.988 "name": "BaseBdev1", 00:35:57.988 "uuid": "0b29e50c-47dc-5711-bd1d-e722dc0dce29", 00:35:57.988 "is_configured": true, 00:35:57.988 "data_offset": 256, 00:35:57.988 "data_size": 7936 00:35:57.988 }, 00:35:57.988 { 00:35:57.988 "name": "BaseBdev2", 00:35:57.988 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:35:57.988 "is_configured": true, 00:35:57.988 "data_offset": 256, 00:35:57.988 "data_size": 7936 00:35:57.988 } 00:35:57.988 ] 00:35:57.988 }' 00:35:57.988 07:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:57.988 07:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:58.553 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:35:58.553 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b raid_bdev1 00:35:58.811 [2024-07-12 07:46:32.459747] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:58.811 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:35:58.811 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:58.811 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:35:58.811 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:35:58.811 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:35:58.811 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:35:58.811 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:35:58.811 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:35:58.811 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:58.811 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:35:58.811 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:58.811 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:35:58.811 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:58.811 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:35:58.811 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:58.811 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:58.811 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:35:59.070 [2024-07-12 07:46:32.815643] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:35:59.070 /dev/nbd0 00:35:59.070 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:59.070 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:59.070 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:35:59.070 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@865 -- # local i 00:35:59.070 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:35:59.070 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:35:59.070 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:35:59.070 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # break 00:35:59.070 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:35:59.070 07:46:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:35:59.070 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:59.070 1+0 records in 00:35:59.070 1+0 records out 00:35:59.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342736 s, 12.0 MB/s 00:35:59.070 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:59.070 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # size=4096 00:35:59.070 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:59.070 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:35:59.070 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # return 0 00:35:59.070 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:59.070 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:59.070 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:35:59.070 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:35:59.070 07:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:36:00.002 7936+0 records in 00:36:00.002 7936+0 records out 00:36:00.002 32505856 bytes (33 MB, 31 MiB) copied, 0.661618 s, 49.1 MB/s 00:36:00.002 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:36:00.002 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:00.002 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:36:00.002 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:00.002 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:36:00.002 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:00.002 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:36:00.002 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:00.002 [2024-07-12 07:46:33.807671] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:00.002 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:00.002 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:00.002 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:00.002 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:00.002 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 
00:36:00.002 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:36:00.002 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:36:00.002 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:36:00.260 [2024-07-12 07:46:33.983269] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:00.260 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:00.260 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:00.260 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:00.260 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:00.260 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:00.260 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:00.260 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:00.260 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:00.260 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:00.260 07:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:00.260 07:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:00.260 07:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:00.519 07:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:00.519 "name": "raid_bdev1", 00:36:00.519 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:00.519 "strip_size_kb": 0, 00:36:00.519 "state": "online", 00:36:00.519 "raid_level": "raid1", 00:36:00.519 "superblock": true, 00:36:00.519 "num_base_bdevs": 2, 00:36:00.519 "num_base_bdevs_discovered": 1, 00:36:00.519 "num_base_bdevs_operational": 1, 00:36:00.519 "base_bdevs_list": [ 00:36:00.519 { 00:36:00.519 "name": null, 00:36:00.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:00.519 "is_configured": false, 00:36:00.519 "data_offset": 256, 00:36:00.519 "data_size": 7936 00:36:00.519 }, 00:36:00.519 { 00:36:00.519 "name": "BaseBdev2", 00:36:00.519 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:00.519 "is_configured": true, 00:36:00.519 "data_offset": 256, 00:36:00.519 "data_size": 7936 00:36:00.519 } 00:36:00.519 ] 00:36:00.519 }' 00:36:00.519 07:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:00.519 07:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:01.086 07:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:01.086 [2024-07-12 07:46:34.827444] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:01.086 [2024-07-12 07:46:34.830629] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c7c0 00:36:01.086 [2024-07-12 07:46:34.833029] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:01.086 07:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # sleep 1 00:36:02.019 07:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:02.019 07:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:02.019 07:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:02.019 07:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:02.019 07:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:02.019 07:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:02.019 07:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:02.276 07:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:02.276 "name": "raid_bdev1", 00:36:02.276 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:02.276 "strip_size_kb": 0, 00:36:02.276 "state": "online", 00:36:02.276 "raid_level": "raid1", 00:36:02.276 "superblock": true, 00:36:02.276 "num_base_bdevs": 2, 00:36:02.276 "num_base_bdevs_discovered": 2, 00:36:02.276 "num_base_bdevs_operational": 2, 00:36:02.276 "process": { 00:36:02.276 "type": "rebuild", 00:36:02.277 "target": "spare", 00:36:02.277 "progress": { 00:36:02.277 "blocks": 3072, 00:36:02.277 "percent": 38 00:36:02.277 } 00:36:02.277 }, 00:36:02.277 "base_bdevs_list": [ 00:36:02.277 { 00:36:02.277 "name": "spare", 00:36:02.277 "uuid": "ab3eafff-7f1a-5309-970f-2875515c47f3", 00:36:02.277 "is_configured": true, 00:36:02.277 "data_offset": 256, 00:36:02.277 "data_size": 7936 00:36:02.277 }, 00:36:02.277 { 00:36:02.277 "name": "BaseBdev2", 00:36:02.277 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:02.277 "is_configured": true, 00:36:02.277 "data_offset": 256, 00:36:02.277 "data_size": 7936 00:36:02.277 } 00:36:02.277 ] 00:36:02.277 }' 00:36:02.277 07:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:02.277 07:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:02.277 07:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:02.535 07:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:02.535 07:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:02.535 [2024-07-12 07:46:36.402423] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:02.794 [2024-07-12 07:46:36.445299] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:36:02.794 [2024-07-12 07:46:36.445412] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:02.794 [2024-07-12 07:46:36.445429] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:02.794 [2024-07-12 07:46:36.445437] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:02.794 07:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:02.794 07:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:02.794 07:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:02.794 07:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:02.794 07:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:02.794 07:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:02.794 07:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:02.794 07:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:02.794 07:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:02.794 07:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:02.794 07:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:02.794 07:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:03.053 07:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:03.053 "name": "raid_bdev1", 00:36:03.053 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:03.053 "strip_size_kb": 0, 00:36:03.053 "state": "online", 00:36:03.053 "raid_level": "raid1", 00:36:03.053 "superblock": true, 00:36:03.053 "num_base_bdevs": 2, 00:36:03.053 "num_base_bdevs_discovered": 1, 00:36:03.053 "num_base_bdevs_operational": 1, 00:36:03.053 "base_bdevs_list": [ 00:36:03.053 { 00:36:03.053 "name": null, 00:36:03.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:03.053 "is_configured": false, 00:36:03.053 "data_offset": 256, 00:36:03.053 "data_size": 7936 00:36:03.053 }, 00:36:03.053 { 00:36:03.053 "name": "BaseBdev2", 00:36:03.053 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:03.053 "is_configured": true, 00:36:03.053 "data_offset": 256, 00:36:03.053 "data_size": 7936 00:36:03.053 } 00:36:03.053 ] 00:36:03.053 }' 00:36:03.053 07:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:03.053 07:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:03.620 07:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:03.620 07:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:03.620 07:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:03.620 07:46:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:03.620 07:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:03.620 07:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:03.620 07:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:03.880 07:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:03.880 "name": "raid_bdev1", 00:36:03.880 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:03.880 "strip_size_kb": 0, 00:36:03.880 "state": "online", 00:36:03.880 "raid_level": "raid1", 00:36:03.880 "superblock": true, 00:36:03.880 "num_base_bdevs": 2, 00:36:03.880 "num_base_bdevs_discovered": 1, 00:36:03.880 "num_base_bdevs_operational": 1, 00:36:03.880 "base_bdevs_list": [ 00:36:03.880 { 00:36:03.880 "name": null, 00:36:03.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:03.880 "is_configured": false, 00:36:03.880 "data_offset": 256, 00:36:03.880 "data_size": 7936 00:36:03.880 }, 00:36:03.880 { 00:36:03.880 "name": "BaseBdev2", 00:36:03.880 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:03.880 "is_configured": true, 00:36:03.880 "data_offset": 256, 00:36:03.880 "data_size": 7936 00:36:03.880 } 00:36:03.880 ] 00:36:03.880 }' 00:36:03.880 07:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:03.880 07:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:03.880 07:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:03.880 07:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:03.880 07:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:04.140 [2024-07-12 07:46:37.915205] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:04.140 [2024-07-12 07:46:37.918208] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:36:04.140 [2024-07-12 07:46:37.920506] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:04.140 07:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:36:05.142 07:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:05.142 07:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:05.142 07:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:05.142 07:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:05.142 07:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:05.142 07:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:05.142 07:46:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:05.408 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:05.408 "name": "raid_bdev1", 00:36:05.408 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:05.408 "strip_size_kb": 0, 00:36:05.408 "state": "online", 00:36:05.408 "raid_level": "raid1", 00:36:05.408 "superblock": true, 00:36:05.408 "num_base_bdevs": 2, 00:36:05.408 "num_base_bdevs_discovered": 2, 00:36:05.408 "num_base_bdevs_operational": 2, 00:36:05.408 "process": { 00:36:05.408 "type": "rebuild", 00:36:05.408 "target": "spare", 00:36:05.408 "progress": { 00:36:05.408 "blocks": 3072, 00:36:05.408 "percent": 38 00:36:05.408 } 00:36:05.408 }, 00:36:05.408 "base_bdevs_list": [ 00:36:05.408 { 00:36:05.408 "name": "spare", 00:36:05.409 "uuid": "ab3eafff-7f1a-5309-970f-2875515c47f3", 00:36:05.409 "is_configured": true, 00:36:05.409 "data_offset": 256, 00:36:05.409 "data_size": 7936 00:36:05.409 }, 00:36:05.409 { 00:36:05.409 "name": "BaseBdev2", 00:36:05.409 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:05.409 "is_configured": true, 00:36:05.409 "data_offset": 256, 00:36:05.409 "data_size": 7936 00:36:05.409 } 00:36:05.409 ] 00:36:05.409 }' 00:36:05.409 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:05.409 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:05.409 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:05.409 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:05.409 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:36:05.409 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:36:05.409 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:36:05.409 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:36:05.409 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:36:05.409 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:36:05.409 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@705 -- # local timeout=1310 00:36:05.409 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:05.409 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:05.409 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:05.409 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:05.409 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:05.409 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:05.409 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:05.409 
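
At this point the test is polling the rebuild it started on raid_bdev1: each pass of verify_raid_bdev_process re-reads the raid JSON and inspects .process.type and .process.target, and the surrounding loop sleeps 1 s between passes until both report "none". A minimal sketch of that polling loop (checking only .process.type), reusing the jq filters from the trace; the function name wait_for_rebuild is illustrative:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    wait_for_rebuild() {   # illustrative name, not a bdev_raid.sh helper
        local name=$1 info ptype
        while :; do
            info=$($rpc bdev_raid_get_bdevs all |
                   jq -r ".[] | select(.name == \"$name\")")
            ptype=$(jq -r '.process.type // "none"' <<<"$info")
            [[ $ptype == none ]] && break   # rebuild finished
            echo "rebuild at $(jq -r '.process.progress.percent' <<<"$info")%"
            sleep 1
        done
    }

    wait_for_rebuild raid_bdev1
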
07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:05.668 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:05.669 "name": "raid_bdev1", 00:36:05.669 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:05.669 "strip_size_kb": 0, 00:36:05.669 "state": "online", 00:36:05.669 "raid_level": "raid1", 00:36:05.669 "superblock": true, 00:36:05.669 "num_base_bdevs": 2, 00:36:05.669 "num_base_bdevs_discovered": 2, 00:36:05.669 "num_base_bdevs_operational": 2, 00:36:05.669 "process": { 00:36:05.669 "type": "rebuild", 00:36:05.669 "target": "spare", 00:36:05.669 "progress": { 00:36:05.669 "blocks": 3840, 00:36:05.669 "percent": 48 00:36:05.669 } 00:36:05.669 }, 00:36:05.669 "base_bdevs_list": [ 00:36:05.669 { 00:36:05.669 "name": "spare", 00:36:05.669 "uuid": "ab3eafff-7f1a-5309-970f-2875515c47f3", 00:36:05.669 "is_configured": true, 00:36:05.669 "data_offset": 256, 00:36:05.669 "data_size": 7936 00:36:05.669 }, 00:36:05.669 { 00:36:05.669 "name": "BaseBdev2", 00:36:05.669 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:05.669 "is_configured": true, 00:36:05.669 "data_offset": 256, 00:36:05.669 "data_size": 7936 00:36:05.669 } 00:36:05.669 ] 00:36:05.669 }' 00:36:05.669 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:05.928 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:05.928 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:05.928 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:05.928 07:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:06.866 07:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:06.866 07:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:06.866 07:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:06.866 07:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:06.866 07:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:06.866 07:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:06.867 07:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:06.867 07:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:07.126 07:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:07.126 "name": "raid_bdev1", 00:36:07.126 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:07.126 "strip_size_kb": 0, 00:36:07.126 "state": "online", 00:36:07.126 "raid_level": "raid1", 00:36:07.126 "superblock": true, 00:36:07.127 "num_base_bdevs": 2, 00:36:07.127 "num_base_bdevs_discovered": 2, 00:36:07.127 "num_base_bdevs_operational": 2, 00:36:07.127 "process": { 00:36:07.127 "type": "rebuild", 00:36:07.127 "target": "spare", 
00:36:07.127 "progress": { 00:36:07.127 "blocks": 7168, 00:36:07.127 "percent": 90 00:36:07.127 } 00:36:07.127 }, 00:36:07.127 "base_bdevs_list": [ 00:36:07.127 { 00:36:07.127 "name": "spare", 00:36:07.127 "uuid": "ab3eafff-7f1a-5309-970f-2875515c47f3", 00:36:07.127 "is_configured": true, 00:36:07.127 "data_offset": 256, 00:36:07.127 "data_size": 7936 00:36:07.127 }, 00:36:07.127 { 00:36:07.127 "name": "BaseBdev2", 00:36:07.127 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:07.127 "is_configured": true, 00:36:07.127 "data_offset": 256, 00:36:07.127 "data_size": 7936 00:36:07.127 } 00:36:07.127 ] 00:36:07.127 }' 00:36:07.127 07:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:07.127 07:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:07.127 07:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:07.127 07:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:07.127 07:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:07.386 [2024-07-12 07:46:41.040041] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:36:07.386 [2024-07-12 07:46:41.040100] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:36:07.386 [2024-07-12 07:46:41.040219] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:08.324 07:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:08.324 07:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:08.324 07:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:08.324 07:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:08.324 07:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:08.324 07:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:08.324 07:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:08.324 07:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:08.324 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:08.324 "name": "raid_bdev1", 00:36:08.324 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:08.324 "strip_size_kb": 0, 00:36:08.324 "state": "online", 00:36:08.324 "raid_level": "raid1", 00:36:08.324 "superblock": true, 00:36:08.324 "num_base_bdevs": 2, 00:36:08.324 "num_base_bdevs_discovered": 2, 00:36:08.324 "num_base_bdevs_operational": 2, 00:36:08.324 "base_bdevs_list": [ 00:36:08.324 { 00:36:08.324 "name": "spare", 00:36:08.324 "uuid": "ab3eafff-7f1a-5309-970f-2875515c47f3", 00:36:08.324 "is_configured": true, 00:36:08.324 "data_offset": 256, 00:36:08.324 "data_size": 7936 00:36:08.324 }, 00:36:08.324 { 00:36:08.324 "name": "BaseBdev2", 00:36:08.324 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:08.324 "is_configured": 
true, 00:36:08.324 "data_offset": 256, 00:36:08.324 "data_size": 7936 00:36:08.324 } 00:36:08.324 ] 00:36:08.324 }' 00:36:08.324 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:08.584 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:36:08.584 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:08.584 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:36:08.584 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # break 00:36:08.584 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:08.584 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:08.584 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:08.584 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:08.584 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:08.584 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:08.584 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:08.844 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:08.844 "name": "raid_bdev1", 00:36:08.844 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:08.844 "strip_size_kb": 0, 00:36:08.844 "state": "online", 00:36:08.844 "raid_level": "raid1", 00:36:08.844 "superblock": true, 00:36:08.844 "num_base_bdevs": 2, 00:36:08.844 "num_base_bdevs_discovered": 2, 00:36:08.844 "num_base_bdevs_operational": 2, 00:36:08.844 "base_bdevs_list": [ 00:36:08.844 { 00:36:08.844 "name": "spare", 00:36:08.844 "uuid": "ab3eafff-7f1a-5309-970f-2875515c47f3", 00:36:08.844 "is_configured": true, 00:36:08.844 "data_offset": 256, 00:36:08.844 "data_size": 7936 00:36:08.844 }, 00:36:08.844 { 00:36:08.844 "name": "BaseBdev2", 00:36:08.844 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:08.844 "is_configured": true, 00:36:08.844 "data_offset": 256, 00:36:08.844 "data_size": 7936 00:36:08.844 } 00:36:08.844 ] 00:36:08.844 }' 00:36:08.844 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:08.844 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:08.844 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:08.844 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:08.844 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:08.844 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:08.844 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:08.844 07:46:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:08.844 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:08.844 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:08.844 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:08.844 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:08.844 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:08.844 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:08.844 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:08.844 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:09.104 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:09.104 "name": "raid_bdev1", 00:36:09.104 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:09.104 "strip_size_kb": 0, 00:36:09.104 "state": "online", 00:36:09.104 "raid_level": "raid1", 00:36:09.104 "superblock": true, 00:36:09.104 "num_base_bdevs": 2, 00:36:09.104 "num_base_bdevs_discovered": 2, 00:36:09.104 "num_base_bdevs_operational": 2, 00:36:09.104 "base_bdevs_list": [ 00:36:09.104 { 00:36:09.104 "name": "spare", 00:36:09.104 "uuid": "ab3eafff-7f1a-5309-970f-2875515c47f3", 00:36:09.104 "is_configured": true, 00:36:09.104 "data_offset": 256, 00:36:09.104 "data_size": 7936 00:36:09.104 }, 00:36:09.104 { 00:36:09.104 "name": "BaseBdev2", 00:36:09.104 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:09.104 "is_configured": true, 00:36:09.104 "data_offset": 256, 00:36:09.104 "data_size": 7936 00:36:09.104 } 00:36:09.104 ] 00:36:09.104 }' 00:36:09.104 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:09.104 07:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:09.670 07:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:09.670 [2024-07-12 07:46:43.543712] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:09.670 [2024-07-12 07:46:43.543740] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:09.670 [2024-07-12 07:46:43.543851] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:09.670 [2024-07-12 07:46:43.543935] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:09.670 [2024-07-12 07:46:43.543946] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:36:09.928 07:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:09.928 07:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # jq length 00:36:10.187 07:46:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:36:10.187 07:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:36:10.187 07:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:36:10.188 07:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:36:10.188 07:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:10.188 07:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:36:10.188 07:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:10.188 07:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:10.188 07:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:10.188 07:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:36:10.188 07:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:10.188 07:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:10.188 07:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:36:10.446 /dev/nbd0 00:36:10.446 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:10.446 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:10.446 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:36:10.446 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@865 -- # local i 00:36:10.446 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:36:10.446 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:36:10.446 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:36:10.446 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # break 00:36:10.446 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:36:10.446 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:36:10.446 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:10.446 1+0 records in 00:36:10.446 1+0 records out 00:36:10.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242322 s, 16.9 MB/s 00:36:10.446 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:10.446 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # size=4096 00:36:10.446 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:10.446 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:36:10.446 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # return 0 00:36:10.446 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:10.446 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:10.446 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:36:10.446 /dev/nbd1 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@865 -- # local i 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # break 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:10.704 1+0 records in 00:36:10.704 1+0 records out 00:36:10.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315831 s, 13.0 MB/s 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # size=4096 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # return 0 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:10.704 07:46:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:10.704 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:36:10.962 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:10.962 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:10.962 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:10.962 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:10.962 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:10.962 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:10.962 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:36:10.962 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:36:10.962 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:10.962 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:36:11.220 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:11.220 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:36:11.220 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:11.220 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:11.220 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:11.220 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:11.220 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:36:11.220 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:36:11.220 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:36:11.220 07:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:11.479 07:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:11.737 [2024-07-12 07:46:45.406087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:11.737 [2024-07-12 07:46:45.406157] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:11.737 [2024-07-12 07:46:45.406194] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:36:11.737 [2024-07-12 07:46:45.406215] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:11.737 [2024-07-12 07:46:45.408306] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:11.737 [2024-07-12 07:46:45.408358] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:11.737 [2024-07-12 07:46:45.408460] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:11.737 [2024-07-12 07:46:45.408526] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:11.737 [2024-07-12 07:46:45.408660] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:11.737 spare 00:36:11.737 07:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:11.737 07:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:11.737 07:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:11.737 07:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:11.737 07:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:11.737 07:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:11.737 07:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:11.737 07:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:11.737 07:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:11.737 07:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:11.737 07:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:11.737 07:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:11.737 [2024-07-12 07:46:45.508735] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:36:11.737 [2024-07-12 07:46:45.508757] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:11.737 [2024-07-12 07:46:45.508912] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:36:11.737 [2024-07-12 07:46:45.509045] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:36:11.737 [2024-07-12 07:46:45.509061] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:36:11.737 [2024-07-12 07:46:45.509125] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:11.996 07:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:11.996 "name": "raid_bdev1", 00:36:11.996 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:11.996 "strip_size_kb": 0, 00:36:11.996 "state": "online", 00:36:11.996 "raid_level": "raid1", 00:36:11.996 "superblock": true, 00:36:11.996 "num_base_bdevs": 2, 00:36:11.996 
"num_base_bdevs_discovered": 2, 00:36:11.996 "num_base_bdevs_operational": 2, 00:36:11.996 "base_bdevs_list": [ 00:36:11.996 { 00:36:11.996 "name": "spare", 00:36:11.996 "uuid": "ab3eafff-7f1a-5309-970f-2875515c47f3", 00:36:11.996 "is_configured": true, 00:36:11.996 "data_offset": 256, 00:36:11.996 "data_size": 7936 00:36:11.996 }, 00:36:11.996 { 00:36:11.996 "name": "BaseBdev2", 00:36:11.996 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:11.996 "is_configured": true, 00:36:11.996 "data_offset": 256, 00:36:11.996 "data_size": 7936 00:36:11.996 } 00:36:11.996 ] 00:36:11.996 }' 00:36:11.996 07:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:11.996 07:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:12.564 07:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:12.564 07:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:12.564 07:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:12.564 07:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:12.564 07:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:12.564 07:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:12.564 07:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:12.824 07:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:12.824 "name": "raid_bdev1", 00:36:12.824 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:12.824 "strip_size_kb": 0, 00:36:12.824 "state": "online", 00:36:12.824 "raid_level": "raid1", 00:36:12.824 "superblock": true, 00:36:12.824 "num_base_bdevs": 2, 00:36:12.824 "num_base_bdevs_discovered": 2, 00:36:12.824 "num_base_bdevs_operational": 2, 00:36:12.824 "base_bdevs_list": [ 00:36:12.824 { 00:36:12.824 "name": "spare", 00:36:12.824 "uuid": "ab3eafff-7f1a-5309-970f-2875515c47f3", 00:36:12.824 "is_configured": true, 00:36:12.824 "data_offset": 256, 00:36:12.824 "data_size": 7936 00:36:12.824 }, 00:36:12.824 { 00:36:12.824 "name": "BaseBdev2", 00:36:12.824 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:12.824 "is_configured": true, 00:36:12.824 "data_offset": 256, 00:36:12.824 "data_size": 7936 00:36:12.824 } 00:36:12.824 ] 00:36:12.824 }' 00:36:12.824 07:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:12.824 07:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:12.824 07:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:12.824 07:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:12.824 07:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:12.824 07:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 
00:36:13.108 07:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:36:13.108 07:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:13.368 [2024-07-12 07:46:47.006429] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:13.368 07:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:13.368 07:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:13.368 07:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:13.368 07:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:13.368 07:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:13.368 07:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:13.368 07:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:13.368 07:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:13.368 07:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:13.368 07:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:13.368 07:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:13.368 07:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:13.368 07:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:13.368 "name": "raid_bdev1", 00:36:13.368 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:13.368 "strip_size_kb": 0, 00:36:13.368 "state": "online", 00:36:13.368 "raid_level": "raid1", 00:36:13.368 "superblock": true, 00:36:13.368 "num_base_bdevs": 2, 00:36:13.368 "num_base_bdevs_discovered": 1, 00:36:13.368 "num_base_bdevs_operational": 1, 00:36:13.368 "base_bdevs_list": [ 00:36:13.368 { 00:36:13.368 "name": null, 00:36:13.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:13.368 "is_configured": false, 00:36:13.368 "data_offset": 256, 00:36:13.368 "data_size": 7936 00:36:13.368 }, 00:36:13.368 { 00:36:13.368 "name": "BaseBdev2", 00:36:13.368 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:13.368 "is_configured": true, 00:36:13.368 "data_offset": 256, 00:36:13.368 "data_size": 7936 00:36:13.368 } 00:36:13.368 ] 00:36:13.368 }' 00:36:13.368 07:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:13.368 07:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:13.938 07:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:14.198 [2024-07-12 07:46:47.962610] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:14.198 [2024-07-12 07:46:47.962727] 
bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:14.198 [2024-07-12 07:46:47.962741] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:36:14.198 [2024-07-12 07:46:47.962803] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:14.198 [2024-07-12 07:46:47.964337] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb4f0 00:36:14.198 [2024-07-12 07:46:47.966243] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:14.198 07:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # sleep 1 00:36:15.136 07:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:15.136 07:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:15.136 07:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:15.136 07:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:15.136 07:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:15.136 07:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:15.136 07:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:15.396 07:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:15.396 "name": "raid_bdev1", 00:36:15.396 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:15.396 "strip_size_kb": 0, 00:36:15.396 "state": "online", 00:36:15.396 "raid_level": "raid1", 00:36:15.396 "superblock": true, 00:36:15.396 "num_base_bdevs": 2, 00:36:15.396 "num_base_bdevs_discovered": 2, 00:36:15.396 "num_base_bdevs_operational": 2, 00:36:15.396 "process": { 00:36:15.396 "type": "rebuild", 00:36:15.396 "target": "spare", 00:36:15.396 "progress": { 00:36:15.396 "blocks": 3072, 00:36:15.396 "percent": 38 00:36:15.396 } 00:36:15.396 }, 00:36:15.396 "base_bdevs_list": [ 00:36:15.396 { 00:36:15.396 "name": "spare", 00:36:15.396 "uuid": "ab3eafff-7f1a-5309-970f-2875515c47f3", 00:36:15.396 "is_configured": true, 00:36:15.396 "data_offset": 256, 00:36:15.396 "data_size": 7936 00:36:15.396 }, 00:36:15.396 { 00:36:15.396 "name": "BaseBdev2", 00:36:15.396 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:15.396 "is_configured": true, 00:36:15.396 "data_offset": 256, 00:36:15.396 "data_size": 7936 00:36:15.396 } 00:36:15.396 ] 00:36:15.396 }' 00:36:15.396 07:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:15.656 07:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:15.656 07:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:15.656 07:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:15.656 07:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:15.915 [2024-07-12 07:46:49.603694] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:15.915 [2024-07-12 07:46:49.674796] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:15.915 [2024-07-12 07:46:49.674863] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:15.915 [2024-07-12 07:46:49.674878] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:15.915 [2024-07-12 07:46:49.674886] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:15.915 07:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:15.915 07:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:15.915 07:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:15.915 07:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:15.915 07:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:15.915 07:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:15.915 07:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:15.915 07:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:15.915 07:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:15.915 07:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:15.915 07:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:15.915 07:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:16.174 07:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:16.174 "name": "raid_bdev1", 00:36:16.174 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:16.174 "strip_size_kb": 0, 00:36:16.174 "state": "online", 00:36:16.174 "raid_level": "raid1", 00:36:16.174 "superblock": true, 00:36:16.174 "num_base_bdevs": 2, 00:36:16.174 "num_base_bdevs_discovered": 1, 00:36:16.174 "num_base_bdevs_operational": 1, 00:36:16.174 "base_bdevs_list": [ 00:36:16.174 { 00:36:16.174 "name": null, 00:36:16.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:16.174 "is_configured": false, 00:36:16.174 "data_offset": 256, 00:36:16.174 "data_size": 7936 00:36:16.174 }, 00:36:16.174 { 00:36:16.174 "name": "BaseBdev2", 00:36:16.174 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:16.174 "is_configured": true, 00:36:16.174 "data_offset": 256, 00:36:16.174 "data_size": 7936 00:36:16.174 } 00:36:16.174 ] 00:36:16.174 }' 00:36:16.174 07:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:16.174 07:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:16.741 07:46:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:17.001 [2024-07-12 07:46:50.732937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:17.001 [2024-07-12 07:46:50.733005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:17.001 [2024-07-12 07:46:50.733042] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:36:17.001 [2024-07-12 07:46:50.733071] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:17.001 [2024-07-12 07:46:50.733248] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:17.001 [2024-07-12 07:46:50.733292] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:17.001 [2024-07-12 07:46:50.733379] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:17.001 [2024-07-12 07:46:50.733393] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:17.001 [2024-07-12 07:46:50.733402] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:36:17.001 [2024-07-12 07:46:50.733443] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:17.001 [2024-07-12 07:46:50.734257] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb830 00:36:17.001 [2024-07-12 07:46:50.736140] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:17.001 spare 00:36:17.001 07:46:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # sleep 1 00:36:17.937 07:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:17.937 07:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:17.937 07:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:17.937 07:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:17.937 07:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:17.937 07:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:17.937 07:46:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:18.195 07:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:18.195 "name": "raid_bdev1", 00:36:18.195 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:18.195 "strip_size_kb": 0, 00:36:18.195 "state": "online", 00:36:18.195 "raid_level": "raid1", 00:36:18.195 "superblock": true, 00:36:18.195 "num_base_bdevs": 2, 00:36:18.195 "num_base_bdevs_discovered": 2, 00:36:18.195 "num_base_bdevs_operational": 2, 00:36:18.195 "process": { 00:36:18.195 "type": "rebuild", 00:36:18.195 "target": "spare", 00:36:18.195 "progress": { 00:36:18.195 "blocks": 3072, 00:36:18.195 "percent": 38 00:36:18.195 } 00:36:18.195 }, 00:36:18.195 "base_bdevs_list": [ 00:36:18.195 { 00:36:18.195 "name": "spare", 00:36:18.195 "uuid": 
"ab3eafff-7f1a-5309-970f-2875515c47f3", 00:36:18.195 "is_configured": true, 00:36:18.195 "data_offset": 256, 00:36:18.195 "data_size": 7936 00:36:18.195 }, 00:36:18.195 { 00:36:18.195 "name": "BaseBdev2", 00:36:18.195 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:18.195 "is_configured": true, 00:36:18.195 "data_offset": 256, 00:36:18.195 "data_size": 7936 00:36:18.195 } 00:36:18.195 ] 00:36:18.195 }' 00:36:18.195 07:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:18.195 07:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:18.195 07:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:18.453 07:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:18.453 07:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:18.712 [2024-07-12 07:46:52.362199] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:18.712 [2024-07-12 07:46:52.444362] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:18.712 [2024-07-12 07:46:52.444440] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:18.712 [2024-07-12 07:46:52.444456] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:18.712 [2024-07-12 07:46:52.444464] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:18.712 07:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:18.712 07:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:18.712 07:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:18.712 07:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:18.712 07:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:18.712 07:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:18.712 07:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:18.712 07:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:18.712 07:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:18.712 07:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:18.712 07:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:18.712 07:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:18.970 07:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:18.970 "name": "raid_bdev1", 00:36:18.970 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:18.970 "strip_size_kb": 0, 00:36:18.970 "state": 
"online", 00:36:18.970 "raid_level": "raid1", 00:36:18.970 "superblock": true, 00:36:18.970 "num_base_bdevs": 2, 00:36:18.970 "num_base_bdevs_discovered": 1, 00:36:18.970 "num_base_bdevs_operational": 1, 00:36:18.970 "base_bdevs_list": [ 00:36:18.970 { 00:36:18.970 "name": null, 00:36:18.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:18.970 "is_configured": false, 00:36:18.970 "data_offset": 256, 00:36:18.970 "data_size": 7936 00:36:18.970 }, 00:36:18.971 { 00:36:18.971 "name": "BaseBdev2", 00:36:18.971 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:18.971 "is_configured": true, 00:36:18.971 "data_offset": 256, 00:36:18.971 "data_size": 7936 00:36:18.971 } 00:36:18.971 ] 00:36:18.971 }' 00:36:18.971 07:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:18.971 07:46:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:19.536 07:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:19.536 07:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:19.536 07:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:19.536 07:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:19.536 07:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:19.536 07:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:19.536 07:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:19.794 07:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:19.795 "name": "raid_bdev1", 00:36:19.795 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:19.795 "strip_size_kb": 0, 00:36:19.795 "state": "online", 00:36:19.795 "raid_level": "raid1", 00:36:19.795 "superblock": true, 00:36:19.795 "num_base_bdevs": 2, 00:36:19.795 "num_base_bdevs_discovered": 1, 00:36:19.795 "num_base_bdevs_operational": 1, 00:36:19.795 "base_bdevs_list": [ 00:36:19.795 { 00:36:19.795 "name": null, 00:36:19.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:19.795 "is_configured": false, 00:36:19.795 "data_offset": 256, 00:36:19.795 "data_size": 7936 00:36:19.795 }, 00:36:19.795 { 00:36:19.795 "name": "BaseBdev2", 00:36:19.795 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:19.795 "is_configured": true, 00:36:19.795 "data_offset": 256, 00:36:19.795 "data_size": 7936 00:36:19.795 } 00:36:19.795 ] 00:36:19.795 }' 00:36:19.795 07:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:19.795 07:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:19.795 07:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:19.795 07:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:19.795 07:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 
00:36:20.053 07:46:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:20.312 [2024-07-12 07:46:54.017645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:20.312 [2024-07-12 07:46:54.017712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:20.312 [2024-07-12 07:46:54.017760] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:36:20.312 [2024-07-12 07:46:54.017780] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:20.312 [2024-07-12 07:46:54.017940] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:20.312 [2024-07-12 07:46:54.017967] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:20.312 [2024-07-12 07:46:54.018043] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:36:20.312 [2024-07-12 07:46:54.018056] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:20.312 [2024-07-12 07:46:54.018063] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:20.312 BaseBdev1 00:36:20.312 07:46:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # sleep 1 00:36:21.246 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:21.246 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:21.246 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:21.246 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:21.246 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:21.246 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:21.246 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:21.246 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:21.246 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:21.247 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:21.247 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:21.247 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:21.504 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:21.504 "name": "raid_bdev1", 00:36:21.504 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:21.504 "strip_size_kb": 0, 00:36:21.504 "state": "online", 00:36:21.504 "raid_level": "raid1", 00:36:21.504 "superblock": true, 00:36:21.504 "num_base_bdevs": 2, 00:36:21.504 "num_base_bdevs_discovered": 1, 00:36:21.504 
"num_base_bdevs_operational": 1, 00:36:21.504 "base_bdevs_list": [ 00:36:21.504 { 00:36:21.504 "name": null, 00:36:21.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:21.504 "is_configured": false, 00:36:21.504 "data_offset": 256, 00:36:21.504 "data_size": 7936 00:36:21.504 }, 00:36:21.504 { 00:36:21.504 "name": "BaseBdev2", 00:36:21.504 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:21.504 "is_configured": true, 00:36:21.504 "data_offset": 256, 00:36:21.504 "data_size": 7936 00:36:21.504 } 00:36:21.504 ] 00:36:21.504 }' 00:36:21.504 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:21.504 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:22.070 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:22.070 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:22.070 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:22.070 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:22.070 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:22.070 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:22.070 07:46:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:22.328 07:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:22.328 "name": "raid_bdev1", 00:36:22.328 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:22.328 "strip_size_kb": 0, 00:36:22.328 "state": "online", 00:36:22.328 "raid_level": "raid1", 00:36:22.328 "superblock": true, 00:36:22.328 "num_base_bdevs": 2, 00:36:22.328 "num_base_bdevs_discovered": 1, 00:36:22.328 "num_base_bdevs_operational": 1, 00:36:22.329 "base_bdevs_list": [ 00:36:22.329 { 00:36:22.329 "name": null, 00:36:22.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:22.329 "is_configured": false, 00:36:22.329 "data_offset": 256, 00:36:22.329 "data_size": 7936 00:36:22.329 }, 00:36:22.329 { 00:36:22.329 "name": "BaseBdev2", 00:36:22.329 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:22.329 "is_configured": true, 00:36:22.329 "data_offset": 256, 00:36:22.329 "data_size": 7936 00:36:22.329 } 00:36:22.329 ] 00:36:22.329 }' 00:36:22.329 07:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:22.329 07:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:22.329 07:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:22.329 07:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:22.329 07:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:22.329 07:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:36:22.329 07:46:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:22.329 07:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:22.329 07:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:22.329 07:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:22.329 07:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:22.329 07:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:22.329 07:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:22.329 07:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:22.329 07:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:36:22.329 07:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:22.587 [2024-07-12 07:46:56.354640] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:22.587 [2024-07-12 07:46:56.354944] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:22.587 [2024-07-12 07:46:56.355052] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:22.587 request: 00:36:22.587 { 00:36:22.587 "raid_bdev": "raid_bdev1", 00:36:22.587 "base_bdev": "BaseBdev1", 00:36:22.587 "method": "bdev_raid_add_base_bdev", 00:36:22.587 "req_id": 1 00:36:22.587 } 00:36:22.587 Got JSON-RPC error response 00:36:22.587 response: 00:36:22.587 { 00:36:22.587 "code": -22, 00:36:22.587 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:36:22.587 } 00:36:22.587 07:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # es=1 00:36:22.587 07:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:22.587 07:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:22.587 07:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:22.587 07:46:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # sleep 1 00:36:23.523 07:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:23.523 07:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:23.523 07:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:23.523 07:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:23.523 07:46:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:23.523 07:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:23.523 07:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:23.523 07:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:23.523 07:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:23.523 07:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:23.523 07:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:23.523 07:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:23.782 07:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:23.782 "name": "raid_bdev1", 00:36:23.782 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:23.782 "strip_size_kb": 0, 00:36:23.782 "state": "online", 00:36:23.782 "raid_level": "raid1", 00:36:23.782 "superblock": true, 00:36:23.782 "num_base_bdevs": 2, 00:36:23.782 "num_base_bdevs_discovered": 1, 00:36:23.782 "num_base_bdevs_operational": 1, 00:36:23.782 "base_bdevs_list": [ 00:36:23.782 { 00:36:23.782 "name": null, 00:36:23.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:23.782 "is_configured": false, 00:36:23.782 "data_offset": 256, 00:36:23.782 "data_size": 7936 00:36:23.782 }, 00:36:23.782 { 00:36:23.782 "name": "BaseBdev2", 00:36:23.782 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:23.782 "is_configured": true, 00:36:23.782 "data_offset": 256, 00:36:23.782 "data_size": 7936 00:36:23.782 } 00:36:23.782 ] 00:36:23.782 }' 00:36:23.782 07:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:23.782 07:46:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:24.718 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:24.718 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:24.718 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:24.718 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:24.718 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:24.718 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:24.718 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:24.718 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:24.718 "name": "raid_bdev1", 00:36:24.718 "uuid": "a829e7eb-0278-406d-ab0f-7897b2fa11e4", 00:36:24.718 "strip_size_kb": 0, 00:36:24.718 "state": "online", 00:36:24.718 "raid_level": "raid1", 00:36:24.718 "superblock": true, 00:36:24.718 "num_base_bdevs": 2, 00:36:24.718 
"num_base_bdevs_discovered": 1, 00:36:24.718 "num_base_bdevs_operational": 1, 00:36:24.718 "base_bdevs_list": [ 00:36:24.718 { 00:36:24.718 "name": null, 00:36:24.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:24.718 "is_configured": false, 00:36:24.718 "data_offset": 256, 00:36:24.718 "data_size": 7936 00:36:24.718 }, 00:36:24.718 { 00:36:24.718 "name": "BaseBdev2", 00:36:24.718 "uuid": "482ac250-e57b-56b5-9a6a-83e02bc55d0c", 00:36:24.718 "is_configured": true, 00:36:24.718 "data_offset": 256, 00:36:24.718 "data_size": 7936 00:36:24.718 } 00:36:24.718 ] 00:36:24.718 }' 00:36:24.718 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:24.718 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:24.718 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:24.978 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:24.978 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # killprocess 170608 00:36:24.978 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@946 -- # '[' -z 170608 ']' 00:36:24.978 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # kill -0 170608 00:36:24.978 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@951 -- # uname 00:36:24.978 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:24.978 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 170608 00:36:24.978 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:24.978 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:24.978 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 170608' 00:36:24.978 killing process with pid 170608 00:36:24.978 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@965 -- # kill 170608 00:36:24.978 Received shutdown signal, test time was about 60.000000 seconds 00:36:24.978 00:36:24.978 Latency(us) 00:36:24.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:24.978 =================================================================================================================== 00:36:24.978 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:24.978 07:46:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # wait 170608 00:36:24.978 [2024-07-12 07:46:58.633809] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:24.978 [2024-07-12 07:46:58.634062] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:24.978 [2024-07-12 07:46:58.634189] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:24.978 [2024-07-12 07:46:58.634261] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:36:24.978 [2024-07-12 07:46:58.695095] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:25.238 07:46:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # return 0 00:36:25.238 00:36:25.238 real 0m29.627s 00:36:25.238 user 0m46.258s 00:36:25.238 sys 0m4.599s 00:36:25.238 07:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:25.238 ************************************ 00:36:25.238 END TEST raid_rebuild_test_sb_md_separate 00:36:25.238 ************************************ 00:36:25.238 07:46:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:25.497 07:46:59 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:36:25.497 07:46:59 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:36:25.497 07:46:59 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:36:25.497 07:46:59 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:25.497 07:46:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:25.497 ************************************ 00:36:25.497 START TEST raid_state_function_test_sb_md_interleaved 00:36:25.497 ************************************ 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:36:25.497 07:46:59 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=171459 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 171459' 00:36:25.497 Process raid pid: 171459 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 171459 /var/tmp/spdk-raid.sock 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 171459 ']' 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:25.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:25.497 07:46:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:25.497 [2024-07-12 07:46:59.260242] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:36:25.497 [2024-07-12 07:46:59.261151] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:25.756 [2024-07-12 07:46:59.404960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:25.756 [2024-07-12 07:46:59.484167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:25.756 [2024-07-12 07:46:59.566067] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:26.325 07:47:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:26.325 07:47:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:36:26.325 07:47:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:36:26.584 [2024-07-12 07:47:00.367748] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:26.584 [2024-07-12 07:47:00.368028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:26.584 [2024-07-12 07:47:00.368135] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:26.584 [2024-07-12 07:47:00.368189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:26.585 07:47:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:26.585 07:47:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:26.585 07:47:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:26.585 07:47:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:26.585 07:47:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:26.585 07:47:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:26.585 07:47:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:26.585 07:47:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:26.585 07:47:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:26.585 07:47:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:26.585 07:47:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:26.585 07:47:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:26.844 07:47:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:26.844 "name": "Existed_Raid", 00:36:26.844 "uuid": "2c63e6ca-dce1-4a5b-8a1a-c10350926cc7", 00:36:26.844 "strip_size_kb": 0, 
00:36:26.844 "state": "configuring", 00:36:26.844 "raid_level": "raid1", 00:36:26.844 "superblock": true, 00:36:26.844 "num_base_bdevs": 2, 00:36:26.844 "num_base_bdevs_discovered": 0, 00:36:26.844 "num_base_bdevs_operational": 2, 00:36:26.844 "base_bdevs_list": [ 00:36:26.844 { 00:36:26.844 "name": "BaseBdev1", 00:36:26.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:26.844 "is_configured": false, 00:36:26.844 "data_offset": 0, 00:36:26.844 "data_size": 0 00:36:26.844 }, 00:36:26.844 { 00:36:26.844 "name": "BaseBdev2", 00:36:26.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:26.844 "is_configured": false, 00:36:26.844 "data_offset": 0, 00:36:26.844 "data_size": 0 00:36:26.844 } 00:36:26.844 ] 00:36:26.844 }' 00:36:26.844 07:47:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:26.844 07:47:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:27.412 07:47:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:27.671 [2024-07-12 07:47:01.459764] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:27.671 [2024-07-12 07:47:01.460008] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:36:27.671 07:47:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:36:27.930 [2024-07-12 07:47:01.691824] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:27.930 [2024-07-12 07:47:01.692075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:27.930 [2024-07-12 07:47:01.692201] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:27.930 [2024-07-12 07:47:01.692270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:27.930 07:47:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:36:28.189 [2024-07-12 07:47:01.884773] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:28.189 BaseBdev1 00:36:28.189 07:47:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:36:28.189 07:47:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:36:28.189 07:47:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:36:28.189 07:47:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local i 00:36:28.189 07:47:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:36:28.189 07:47:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:36:28.189 07:47:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:28.448 07:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:28.707 [ 00:36:28.707 { 00:36:28.707 "name": "BaseBdev1", 00:36:28.707 "aliases": [ 00:36:28.707 "0d4717b1-704e-48aa-af42-eaecd72ccf1f" 00:36:28.707 ], 00:36:28.707 "product_name": "Malloc disk", 00:36:28.707 "block_size": 4128, 00:36:28.707 "num_blocks": 8192, 00:36:28.707 "uuid": "0d4717b1-704e-48aa-af42-eaecd72ccf1f", 00:36:28.707 "md_size": 32, 00:36:28.707 "md_interleave": true, 00:36:28.707 "dif_type": 0, 00:36:28.707 "assigned_rate_limits": { 00:36:28.707 "rw_ios_per_sec": 0, 00:36:28.707 "rw_mbytes_per_sec": 0, 00:36:28.708 "r_mbytes_per_sec": 0, 00:36:28.708 "w_mbytes_per_sec": 0 00:36:28.708 }, 00:36:28.708 "claimed": true, 00:36:28.708 "claim_type": "exclusive_write", 00:36:28.708 "zoned": false, 00:36:28.708 "supported_io_types": { 00:36:28.708 "read": true, 00:36:28.708 "write": true, 00:36:28.708 "unmap": true, 00:36:28.708 "write_zeroes": true, 00:36:28.708 "flush": true, 00:36:28.708 "reset": true, 00:36:28.708 "compare": false, 00:36:28.708 "compare_and_write": false, 00:36:28.708 "abort": true, 00:36:28.708 "nvme_admin": false, 00:36:28.708 "nvme_io": false 00:36:28.708 }, 00:36:28.708 "memory_domains": [ 00:36:28.708 { 00:36:28.708 "dma_device_id": "system", 00:36:28.708 "dma_device_type": 1 00:36:28.708 }, 00:36:28.708 { 00:36:28.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:28.708 "dma_device_type": 2 00:36:28.708 } 00:36:28.708 ], 00:36:28.708 "driver_specific": {} 00:36:28.708 } 00:36:28.708 ] 00:36:28.708 07:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # return 0 00:36:28.708 07:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:28.708 07:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:28.708 07:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:28.708 07:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:28.708 07:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:28.708 07:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:28.708 07:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:28.708 07:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:28.708 07:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:28.708 07:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:28.708 07:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:28.708 07:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:28.967 07:47:02 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:28.967 "name": "Existed_Raid", 00:36:28.967 "uuid": "264f49cd-d004-47e2-9907-2e2dda8ae326", 00:36:28.967 "strip_size_kb": 0, 00:36:28.967 "state": "configuring", 00:36:28.967 "raid_level": "raid1", 00:36:28.967 "superblock": true, 00:36:28.967 "num_base_bdevs": 2, 00:36:28.967 "num_base_bdevs_discovered": 1, 00:36:28.967 "num_base_bdevs_operational": 2, 00:36:28.967 "base_bdevs_list": [ 00:36:28.967 { 00:36:28.967 "name": "BaseBdev1", 00:36:28.967 "uuid": "0d4717b1-704e-48aa-af42-eaecd72ccf1f", 00:36:28.967 "is_configured": true, 00:36:28.967 "data_offset": 256, 00:36:28.967 "data_size": 7936 00:36:28.967 }, 00:36:28.967 { 00:36:28.967 "name": "BaseBdev2", 00:36:28.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:28.967 "is_configured": false, 00:36:28.967 "data_offset": 0, 00:36:28.967 "data_size": 0 00:36:28.967 } 00:36:28.967 ] 00:36:28.967 }' 00:36:28.968 07:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:28.968 07:47:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:29.548 07:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:29.548 [2024-07-12 07:47:03.309094] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:29.548 [2024-07-12 07:47:03.309369] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:36:29.548 07:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:36:29.806 [2024-07-12 07:47:03.481198] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:29.806 [2024-07-12 07:47:03.483711] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:29.806 [2024-07-12 07:47:03.483893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:29.806 07:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:36:29.806 07:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:29.806 07:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:29.806 07:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:29.806 07:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:29.806 07:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:29.806 07:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:29.806 07:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:29.806 07:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:29.806 07:47:03 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:29.806 07:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:29.806 07:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:29.806 07:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:29.806 07:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:30.064 07:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:30.064 "name": "Existed_Raid", 00:36:30.064 "uuid": "b46ac01b-ba22-4a6c-88d4-627b46a55691", 00:36:30.064 "strip_size_kb": 0, 00:36:30.064 "state": "configuring", 00:36:30.064 "raid_level": "raid1", 00:36:30.064 "superblock": true, 00:36:30.064 "num_base_bdevs": 2, 00:36:30.064 "num_base_bdevs_discovered": 1, 00:36:30.064 "num_base_bdevs_operational": 2, 00:36:30.064 "base_bdevs_list": [ 00:36:30.064 { 00:36:30.064 "name": "BaseBdev1", 00:36:30.064 "uuid": "0d4717b1-704e-48aa-af42-eaecd72ccf1f", 00:36:30.064 "is_configured": true, 00:36:30.064 "data_offset": 256, 00:36:30.064 "data_size": 7936 00:36:30.064 }, 00:36:30.064 { 00:36:30.064 "name": "BaseBdev2", 00:36:30.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:30.064 "is_configured": false, 00:36:30.064 "data_offset": 0, 00:36:30.064 "data_size": 0 00:36:30.064 } 00:36:30.064 ] 00:36:30.064 }' 00:36:30.064 07:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:30.064 07:47:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:30.630 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:36:30.630 [2024-07-12 07:47:04.508504] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:30.630 [2024-07-12 07:47:04.508999] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:36:30.630 [2024-07-12 07:47:04.509158] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:36:30.630 [2024-07-12 07:47:04.509421] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:36:30.630 [2024-07-12 07:47:04.509673] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:36:30.630 BaseBdev2 00:36:30.630 [2024-07-12 07:47:04.509834] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:36:30.630 [2024-07-12 07:47:04.510037] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:30.888 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:36:30.888 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:36:30.888 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:36:30.888 07:47:04 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local i 00:36:30.888 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:36:30.888 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:36:30.888 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:30.888 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:31.147 [ 00:36:31.147 { 00:36:31.147 "name": "BaseBdev2", 00:36:31.147 "aliases": [ 00:36:31.147 "92e881aa-a90c-4d50-a7fc-d6ba549bcd21" 00:36:31.147 ], 00:36:31.147 "product_name": "Malloc disk", 00:36:31.147 "block_size": 4128, 00:36:31.147 "num_blocks": 8192, 00:36:31.147 "uuid": "92e881aa-a90c-4d50-a7fc-d6ba549bcd21", 00:36:31.147 "md_size": 32, 00:36:31.147 "md_interleave": true, 00:36:31.147 "dif_type": 0, 00:36:31.147 "assigned_rate_limits": { 00:36:31.147 "rw_ios_per_sec": 0, 00:36:31.147 "rw_mbytes_per_sec": 0, 00:36:31.147 "r_mbytes_per_sec": 0, 00:36:31.147 "w_mbytes_per_sec": 0 00:36:31.147 }, 00:36:31.147 "claimed": true, 00:36:31.147 "claim_type": "exclusive_write", 00:36:31.148 "zoned": false, 00:36:31.148 "supported_io_types": { 00:36:31.148 "read": true, 00:36:31.148 "write": true, 00:36:31.148 "unmap": true, 00:36:31.148 "write_zeroes": true, 00:36:31.148 "flush": true, 00:36:31.148 "reset": true, 00:36:31.148 "compare": false, 00:36:31.148 "compare_and_write": false, 00:36:31.148 "abort": true, 00:36:31.148 "nvme_admin": false, 00:36:31.148 "nvme_io": false 00:36:31.148 }, 00:36:31.148 "memory_domains": [ 00:36:31.148 { 00:36:31.148 "dma_device_id": "system", 00:36:31.148 "dma_device_type": 1 00:36:31.148 }, 00:36:31.148 { 00:36:31.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:31.148 "dma_device_type": 2 00:36:31.148 } 00:36:31.148 ], 00:36:31.148 "driver_specific": {} 00:36:31.148 } 00:36:31.148 ] 00:36:31.148 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # return 0 00:36:31.148 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:36:31.148 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:31.148 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:36:31.148 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:31.148 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:31.148 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:31.148 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:31.148 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:31.148 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:36:31.148 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:31.148 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:31.148 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:31.148 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:31.148 07:47:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:31.406 07:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:31.407 "name": "Existed_Raid", 00:36:31.407 "uuid": "b46ac01b-ba22-4a6c-88d4-627b46a55691", 00:36:31.407 "strip_size_kb": 0, 00:36:31.407 "state": "online", 00:36:31.407 "raid_level": "raid1", 00:36:31.407 "superblock": true, 00:36:31.407 "num_base_bdevs": 2, 00:36:31.407 "num_base_bdevs_discovered": 2, 00:36:31.407 "num_base_bdevs_operational": 2, 00:36:31.407 "base_bdevs_list": [ 00:36:31.407 { 00:36:31.407 "name": "BaseBdev1", 00:36:31.407 "uuid": "0d4717b1-704e-48aa-af42-eaecd72ccf1f", 00:36:31.407 "is_configured": true, 00:36:31.407 "data_offset": 256, 00:36:31.407 "data_size": 7936 00:36:31.407 }, 00:36:31.407 { 00:36:31.407 "name": "BaseBdev2", 00:36:31.407 "uuid": "92e881aa-a90c-4d50-a7fc-d6ba549bcd21", 00:36:31.407 "is_configured": true, 00:36:31.407 "data_offset": 256, 00:36:31.407 "data_size": 7936 00:36:31.407 } 00:36:31.407 ] 00:36:31.407 }' 00:36:31.407 07:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:31.407 07:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:32.029 07:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:36:32.029 07:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:36:32.029 07:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:36:32.029 07:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:32.029 07:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:32.029 07:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:36:32.029 07:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:36:32.029 07:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:36:32.029 [2024-07-12 07:47:05.773018] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:32.029 07:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:32.029 "name": "Existed_Raid", 00:36:32.029 "aliases": [ 00:36:32.029 "b46ac01b-ba22-4a6c-88d4-627b46a55691" 00:36:32.029 ], 00:36:32.029 "product_name": "Raid Volume", 00:36:32.029 "block_size": 4128, 00:36:32.029 "num_blocks": 7936, 
00:36:32.029 "uuid": "b46ac01b-ba22-4a6c-88d4-627b46a55691", 00:36:32.029 "md_size": 32, 00:36:32.029 "md_interleave": true, 00:36:32.029 "dif_type": 0, 00:36:32.029 "assigned_rate_limits": { 00:36:32.029 "rw_ios_per_sec": 0, 00:36:32.029 "rw_mbytes_per_sec": 0, 00:36:32.029 "r_mbytes_per_sec": 0, 00:36:32.029 "w_mbytes_per_sec": 0 00:36:32.029 }, 00:36:32.029 "claimed": false, 00:36:32.029 "zoned": false, 00:36:32.029 "supported_io_types": { 00:36:32.029 "read": true, 00:36:32.029 "write": true, 00:36:32.029 "unmap": false, 00:36:32.029 "write_zeroes": true, 00:36:32.029 "flush": false, 00:36:32.029 "reset": true, 00:36:32.029 "compare": false, 00:36:32.029 "compare_and_write": false, 00:36:32.029 "abort": false, 00:36:32.029 "nvme_admin": false, 00:36:32.029 "nvme_io": false 00:36:32.029 }, 00:36:32.029 "memory_domains": [ 00:36:32.029 { 00:36:32.029 "dma_device_id": "system", 00:36:32.029 "dma_device_type": 1 00:36:32.029 }, 00:36:32.029 { 00:36:32.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:32.029 "dma_device_type": 2 00:36:32.029 }, 00:36:32.029 { 00:36:32.029 "dma_device_id": "system", 00:36:32.029 "dma_device_type": 1 00:36:32.029 }, 00:36:32.029 { 00:36:32.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:32.029 "dma_device_type": 2 00:36:32.029 } 00:36:32.029 ], 00:36:32.029 "driver_specific": { 00:36:32.029 "raid": { 00:36:32.029 "uuid": "b46ac01b-ba22-4a6c-88d4-627b46a55691", 00:36:32.029 "strip_size_kb": 0, 00:36:32.029 "state": "online", 00:36:32.029 "raid_level": "raid1", 00:36:32.029 "superblock": true, 00:36:32.029 "num_base_bdevs": 2, 00:36:32.029 "num_base_bdevs_discovered": 2, 00:36:32.029 "num_base_bdevs_operational": 2, 00:36:32.029 "base_bdevs_list": [ 00:36:32.029 { 00:36:32.029 "name": "BaseBdev1", 00:36:32.029 "uuid": "0d4717b1-704e-48aa-af42-eaecd72ccf1f", 00:36:32.029 "is_configured": true, 00:36:32.029 "data_offset": 256, 00:36:32.029 "data_size": 7936 00:36:32.029 }, 00:36:32.029 { 00:36:32.029 "name": "BaseBdev2", 00:36:32.029 "uuid": "92e881aa-a90c-4d50-a7fc-d6ba549bcd21", 00:36:32.029 "is_configured": true, 00:36:32.029 "data_offset": 256, 00:36:32.029 "data_size": 7936 00:36:32.029 } 00:36:32.029 ] 00:36:32.029 } 00:36:32.029 } 00:36:32.029 }' 00:36:32.029 07:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:32.029 07:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:36:32.029 BaseBdev2' 00:36:32.029 07:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:32.029 07:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:36:32.029 07:47:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:32.333 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:32.333 "name": "BaseBdev1", 00:36:32.333 "aliases": [ 00:36:32.333 "0d4717b1-704e-48aa-af42-eaecd72ccf1f" 00:36:32.333 ], 00:36:32.333 "product_name": "Malloc disk", 00:36:32.333 "block_size": 4128, 00:36:32.333 "num_blocks": 8192, 00:36:32.333 "uuid": "0d4717b1-704e-48aa-af42-eaecd72ccf1f", 00:36:32.333 "md_size": 32, 00:36:32.333 "md_interleave": true, 00:36:32.333 "dif_type": 0, 00:36:32.333 
"assigned_rate_limits": { 00:36:32.333 "rw_ios_per_sec": 0, 00:36:32.333 "rw_mbytes_per_sec": 0, 00:36:32.333 "r_mbytes_per_sec": 0, 00:36:32.333 "w_mbytes_per_sec": 0 00:36:32.333 }, 00:36:32.333 "claimed": true, 00:36:32.333 "claim_type": "exclusive_write", 00:36:32.333 "zoned": false, 00:36:32.333 "supported_io_types": { 00:36:32.333 "read": true, 00:36:32.333 "write": true, 00:36:32.333 "unmap": true, 00:36:32.333 "write_zeroes": true, 00:36:32.333 "flush": true, 00:36:32.333 "reset": true, 00:36:32.333 "compare": false, 00:36:32.333 "compare_and_write": false, 00:36:32.333 "abort": true, 00:36:32.333 "nvme_admin": false, 00:36:32.333 "nvme_io": false 00:36:32.333 }, 00:36:32.333 "memory_domains": [ 00:36:32.333 { 00:36:32.333 "dma_device_id": "system", 00:36:32.333 "dma_device_type": 1 00:36:32.333 }, 00:36:32.333 { 00:36:32.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:32.333 "dma_device_type": 2 00:36:32.333 } 00:36:32.333 ], 00:36:32.333 "driver_specific": {} 00:36:32.333 }' 00:36:32.333 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:32.333 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:32.333 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:36:32.333 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:32.333 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:32.333 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:32.333 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:32.333 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:32.591 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:36:32.591 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:32.591 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:32.591 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:32.591 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:32.591 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:36:32.591 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:32.850 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:32.850 "name": "BaseBdev2", 00:36:32.850 "aliases": [ 00:36:32.850 "92e881aa-a90c-4d50-a7fc-d6ba549bcd21" 00:36:32.850 ], 00:36:32.850 "product_name": "Malloc disk", 00:36:32.850 "block_size": 4128, 00:36:32.850 "num_blocks": 8192, 00:36:32.850 "uuid": "92e881aa-a90c-4d50-a7fc-d6ba549bcd21", 00:36:32.850 "md_size": 32, 00:36:32.850 "md_interleave": true, 00:36:32.850 "dif_type": 0, 00:36:32.850 "assigned_rate_limits": { 00:36:32.850 "rw_ios_per_sec": 0, 00:36:32.850 "rw_mbytes_per_sec": 0, 00:36:32.850 "r_mbytes_per_sec": 
0, 00:36:32.850 "w_mbytes_per_sec": 0 00:36:32.850 }, 00:36:32.850 "claimed": true, 00:36:32.850 "claim_type": "exclusive_write", 00:36:32.850 "zoned": false, 00:36:32.850 "supported_io_types": { 00:36:32.850 "read": true, 00:36:32.850 "write": true, 00:36:32.850 "unmap": true, 00:36:32.850 "write_zeroes": true, 00:36:32.850 "flush": true, 00:36:32.850 "reset": true, 00:36:32.850 "compare": false, 00:36:32.850 "compare_and_write": false, 00:36:32.850 "abort": true, 00:36:32.850 "nvme_admin": false, 00:36:32.850 "nvme_io": false 00:36:32.850 }, 00:36:32.850 "memory_domains": [ 00:36:32.850 { 00:36:32.850 "dma_device_id": "system", 00:36:32.850 "dma_device_type": 1 00:36:32.850 }, 00:36:32.850 { 00:36:32.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:32.850 "dma_device_type": 2 00:36:32.850 } 00:36:32.850 ], 00:36:32.850 "driver_specific": {} 00:36:32.850 }' 00:36:32.850 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:32.850 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:32.850 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:36:32.850 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:32.850 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:32.850 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:32.850 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:33.108 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:33.108 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:36:33.108 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:33.108 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:33.108 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:33.108 07:47:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:36:33.367 [2024-07-12 07:47:07.045121] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:33.367 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:36:33.367 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:36:33.367 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:36:33.367 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:36:33.367 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:36:33.367 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:36:33.367 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:33.367 
07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:33.367 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:33.367 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:33.367 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:33.367 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:33.367 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:33.367 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:33.367 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:33.367 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:33.367 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:33.625 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:33.625 "name": "Existed_Raid", 00:36:33.625 "uuid": "b46ac01b-ba22-4a6c-88d4-627b46a55691", 00:36:33.625 "strip_size_kb": 0, 00:36:33.625 "state": "online", 00:36:33.625 "raid_level": "raid1", 00:36:33.625 "superblock": true, 00:36:33.625 "num_base_bdevs": 2, 00:36:33.625 "num_base_bdevs_discovered": 1, 00:36:33.625 "num_base_bdevs_operational": 1, 00:36:33.625 "base_bdevs_list": [ 00:36:33.625 { 00:36:33.625 "name": null, 00:36:33.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:33.625 "is_configured": false, 00:36:33.625 "data_offset": 256, 00:36:33.626 "data_size": 7936 00:36:33.626 }, 00:36:33.626 { 00:36:33.626 "name": "BaseBdev2", 00:36:33.626 "uuid": "92e881aa-a90c-4d50-a7fc-d6ba549bcd21", 00:36:33.626 "is_configured": true, 00:36:33.626 "data_offset": 256, 00:36:33.626 "data_size": 7936 00:36:33.626 } 00:36:33.626 ] 00:36:33.626 }' 00:36:33.626 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:33.626 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:34.194 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:36:34.194 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:34.194 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:34.194 07:47:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:36:34.452 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:36:34.452 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:34.452 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:36:34.711 [2024-07-12 07:47:08.345727] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:34.711 [2024-07-12 07:47:08.345984] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:34.711 [2024-07-12 07:47:08.368062] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:34.711 [2024-07-12 07:47:08.369629] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:34.711 [2024-07-12 07:47:08.370037] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:36:34.711 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:36:34.711 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:34.711 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:34.711 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:36:34.970 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:36:34.970 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:36:34.970 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:36:34.970 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 171459 00:36:34.970 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 171459 ']' 00:36:34.970 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 171459 00:36:34.970 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:36:34.970 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:34.970 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 171459 00:36:34.970 killing process with pid 171459 00:36:34.970 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:34.970 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:34.970 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 171459' 00:36:34.970 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@965 -- # kill 171459 00:36:34.970 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # wait 171459 00:36:34.970 [2024-07-12 07:47:08.666458] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:34.970 [2024-07-12 07:47:08.666532] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:35.229 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:36:35.229 
00:36:35.229 real 0m9.714s 00:36:35.229 user 0m17.147s 00:36:35.229 sys 0m1.699s 00:36:35.229 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:35.229 07:47:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:35.229 ************************************ 00:36:35.229 END TEST raid_state_function_test_sb_md_interleaved 00:36:35.229 ************************************ 00:36:35.229 07:47:08 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:36:35.229 07:47:08 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:36:35.229 07:47:08 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:35.229 07:47:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:35.229 ************************************ 00:36:35.229 START TEST raid_superblock_test_md_interleaved 00:36:35.229 ************************************ 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=171812 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 171812 /var/tmp/spdk-raid.sock 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 171812 ']' 
00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:35.229 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:35.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:35.230 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:35.230 07:47:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:35.230 [2024-07-12 07:47:09.049899] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:36:35.230 [2024-07-12 07:47:09.050253] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171812 ] 00:36:35.489 [2024-07-12 07:47:09.190939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:35.489 [2024-07-12 07:47:09.260093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:35.489 [2024-07-12 07:47:09.342380] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:36.057 07:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:36.057 07:47:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:36:36.057 07:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:36:36.057 07:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:36:36.057 07:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:36:36.057 07:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:36:36.057 07:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:36:36.057 07:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:36.057 07:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:36:36.057 07:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:36.057 07:47:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:36:36.316 malloc1 00:36:36.317 07:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:36.576 [2024-07-12 07:47:10.324961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:36.576 [2024-07-12 07:47:10.325228] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:36.576 [2024-07-12 
07:47:10.325329] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:36:36.576 [2024-07-12 07:47:10.325589] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:36.576 [2024-07-12 07:47:10.328269] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:36.576 [2024-07-12 07:47:10.328441] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:36.576 pt1 00:36:36.576 07:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:36:36.576 07:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:36:36.576 07:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:36:36.576 07:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:36:36.576 07:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:36:36.576 07:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:36.576 07:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:36:36.576 07:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:36.576 07:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:36:36.835 malloc2 00:36:36.835 07:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:37.095 [2024-07-12 07:47:10.804179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:37.095 [2024-07-12 07:47:10.804393] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:37.095 [2024-07-12 07:47:10.804472] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:36:37.095 [2024-07-12 07:47:10.804593] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:37.095 [2024-07-12 07:47:10.807010] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:37.095 [2024-07-12 07:47:10.807173] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:37.095 pt2 00:36:37.095 07:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:36:37.095 07:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:36:37.095 07:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:36:37.355 [2024-07-12 07:47:10.980336] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:37.355 [2024-07-12 07:47:10.982959] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:37.355 [2024-07-12 07:47:10.983324] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 
00:36:37.355 [2024-07-12 07:47:10.983440] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:36:37.355 [2024-07-12 07:47:10.983640] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:36:37.355 [2024-07-12 07:47:10.983869] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:36:37.355 [2024-07-12 07:47:10.983971] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:36:37.355 [2024-07-12 07:47:10.984232] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:37.355 07:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:37.355 07:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:37.355 07:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:37.355 07:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:37.355 07:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:37.355 07:47:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:37.355 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:37.355 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:37.355 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:37.355 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:37.355 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:37.355 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:37.355 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:37.355 "name": "raid_bdev1", 00:36:37.355 "uuid": "dc1c7d45-da39-42a7-aa91-9711b978e06d", 00:36:37.355 "strip_size_kb": 0, 00:36:37.355 "state": "online", 00:36:37.355 "raid_level": "raid1", 00:36:37.355 "superblock": true, 00:36:37.355 "num_base_bdevs": 2, 00:36:37.355 "num_base_bdevs_discovered": 2, 00:36:37.355 "num_base_bdevs_operational": 2, 00:36:37.355 "base_bdevs_list": [ 00:36:37.355 { 00:36:37.355 "name": "pt1", 00:36:37.355 "uuid": "6fa27179-a6ce-5b69-a716-e299a7d56e3e", 00:36:37.355 "is_configured": true, 00:36:37.355 "data_offset": 256, 00:36:37.355 "data_size": 7936 00:36:37.355 }, 00:36:37.355 { 00:36:37.355 "name": "pt2", 00:36:37.355 "uuid": "b47af726-620f-5c49-b55b-794708fc2e8e", 00:36:37.355 "is_configured": true, 00:36:37.355 "data_offset": 256, 00:36:37.355 "data_size": 7936 00:36:37.355 } 00:36:37.355 ] 00:36:37.355 }' 00:36:37.355 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:37.355 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:37.923 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # 
verify_raid_bdev_properties raid_bdev1 00:36:37.923 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:36:37.923 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:36:37.923 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:37.923 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:37.923 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:36:37.923 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:36:37.923 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:38.182 [2024-07-12 07:47:11.924713] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:38.182 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:38.182 "name": "raid_bdev1", 00:36:38.182 "aliases": [ 00:36:38.182 "dc1c7d45-da39-42a7-aa91-9711b978e06d" 00:36:38.182 ], 00:36:38.182 "product_name": "Raid Volume", 00:36:38.182 "block_size": 4128, 00:36:38.182 "num_blocks": 7936, 00:36:38.182 "uuid": "dc1c7d45-da39-42a7-aa91-9711b978e06d", 00:36:38.182 "md_size": 32, 00:36:38.182 "md_interleave": true, 00:36:38.182 "dif_type": 0, 00:36:38.182 "assigned_rate_limits": { 00:36:38.182 "rw_ios_per_sec": 0, 00:36:38.182 "rw_mbytes_per_sec": 0, 00:36:38.182 "r_mbytes_per_sec": 0, 00:36:38.182 "w_mbytes_per_sec": 0 00:36:38.182 }, 00:36:38.182 "claimed": false, 00:36:38.182 "zoned": false, 00:36:38.182 "supported_io_types": { 00:36:38.182 "read": true, 00:36:38.182 "write": true, 00:36:38.182 "unmap": false, 00:36:38.182 "write_zeroes": true, 00:36:38.182 "flush": false, 00:36:38.182 "reset": true, 00:36:38.182 "compare": false, 00:36:38.182 "compare_and_write": false, 00:36:38.182 "abort": false, 00:36:38.182 "nvme_admin": false, 00:36:38.182 "nvme_io": false 00:36:38.182 }, 00:36:38.182 "memory_domains": [ 00:36:38.182 { 00:36:38.182 "dma_device_id": "system", 00:36:38.182 "dma_device_type": 1 00:36:38.182 }, 00:36:38.182 { 00:36:38.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:38.182 "dma_device_type": 2 00:36:38.182 }, 00:36:38.182 { 00:36:38.182 "dma_device_id": "system", 00:36:38.182 "dma_device_type": 1 00:36:38.182 }, 00:36:38.182 { 00:36:38.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:38.182 "dma_device_type": 2 00:36:38.182 } 00:36:38.182 ], 00:36:38.182 "driver_specific": { 00:36:38.182 "raid": { 00:36:38.182 "uuid": "dc1c7d45-da39-42a7-aa91-9711b978e06d", 00:36:38.182 "strip_size_kb": 0, 00:36:38.182 "state": "online", 00:36:38.182 "raid_level": "raid1", 00:36:38.182 "superblock": true, 00:36:38.182 "num_base_bdevs": 2, 00:36:38.182 "num_base_bdevs_discovered": 2, 00:36:38.182 "num_base_bdevs_operational": 2, 00:36:38.182 "base_bdevs_list": [ 00:36:38.182 { 00:36:38.182 "name": "pt1", 00:36:38.182 "uuid": "6fa27179-a6ce-5b69-a716-e299a7d56e3e", 00:36:38.182 "is_configured": true, 00:36:38.182 "data_offset": 256, 00:36:38.182 "data_size": 7936 00:36:38.182 }, 00:36:38.182 { 00:36:38.182 "name": "pt2", 00:36:38.182 "uuid": "b47af726-620f-5c49-b55b-794708fc2e8e", 00:36:38.182 "is_configured": true, 00:36:38.182 "data_offset": 256, 00:36:38.182 "data_size": 7936 
00:36:38.182 } 00:36:38.182 ] 00:36:38.182 } 00:36:38.182 } 00:36:38.182 }' 00:36:38.182 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:38.182 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:36:38.182 pt2' 00:36:38.182 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:38.182 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:38.182 07:47:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:36:38.442 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:38.442 "name": "pt1", 00:36:38.442 "aliases": [ 00:36:38.442 "6fa27179-a6ce-5b69-a716-e299a7d56e3e" 00:36:38.442 ], 00:36:38.442 "product_name": "passthru", 00:36:38.442 "block_size": 4128, 00:36:38.442 "num_blocks": 8192, 00:36:38.442 "uuid": "6fa27179-a6ce-5b69-a716-e299a7d56e3e", 00:36:38.442 "md_size": 32, 00:36:38.442 "md_interleave": true, 00:36:38.442 "dif_type": 0, 00:36:38.442 "assigned_rate_limits": { 00:36:38.442 "rw_ios_per_sec": 0, 00:36:38.442 "rw_mbytes_per_sec": 0, 00:36:38.442 "r_mbytes_per_sec": 0, 00:36:38.442 "w_mbytes_per_sec": 0 00:36:38.442 }, 00:36:38.442 "claimed": true, 00:36:38.442 "claim_type": "exclusive_write", 00:36:38.442 "zoned": false, 00:36:38.442 "supported_io_types": { 00:36:38.442 "read": true, 00:36:38.442 "write": true, 00:36:38.442 "unmap": true, 00:36:38.442 "write_zeroes": true, 00:36:38.442 "flush": true, 00:36:38.442 "reset": true, 00:36:38.442 "compare": false, 00:36:38.442 "compare_and_write": false, 00:36:38.442 "abort": true, 00:36:38.442 "nvme_admin": false, 00:36:38.442 "nvme_io": false 00:36:38.442 }, 00:36:38.442 "memory_domains": [ 00:36:38.442 { 00:36:38.442 "dma_device_id": "system", 00:36:38.442 "dma_device_type": 1 00:36:38.442 }, 00:36:38.442 { 00:36:38.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:38.442 "dma_device_type": 2 00:36:38.442 } 00:36:38.442 ], 00:36:38.442 "driver_specific": { 00:36:38.442 "passthru": { 00:36:38.442 "name": "pt1", 00:36:38.442 "base_bdev_name": "malloc1" 00:36:38.442 } 00:36:38.442 } 00:36:38.442 }' 00:36:38.442 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:38.442 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:38.442 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:36:38.442 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:38.442 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:38.442 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:38.442 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:38.701 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:38.701 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:36:38.701 07:47:12 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:38.701 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:38.701 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:38.701 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:38.701 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:38.701 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:36:38.960 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:38.960 "name": "pt2", 00:36:38.960 "aliases": [ 00:36:38.960 "b47af726-620f-5c49-b55b-794708fc2e8e" 00:36:38.960 ], 00:36:38.961 "product_name": "passthru", 00:36:38.961 "block_size": 4128, 00:36:38.961 "num_blocks": 8192, 00:36:38.961 "uuid": "b47af726-620f-5c49-b55b-794708fc2e8e", 00:36:38.961 "md_size": 32, 00:36:38.961 "md_interleave": true, 00:36:38.961 "dif_type": 0, 00:36:38.961 "assigned_rate_limits": { 00:36:38.961 "rw_ios_per_sec": 0, 00:36:38.961 "rw_mbytes_per_sec": 0, 00:36:38.961 "r_mbytes_per_sec": 0, 00:36:38.961 "w_mbytes_per_sec": 0 00:36:38.961 }, 00:36:38.961 "claimed": true, 00:36:38.961 "claim_type": "exclusive_write", 00:36:38.961 "zoned": false, 00:36:38.961 "supported_io_types": { 00:36:38.961 "read": true, 00:36:38.961 "write": true, 00:36:38.961 "unmap": true, 00:36:38.961 "write_zeroes": true, 00:36:38.961 "flush": true, 00:36:38.961 "reset": true, 00:36:38.961 "compare": false, 00:36:38.961 "compare_and_write": false, 00:36:38.961 "abort": true, 00:36:38.961 "nvme_admin": false, 00:36:38.961 "nvme_io": false 00:36:38.961 }, 00:36:38.961 "memory_domains": [ 00:36:38.961 { 00:36:38.961 "dma_device_id": "system", 00:36:38.961 "dma_device_type": 1 00:36:38.961 }, 00:36:38.961 { 00:36:38.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:38.961 "dma_device_type": 2 00:36:38.961 } 00:36:38.961 ], 00:36:38.961 "driver_specific": { 00:36:38.961 "passthru": { 00:36:38.961 "name": "pt2", 00:36:38.961 "base_bdev_name": "malloc2" 00:36:38.961 } 00:36:38.961 } 00:36:38.961 }' 00:36:38.961 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:38.961 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:38.961 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:36:38.961 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:38.961 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:38.961 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:38.961 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:38.961 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:39.220 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:36:39.220 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:39.220 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:39.220 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:39.220 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:39.220 07:47:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:36:39.220 [2024-07-12 07:47:13.064875] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:39.220 07:47:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=dc1c7d45-da39-42a7-aa91-9711b978e06d 00:36:39.220 07:47:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z dc1c7d45-da39-42a7-aa91-9711b978e06d ']' 00:36:39.220 07:47:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:39.479 [2024-07-12 07:47:13.332703] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:39.479 [2024-07-12 07:47:13.332840] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:39.479 [2024-07-12 07:47:13.333087] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:39.479 [2024-07-12 07:47:13.333284] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:39.479 [2024-07-12 07:47:13.333368] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:36:39.479 07:47:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:39.479 07:47:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:36:39.738 07:47:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:36:39.738 07:47:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:36:39.738 07:47:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:36:39.738 07:47:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:36:39.997 07:47:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:36:39.997 07:47:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:39.997 07:47:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:36:39.997 07:47:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:36:40.257 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:36:40.257 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:36:40.257 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:36:40.257 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:36:40.257 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:40.257 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:40.257 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:40.257 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:40.257 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:40.257 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:40.257 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:40.257 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:36:40.257 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:36:40.515 [2024-07-12 07:47:14.300868] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:36:40.515 [2024-07-12 07:47:14.303398] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:36:40.515 [2024-07-12 07:47:14.303598] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:36:40.515 [2024-07-12 07:47:14.303805] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:36:40.515 [2024-07-12 07:47:14.303928] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:40.515 [2024-07-12 07:47:14.304010] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:36:40.515 request: 00:36:40.515 { 00:36:40.515 "name": "raid_bdev1", 00:36:40.515 "raid_level": "raid1", 00:36:40.515 "base_bdevs": [ 00:36:40.515 "malloc1", 00:36:40.515 "malloc2" 00:36:40.515 ], 00:36:40.515 "superblock": false, 00:36:40.515 "method": "bdev_raid_create", 00:36:40.515 "req_id": 1 00:36:40.515 } 00:36:40.515 Got JSON-RPC error response 00:36:40.515 response: 00:36:40.515 { 00:36:40.515 "code": -17, 00:36:40.515 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:36:40.515 } 00:36:40.515 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:36:40.515 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:40.515 07:47:14 bdev_raid.raid_superblock_test_md_interleaved 
-- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:40.515 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:40.515 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:40.515 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:36:40.773 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:36:40.773 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:36:40.773 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:41.031 [2024-07-12 07:47:14.704878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:41.031 [2024-07-12 07:47:14.705080] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:41.031 [2024-07-12 07:47:14.705146] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:36:41.031 [2024-07-12 07:47:14.705246] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:41.031 [2024-07-12 07:47:14.707633] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:41.031 [2024-07-12 07:47:14.707818] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:41.031 [2024-07-12 07:47:14.707944] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:41.031 [2024-07-12 07:47:14.708049] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:41.031 pt1 00:36:41.031 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:36:41.031 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:41.031 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:41.031 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:41.031 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:41.031 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:41.031 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:41.031 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:41.031 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:41.031 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:41.031 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:41.032 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:41.032 07:47:14 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:41.032 "name": "raid_bdev1", 00:36:41.032 "uuid": "dc1c7d45-da39-42a7-aa91-9711b978e06d", 00:36:41.032 "strip_size_kb": 0, 00:36:41.032 "state": "configuring", 00:36:41.032 "raid_level": "raid1", 00:36:41.032 "superblock": true, 00:36:41.032 "num_base_bdevs": 2, 00:36:41.032 "num_base_bdevs_discovered": 1, 00:36:41.032 "num_base_bdevs_operational": 2, 00:36:41.032 "base_bdevs_list": [ 00:36:41.032 { 00:36:41.032 "name": "pt1", 00:36:41.032 "uuid": "6fa27179-a6ce-5b69-a716-e299a7d56e3e", 00:36:41.032 "is_configured": true, 00:36:41.032 "data_offset": 256, 00:36:41.032 "data_size": 7936 00:36:41.032 }, 00:36:41.032 { 00:36:41.032 "name": null, 00:36:41.032 "uuid": "b47af726-620f-5c49-b55b-794708fc2e8e", 00:36:41.032 "is_configured": false, 00:36:41.032 "data_offset": 256, 00:36:41.032 "data_size": 7936 00:36:41.032 } 00:36:41.032 ] 00:36:41.032 }' 00:36:41.032 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:41.032 07:47:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:41.599 07:47:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:36:41.599 07:47:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:36:41.599 07:47:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:36:41.599 07:47:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:41.858 [2024-07-12 07:47:15.713053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:41.858 [2024-07-12 07:47:15.713283] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:41.858 [2024-07-12 07:47:15.713350] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:36:41.858 [2024-07-12 07:47:15.713453] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:41.858 [2024-07-12 07:47:15.713646] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:41.858 [2024-07-12 07:47:15.713816] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:41.858 [2024-07-12 07:47:15.713910] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:41.858 [2024-07-12 07:47:15.714014] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:41.858 [2024-07-12 07:47:15.714312] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:36:41.858 [2024-07-12 07:47:15.714348] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:36:41.858 [2024-07-12 07:47:15.714438] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:36:41.858 [2024-07-12 07:47:15.714683] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:36:41.858 [2024-07-12 07:47:15.714720] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:36:41.858 [2024-07-12 07:47:15.714839] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:41.858 pt2 00:36:41.858 
07:47:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:36:41.858 07:47:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:36:41.858 07:47:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:41.858 07:47:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:41.858 07:47:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:41.858 07:47:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:42.116 07:47:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:42.116 07:47:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:42.116 07:47:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:42.116 07:47:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:42.116 07:47:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:42.116 07:47:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:42.116 07:47:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:42.116 07:47:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:42.116 07:47:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:42.116 "name": "raid_bdev1", 00:36:42.116 "uuid": "dc1c7d45-da39-42a7-aa91-9711b978e06d", 00:36:42.116 "strip_size_kb": 0, 00:36:42.116 "state": "online", 00:36:42.116 "raid_level": "raid1", 00:36:42.116 "superblock": true, 00:36:42.116 "num_base_bdevs": 2, 00:36:42.116 "num_base_bdevs_discovered": 2, 00:36:42.116 "num_base_bdevs_operational": 2, 00:36:42.116 "base_bdevs_list": [ 00:36:42.116 { 00:36:42.116 "name": "pt1", 00:36:42.116 "uuid": "6fa27179-a6ce-5b69-a716-e299a7d56e3e", 00:36:42.116 "is_configured": true, 00:36:42.116 "data_offset": 256, 00:36:42.116 "data_size": 7936 00:36:42.116 }, 00:36:42.116 { 00:36:42.116 "name": "pt2", 00:36:42.116 "uuid": "b47af726-620f-5c49-b55b-794708fc2e8e", 00:36:42.116 "is_configured": true, 00:36:42.116 "data_offset": 256, 00:36:42.116 "data_size": 7936 00:36:42.116 } 00:36:42.116 ] 00:36:42.116 }' 00:36:42.116 07:47:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:42.116 07:47:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:42.683 07:47:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:36:42.683 07:47:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:36:42.683 07:47:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:36:42.683 07:47:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:42.683 07:47:16 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:42.683 07:47:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:36:42.683 07:47:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:42.683 07:47:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:36:42.941 [2024-07-12 07:47:16.605331] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:42.941 07:47:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:42.941 "name": "raid_bdev1", 00:36:42.941 "aliases": [ 00:36:42.941 "dc1c7d45-da39-42a7-aa91-9711b978e06d" 00:36:42.941 ], 00:36:42.941 "product_name": "Raid Volume", 00:36:42.941 "block_size": 4128, 00:36:42.941 "num_blocks": 7936, 00:36:42.941 "uuid": "dc1c7d45-da39-42a7-aa91-9711b978e06d", 00:36:42.941 "md_size": 32, 00:36:42.941 "md_interleave": true, 00:36:42.941 "dif_type": 0, 00:36:42.941 "assigned_rate_limits": { 00:36:42.941 "rw_ios_per_sec": 0, 00:36:42.941 "rw_mbytes_per_sec": 0, 00:36:42.941 "r_mbytes_per_sec": 0, 00:36:42.941 "w_mbytes_per_sec": 0 00:36:42.941 }, 00:36:42.941 "claimed": false, 00:36:42.941 "zoned": false, 00:36:42.941 "supported_io_types": { 00:36:42.941 "read": true, 00:36:42.941 "write": true, 00:36:42.941 "unmap": false, 00:36:42.941 "write_zeroes": true, 00:36:42.941 "flush": false, 00:36:42.941 "reset": true, 00:36:42.941 "compare": false, 00:36:42.941 "compare_and_write": false, 00:36:42.941 "abort": false, 00:36:42.941 "nvme_admin": false, 00:36:42.941 "nvme_io": false 00:36:42.941 }, 00:36:42.941 "memory_domains": [ 00:36:42.941 { 00:36:42.941 "dma_device_id": "system", 00:36:42.941 "dma_device_type": 1 00:36:42.941 }, 00:36:42.941 { 00:36:42.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:42.941 "dma_device_type": 2 00:36:42.941 }, 00:36:42.941 { 00:36:42.941 "dma_device_id": "system", 00:36:42.941 "dma_device_type": 1 00:36:42.941 }, 00:36:42.941 { 00:36:42.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:42.941 "dma_device_type": 2 00:36:42.941 } 00:36:42.941 ], 00:36:42.941 "driver_specific": { 00:36:42.941 "raid": { 00:36:42.941 "uuid": "dc1c7d45-da39-42a7-aa91-9711b978e06d", 00:36:42.941 "strip_size_kb": 0, 00:36:42.941 "state": "online", 00:36:42.941 "raid_level": "raid1", 00:36:42.941 "superblock": true, 00:36:42.941 "num_base_bdevs": 2, 00:36:42.941 "num_base_bdevs_discovered": 2, 00:36:42.941 "num_base_bdevs_operational": 2, 00:36:42.941 "base_bdevs_list": [ 00:36:42.941 { 00:36:42.941 "name": "pt1", 00:36:42.941 "uuid": "6fa27179-a6ce-5b69-a716-e299a7d56e3e", 00:36:42.941 "is_configured": true, 00:36:42.941 "data_offset": 256, 00:36:42.941 "data_size": 7936 00:36:42.941 }, 00:36:42.941 { 00:36:42.941 "name": "pt2", 00:36:42.941 "uuid": "b47af726-620f-5c49-b55b-794708fc2e8e", 00:36:42.941 "is_configured": true, 00:36:42.941 "data_offset": 256, 00:36:42.941 "data_size": 7936 00:36:42.941 } 00:36:42.941 ] 00:36:42.941 } 00:36:42.941 } 00:36:42.941 }' 00:36:42.941 07:47:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:42.941 07:47:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:36:42.941 pt2' 00:36:42.941 07:47:16 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:42.941 07:47:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:36:42.942 07:47:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:43.200 07:47:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:43.200 "name": "pt1", 00:36:43.200 "aliases": [ 00:36:43.200 "6fa27179-a6ce-5b69-a716-e299a7d56e3e" 00:36:43.200 ], 00:36:43.200 "product_name": "passthru", 00:36:43.200 "block_size": 4128, 00:36:43.200 "num_blocks": 8192, 00:36:43.200 "uuid": "6fa27179-a6ce-5b69-a716-e299a7d56e3e", 00:36:43.201 "md_size": 32, 00:36:43.201 "md_interleave": true, 00:36:43.201 "dif_type": 0, 00:36:43.201 "assigned_rate_limits": { 00:36:43.201 "rw_ios_per_sec": 0, 00:36:43.201 "rw_mbytes_per_sec": 0, 00:36:43.201 "r_mbytes_per_sec": 0, 00:36:43.201 "w_mbytes_per_sec": 0 00:36:43.201 }, 00:36:43.201 "claimed": true, 00:36:43.201 "claim_type": "exclusive_write", 00:36:43.201 "zoned": false, 00:36:43.201 "supported_io_types": { 00:36:43.201 "read": true, 00:36:43.201 "write": true, 00:36:43.201 "unmap": true, 00:36:43.201 "write_zeroes": true, 00:36:43.201 "flush": true, 00:36:43.201 "reset": true, 00:36:43.201 "compare": false, 00:36:43.201 "compare_and_write": false, 00:36:43.201 "abort": true, 00:36:43.201 "nvme_admin": false, 00:36:43.201 "nvme_io": false 00:36:43.201 }, 00:36:43.201 "memory_domains": [ 00:36:43.201 { 00:36:43.201 "dma_device_id": "system", 00:36:43.201 "dma_device_type": 1 00:36:43.201 }, 00:36:43.201 { 00:36:43.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:43.201 "dma_device_type": 2 00:36:43.201 } 00:36:43.201 ], 00:36:43.201 "driver_specific": { 00:36:43.201 "passthru": { 00:36:43.201 "name": "pt1", 00:36:43.201 "base_bdev_name": "malloc1" 00:36:43.201 } 00:36:43.201 } 00:36:43.201 }' 00:36:43.201 07:47:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:43.201 07:47:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:43.201 07:47:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:36:43.201 07:47:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:43.201 07:47:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:43.201 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:43.201 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:43.201 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:43.459 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:36:43.460 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:43.460 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:43.460 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:43.460 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:43.460 07:47:17 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:36:43.460 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:43.719 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:43.719 "name": "pt2", 00:36:43.719 "aliases": [ 00:36:43.719 "b47af726-620f-5c49-b55b-794708fc2e8e" 00:36:43.719 ], 00:36:43.719 "product_name": "passthru", 00:36:43.719 "block_size": 4128, 00:36:43.719 "num_blocks": 8192, 00:36:43.719 "uuid": "b47af726-620f-5c49-b55b-794708fc2e8e", 00:36:43.719 "md_size": 32, 00:36:43.719 "md_interleave": true, 00:36:43.719 "dif_type": 0, 00:36:43.719 "assigned_rate_limits": { 00:36:43.719 "rw_ios_per_sec": 0, 00:36:43.719 "rw_mbytes_per_sec": 0, 00:36:43.719 "r_mbytes_per_sec": 0, 00:36:43.719 "w_mbytes_per_sec": 0 00:36:43.719 }, 00:36:43.719 "claimed": true, 00:36:43.719 "claim_type": "exclusive_write", 00:36:43.719 "zoned": false, 00:36:43.719 "supported_io_types": { 00:36:43.719 "read": true, 00:36:43.719 "write": true, 00:36:43.719 "unmap": true, 00:36:43.719 "write_zeroes": true, 00:36:43.719 "flush": true, 00:36:43.719 "reset": true, 00:36:43.719 "compare": false, 00:36:43.719 "compare_and_write": false, 00:36:43.719 "abort": true, 00:36:43.719 "nvme_admin": false, 00:36:43.719 "nvme_io": false 00:36:43.719 }, 00:36:43.719 "memory_domains": [ 00:36:43.719 { 00:36:43.719 "dma_device_id": "system", 00:36:43.719 "dma_device_type": 1 00:36:43.719 }, 00:36:43.719 { 00:36:43.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:43.719 "dma_device_type": 2 00:36:43.719 } 00:36:43.719 ], 00:36:43.719 "driver_specific": { 00:36:43.719 "passthru": { 00:36:43.719 "name": "pt2", 00:36:43.719 "base_bdev_name": "malloc2" 00:36:43.719 } 00:36:43.719 } 00:36:43.719 }' 00:36:43.719 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:43.719 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:43.719 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:36:43.719 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:43.719 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:43.978 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:43.978 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:43.978 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:43.978 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:36:43.978 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:43.978 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:43.978 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:43.978 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:43.978 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:36:44.237 [2024-07-12 07:47:17.969609] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:44.237 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' dc1c7d45-da39-42a7-aa91-9711b978e06d '!=' dc1c7d45-da39-42a7-aa91-9711b978e06d ']' 00:36:44.237 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:36:44.237 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:36:44.237 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:36:44.237 07:47:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:36:44.497 [2024-07-12 07:47:18.249537] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:36:44.497 07:47:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:44.497 07:47:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:44.497 07:47:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:44.497 07:47:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:44.497 07:47:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:44.497 07:47:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:44.497 07:47:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:44.497 07:47:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:44.497 07:47:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:44.497 07:47:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:44.497 07:47:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:44.497 07:47:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:44.757 07:47:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:44.757 "name": "raid_bdev1", 00:36:44.757 "uuid": "dc1c7d45-da39-42a7-aa91-9711b978e06d", 00:36:44.757 "strip_size_kb": 0, 00:36:44.757 "state": "online", 00:36:44.757 "raid_level": "raid1", 00:36:44.757 "superblock": true, 00:36:44.757 "num_base_bdevs": 2, 00:36:44.757 "num_base_bdevs_discovered": 1, 00:36:44.757 "num_base_bdevs_operational": 1, 00:36:44.757 "base_bdevs_list": [ 00:36:44.757 { 00:36:44.757 "name": null, 00:36:44.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:44.757 "is_configured": false, 00:36:44.757 "data_offset": 256, 00:36:44.757 "data_size": 7936 00:36:44.757 }, 00:36:44.757 { 00:36:44.757 "name": "pt2", 00:36:44.757 "uuid": "b47af726-620f-5c49-b55b-794708fc2e8e", 00:36:44.757 "is_configured": true, 00:36:44.757 "data_offset": 256, 00:36:44.757 "data_size": 7936 00:36:44.757 } 00:36:44.757 ] 00:36:44.757 }' 
00:36:44.757 07:47:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:44.757 07:47:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:45.327 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:45.587 [2024-07-12 07:47:19.341696] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:45.587 [2024-07-12 07:47:19.341819] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:45.587 [2024-07-12 07:47:19.342006] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:45.587 [2024-07-12 07:47:19.342130] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:45.587 [2024-07-12 07:47:19.342216] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:36:45.587 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:45.587 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:36:45.846 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:36:45.846 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:36:45.846 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:36:45.846 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:36:45.846 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:46.104 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:36:46.104 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:36:46.104 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:36:46.104 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:36:46.104 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:36:46.104 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:46.104 [2024-07-12 07:47:19.941780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:46.104 [2024-07-12 07:47:19.941973] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:46.104 [2024-07-12 07:47:19.942064] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:36:46.104 [2024-07-12 07:47:19.942169] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:46.104 [2024-07-12 07:47:19.944093] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:46.104 [2024-07-12 07:47:19.944261] vbdev_passthru.c: 705:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:36:46.104 [2024-07-12 07:47:19.944387] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:46.104 [2024-07-12 07:47:19.944477] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:46.104 [2024-07-12 07:47:19.944555] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:36:46.104 [2024-07-12 07:47:19.944632] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:36:46.104 [2024-07-12 07:47:19.944730] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:36:46.104 [2024-07-12 07:47:19.944834] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:36:46.104 [2024-07-12 07:47:19.944867] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:36:46.104 [2024-07-12 07:47:19.944922] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:46.104 pt2 00:36:46.104 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:46.104 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:46.104 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:46.104 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:46.104 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:46.104 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:46.104 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:46.104 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:46.104 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:46.104 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:46.104 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:46.104 07:47:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:46.363 07:47:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:46.363 "name": "raid_bdev1", 00:36:46.363 "uuid": "dc1c7d45-da39-42a7-aa91-9711b978e06d", 00:36:46.363 "strip_size_kb": 0, 00:36:46.363 "state": "online", 00:36:46.363 "raid_level": "raid1", 00:36:46.363 "superblock": true, 00:36:46.363 "num_base_bdevs": 2, 00:36:46.363 "num_base_bdevs_discovered": 1, 00:36:46.363 "num_base_bdevs_operational": 1, 00:36:46.363 "base_bdevs_list": [ 00:36:46.363 { 00:36:46.363 "name": null, 00:36:46.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:46.363 "is_configured": false, 00:36:46.363 "data_offset": 256, 00:36:46.363 "data_size": 7936 00:36:46.363 }, 00:36:46.363 { 00:36:46.363 "name": "pt2", 00:36:46.363 "uuid": "b47af726-620f-5c49-b55b-794708fc2e8e", 00:36:46.363 "is_configured": true, 00:36:46.363 "data_offset": 
256, 00:36:46.363 "data_size": 7936 00:36:46.363 } 00:36:46.363 ] 00:36:46.363 }' 00:36:46.363 07:47:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:46.363 07:47:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:46.931 07:47:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:47.190 [2024-07-12 07:47:20.858112] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:47.190 [2024-07-12 07:47:20.858230] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:47.190 [2024-07-12 07:47:20.858415] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:47.190 [2024-07-12 07:47:20.858476] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:47.190 [2024-07-12 07:47:20.858553] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:36:47.190 07:47:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:47.190 07:47:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:36:47.190 07:47:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:36:47.190 07:47:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:36:47.190 07:47:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:36:47.190 07:47:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:47.449 [2024-07-12 07:47:21.302173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:47.449 [2024-07-12 07:47:21.302357] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:47.449 [2024-07-12 07:47:21.302415] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:36:47.449 [2024-07-12 07:47:21.302506] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:47.449 [2024-07-12 07:47:21.304467] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:47.449 [2024-07-12 07:47:21.304621] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:47.449 [2024-07-12 07:47:21.304774] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:47.449 [2024-07-12 07:47:21.304863] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:47.449 [2024-07-12 07:47:21.304980] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:36:47.449 [2024-07-12 07:47:21.305097] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:47.449 [2024-07-12 07:47:21.305199] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:36:47.449 [2024-07-12 07:47:21.305330] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:47.449 [2024-07-12 07:47:21.305477] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:36:47.449 [2024-07-12 07:47:21.305562] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:36:47.449 [2024-07-12 07:47:21.305660] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:36:47.449 [2024-07-12 07:47:21.305815] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:36:47.449 [2024-07-12 07:47:21.305849] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:36:47.449 [2024-07-12 07:47:21.305981] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:47.449 pt1 00:36:47.449 07:47:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:36:47.449 07:47:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:47.449 07:47:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:47.449 07:47:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:47.449 07:47:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:47.449 07:47:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:47.449 07:47:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:47.449 07:47:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:47.449 07:47:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:47.449 07:47:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:47.449 07:47:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:47.449 07:47:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:47.449 07:47:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:47.708 07:47:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:47.708 "name": "raid_bdev1", 00:36:47.708 "uuid": "dc1c7d45-da39-42a7-aa91-9711b978e06d", 00:36:47.708 "strip_size_kb": 0, 00:36:47.708 "state": "online", 00:36:47.708 "raid_level": "raid1", 00:36:47.708 "superblock": true, 00:36:47.708 "num_base_bdevs": 2, 00:36:47.708 "num_base_bdevs_discovered": 1, 00:36:47.708 "num_base_bdevs_operational": 1, 00:36:47.708 "base_bdevs_list": [ 00:36:47.708 { 00:36:47.708 "name": null, 00:36:47.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:47.708 "is_configured": false, 00:36:47.708 "data_offset": 256, 00:36:47.708 "data_size": 7936 00:36:47.708 }, 00:36:47.708 { 00:36:47.708 "name": "pt2", 00:36:47.708 "uuid": "b47af726-620f-5c49-b55b-794708fc2e8e", 00:36:47.708 "is_configured": true, 00:36:47.708 "data_offset": 256, 00:36:47.708 "data_size": 7936 00:36:47.708 } 00:36:47.708 ] 00:36:47.708 }' 
00:36:47.708 07:47:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:47.708 07:47:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:48.275 07:47:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:36:48.275 07:47:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:36:48.536 07:47:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:36:48.536 07:47:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:48.536 07:47:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:36:48.794 [2024-07-12 07:47:22.510505] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:48.794 07:47:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' dc1c7d45-da39-42a7-aa91-9711b978e06d '!=' dc1c7d45-da39-42a7-aa91-9711b978e06d ']' 00:36:48.794 07:47:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 171812 00:36:48.794 07:47:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 171812 ']' 00:36:48.794 07:47:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 171812 00:36:48.794 07:47:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:36:48.794 07:47:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:48.794 07:47:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 171812 00:36:48.794 07:47:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:48.794 07:47:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:48.794 07:47:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 171812' 00:36:48.794 killing process with pid 171812 00:36:48.794 07:47:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@965 -- # kill 171812 00:36:48.794 [2024-07-12 07:47:22.558361] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:48.794 07:47:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # wait 171812 00:36:48.794 [2024-07-12 07:47:22.558546] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:48.794 [2024-07-12 07:47:22.558581] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:48.794 [2024-07-12 07:47:22.558598] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:36:48.794 [2024-07-12 07:47:22.581529] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:49.052 07:47:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:36:49.052 00:36:49.052 real 0m13.845s 00:36:49.052 user 0m25.034s 00:36:49.052 
sys 0m2.451s
00:36:49.052 07:47:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable
00:36:49.052 07:47:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:36:49.052 ************************************
00:36:49.052 END TEST raid_superblock_test_md_interleaved
00:36:49.052 ************************************
00:36:49.052 07:47:22 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false
00:36:49.052 07:47:22 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:36:49.052 07:47:22 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable
00:36:49.052 07:47:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:36:49.052 ************************************
00:36:49.052 START TEST raid_rebuild_test_sb_md_interleaved
00:36:49.052 ************************************
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false false
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 ))
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs ))
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ ))
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs ))
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ ))
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs ))
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset
00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved
-- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=172308 00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 172308 /var/tmp/spdk-raid.sock 00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 172308 ']' 00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:49.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:49.052 07:47:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:49.311 [2024-07-12 07:47:22.973325] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:36:49.311 [2024-07-12 07:47:22.973754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172308 ] 00:36:49.311 I/O size of 3145728 is greater than zero copy threshold (65536). 00:36:49.311 Zero copy mechanism will not be used. 
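At this point the harness has forked bdevperf and waitforlisten is polling until the RPC socket answers. A rough equivalent of that launch-and-wait step, assuming the repo layout seen in the trace (the bdevperf flags are copied from bdev_raid.sh@595; rpc_get_methods is used here as a generic liveness probe, whereas the real helper in autotest_common.sh retries in a similar loop with max_retries=100):

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-raid.sock

    # 3M random R/W at queue depth 2, -z keeps the app alive waiting for RPCs,
    # -L bdev_raid turns on the *DEBUG* log lines seen throughout this trace.
    "$spdk"/build/examples/bdevperf -r "$sock" -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!

    # Poll until the app has created the UNIX socket and serves RPCs.
    for _ in $(seq 1 100); do
        "$spdk"/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done

The 3M I/O size is also what triggers the zero-copy notice above: 3145728 bytes exceeds the 65536-byte zero-copy threshold.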
00:36:49.311 [2024-07-12 07:47:23.108291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:49.311 [2024-07-12 07:47:23.150081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:49.311 [2024-07-12 07:47:23.191589] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:50.245 07:47:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:50.245 07:47:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:36:50.245 07:47:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:36:50.245 07:47:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:36:50.503 BaseBdev1_malloc 00:36:50.503 07:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:50.503 [2024-07-12 07:47:24.353449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:50.503 [2024-07-12 07:47:24.353753] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:50.503 [2024-07-12 07:47:24.353823] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:36:50.503 [2024-07-12 07:47:24.353946] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:50.503 [2024-07-12 07:47:24.356108] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:50.503 [2024-07-12 07:47:24.356276] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:50.503 BaseBdev1 00:36:50.503 07:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:36:50.503 07:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:36:51.070 BaseBdev2_malloc 00:36:51.070 07:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:36:51.070 [2024-07-12 07:47:24.857155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:36:51.070 [2024-07-12 07:47:24.857390] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:51.070 [2024-07-12 07:47:24.857568] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:36:51.070 [2024-07-12 07:47:24.857696] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:51.070 [2024-07-12 07:47:24.859787] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:51.070 [2024-07-12 07:47:24.859945] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:51.070 BaseBdev2 00:36:51.070 07:47:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:36:51.328 spare_malloc 
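The trace has now built the two base bdev stacks and created spare_malloc, which gets a delay bdev layered on it just below. Collected in one place, the stack-building commands look like this (all RPCs and arguments are verbatim from the trace; only the $rpc shorthand is added):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # 32 MB malloc bdev, 4096-byte blocks with 32 bytes of interleaved
    # metadata each -- hence "blocklen 4128" (4096 + 32) in the raid logs.
    $rpc bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc
    # Passthru wrapper so the raid claims "BaseBdev1", not the malloc bdev.
    $rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1

    # The spare adds a delay bdev (-r/-t read, -w/-n write latency, in
    # microseconds): reads cost nothing, writes take ~100 ms, which keeps
    # the later rebuild slow enough to observe.
    $rpc bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare

32 MB at 4096-byte blocks gives 8192 blocks per device; minus the 256-block data_offset reserved for the superblock, that leaves the 7936-block raid_bdev1 reported throughout the trace.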
00:36:51.328 07:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:36:51.328 spare_delay 00:36:51.586 07:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:51.586 [2024-07-12 07:47:25.441911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:51.586 [2024-07-12 07:47:25.442107] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:51.586 [2024-07-12 07:47:25.442171] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:36:51.586 [2024-07-12 07:47:25.442287] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:51.586 [2024-07-12 07:47:25.444357] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:51.586 [2024-07-12 07:47:25.444529] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:51.586 spare 00:36:51.586 07:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:36:51.845 [2024-07-12 07:47:25.614004] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:51.845 [2024-07-12 07:47:25.616099] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:51.845 [2024-07-12 07:47:25.616392] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:36:51.845 [2024-07-12 07:47:25.616498] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:36:51.845 [2024-07-12 07:47:25.616626] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:36:51.845 [2024-07-12 07:47:25.616804] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:36:51.845 [2024-07-12 07:47:25.616884] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:36:51.845 [2024-07-12 07:47:25.617007] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:51.845 07:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:51.845 07:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:51.846 07:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:51.846 07:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:51.846 07:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:51.846 07:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:51.846 07:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:51.846 07:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:51.846 07:47:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:51.846 07:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:51.846 07:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:51.846 07:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:52.105 07:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:52.105 "name": "raid_bdev1", 00:36:52.105 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:36:52.105 "strip_size_kb": 0, 00:36:52.105 "state": "online", 00:36:52.105 "raid_level": "raid1", 00:36:52.105 "superblock": true, 00:36:52.105 "num_base_bdevs": 2, 00:36:52.105 "num_base_bdevs_discovered": 2, 00:36:52.105 "num_base_bdevs_operational": 2, 00:36:52.105 "base_bdevs_list": [ 00:36:52.105 { 00:36:52.105 "name": "BaseBdev1", 00:36:52.105 "uuid": "8bb3d98e-145c-5993-ad8c-48c28fb0d628", 00:36:52.105 "is_configured": true, 00:36:52.105 "data_offset": 256, 00:36:52.105 "data_size": 7936 00:36:52.105 }, 00:36:52.105 { 00:36:52.105 "name": "BaseBdev2", 00:36:52.105 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:36:52.105 "is_configured": true, 00:36:52.105 "data_offset": 256, 00:36:52.105 "data_size": 7936 00:36:52.105 } 00:36:52.105 ] 00:36:52.105 }' 00:36:52.105 07:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:52.105 07:47:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:52.674 07:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:36:52.674 07:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:52.934 [2024-07-12 07:47:26.578326] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:52.934 07:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:36:52.934 07:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:36:52.934 07:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:53.193 07:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:36:53.193 07:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:36:53.193 07:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:36:53.193 07:47:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:36:53.453 [2024-07-12 07:47:27.106254] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:53.453 07:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:53.453 07:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:53.453 07:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:53.453 07:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:53.453 07:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:53.453 07:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:53.453 07:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:53.453 07:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:53.453 07:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:53.453 07:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:53.453 07:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:53.453 07:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:53.712 07:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:53.712 "name": "raid_bdev1", 00:36:53.712 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:36:53.712 "strip_size_kb": 0, 00:36:53.712 "state": "online", 00:36:53.712 "raid_level": "raid1", 00:36:53.712 "superblock": true, 00:36:53.712 "num_base_bdevs": 2, 00:36:53.712 "num_base_bdevs_discovered": 1, 00:36:53.712 "num_base_bdevs_operational": 1, 00:36:53.712 "base_bdevs_list": [ 00:36:53.712 { 00:36:53.712 "name": null, 00:36:53.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:53.712 "is_configured": false, 00:36:53.712 "data_offset": 256, 00:36:53.712 "data_size": 7936 00:36:53.712 }, 00:36:53.712 { 00:36:53.712 "name": "BaseBdev2", 00:36:53.712 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:36:53.712 "is_configured": true, 00:36:53.712 "data_offset": 256, 00:36:53.712 "data_size": 7936 00:36:53.712 } 00:36:53.712 ] 00:36:53.712 }' 00:36:53.712 07:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:53.712 07:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:53.971 07:47:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:54.231 [2024-07-12 07:47:28.014439] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:54.231 [2024-07-12 07:47:28.017381] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:36:54.231 [2024-07-12 07:47:28.019561] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:54.231 07:47:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:36:55.168 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:55.168 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
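This is the core of the rebuild test: one mirror leg is pulled, the spare is attached, and the harness polls the raid's process object until the rebuild finishes. Condensed into a plain loop (the two RPCs are the ones traced above; the polling loop and the "done" sentinel are illustrative, the harness instead re-runs its jq filters around one-second sleeps):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Degrade the raid1; it stays online with one operational base bdev.
    $rpc bdev_raid_remove_base_bdev BaseBdev1
    # Attach the spare; bdev_raid starts the rebuild process on it.
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare

    # The "process" object in the dumps below carries type/target/progress.
    while :; do
        pct=$($rpc bdev_raid_get_bdevs all |
              jq -r '.[] | select(.name == "raid_bdev1") | .process.progress.percent // "done"')
        [[ $pct == done ]] && break
        echo "rebuild at ${pct}%"
        sleep 1
    done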
00:36:55.168 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:55.168 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:55.168 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:55.169 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:55.169 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:55.428 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:55.428 "name": "raid_bdev1", 00:36:55.428 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:36:55.428 "strip_size_kb": 0, 00:36:55.428 "state": "online", 00:36:55.428 "raid_level": "raid1", 00:36:55.428 "superblock": true, 00:36:55.428 "num_base_bdevs": 2, 00:36:55.428 "num_base_bdevs_discovered": 2, 00:36:55.428 "num_base_bdevs_operational": 2, 00:36:55.428 "process": { 00:36:55.428 "type": "rebuild", 00:36:55.428 "target": "spare", 00:36:55.428 "progress": { 00:36:55.428 "blocks": 3072, 00:36:55.428 "percent": 38 00:36:55.428 } 00:36:55.428 }, 00:36:55.428 "base_bdevs_list": [ 00:36:55.428 { 00:36:55.428 "name": "spare", 00:36:55.428 "uuid": "a3d25876-2ee7-52de-addf-44699cbdde69", 00:36:55.428 "is_configured": true, 00:36:55.428 "data_offset": 256, 00:36:55.428 "data_size": 7936 00:36:55.428 }, 00:36:55.428 { 00:36:55.428 "name": "BaseBdev2", 00:36:55.428 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:36:55.428 "is_configured": true, 00:36:55.428 "data_offset": 256, 00:36:55.428 "data_size": 7936 00:36:55.428 } 00:36:55.428 ] 00:36:55.428 }' 00:36:55.428 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:55.687 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:55.687 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:55.687 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:55.688 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:55.947 [2024-07-12 07:47:29.629246] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:55.947 [2024-07-12 07:47:29.730093] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:55.947 [2024-07-12 07:47:29.730290] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:55.947 [2024-07-12 07:47:29.730335] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:55.947 [2024-07-12 07:47:29.730408] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:55.947 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:55.947 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:55.947 07:47:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:55.947 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:55.947 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:55.947 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:55.947 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:55.947 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:55.947 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:55.947 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:55.947 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:55.947 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:56.206 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:56.206 "name": "raid_bdev1", 00:36:56.206 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:36:56.206 "strip_size_kb": 0, 00:36:56.206 "state": "online", 00:36:56.206 "raid_level": "raid1", 00:36:56.206 "superblock": true, 00:36:56.206 "num_base_bdevs": 2, 00:36:56.206 "num_base_bdevs_discovered": 1, 00:36:56.206 "num_base_bdevs_operational": 1, 00:36:56.206 "base_bdevs_list": [ 00:36:56.206 { 00:36:56.206 "name": null, 00:36:56.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:56.206 "is_configured": false, 00:36:56.206 "data_offset": 256, 00:36:56.206 "data_size": 7936 00:36:56.206 }, 00:36:56.206 { 00:36:56.206 "name": "BaseBdev2", 00:36:56.206 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:36:56.206 "is_configured": true, 00:36:56.206 "data_offset": 256, 00:36:56.206 "data_size": 7936 00:36:56.206 } 00:36:56.206 ] 00:36:56.206 }' 00:36:56.206 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:56.206 07:47:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:56.775 07:47:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:56.775 07:47:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:56.775 07:47:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:56.775 07:47:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:56.775 07:47:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:56.775 07:47:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:56.775 07:47:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:57.034 07:47:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- 
# raid_bdev_info='{ 00:36:57.034 "name": "raid_bdev1", 00:36:57.034 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:36:57.034 "strip_size_kb": 0, 00:36:57.034 "state": "online", 00:36:57.034 "raid_level": "raid1", 00:36:57.034 "superblock": true, 00:36:57.034 "num_base_bdevs": 2, 00:36:57.034 "num_base_bdevs_discovered": 1, 00:36:57.034 "num_base_bdevs_operational": 1, 00:36:57.034 "base_bdevs_list": [ 00:36:57.034 { 00:36:57.034 "name": null, 00:36:57.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:57.034 "is_configured": false, 00:36:57.034 "data_offset": 256, 00:36:57.034 "data_size": 7936 00:36:57.034 }, 00:36:57.034 { 00:36:57.034 "name": "BaseBdev2", 00:36:57.034 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:36:57.034 "is_configured": true, 00:36:57.034 "data_offset": 256, 00:36:57.034 "data_size": 7936 00:36:57.034 } 00:36:57.034 ] 00:36:57.034 }' 00:36:57.034 07:47:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:57.034 07:47:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:57.034 07:47:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:57.034 07:47:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:57.034 07:47:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:57.307 [2024-07-12 07:47:31.037639] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:57.307 [2024-07-12 07:47:31.039239] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:36:57.307 [2024-07-12 07:47:31.041274] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:57.307 07:47:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:36:58.320 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:58.320 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:58.320 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:58.320 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:58.320 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:58.320 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:58.320 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:58.580 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:58.580 "name": "raid_bdev1", 00:36:58.580 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:36:58.580 "strip_size_kb": 0, 00:36:58.580 "state": "online", 00:36:58.580 "raid_level": "raid1", 00:36:58.580 "superblock": true, 00:36:58.580 "num_base_bdevs": 2, 00:36:58.580 "num_base_bdevs_discovered": 2, 00:36:58.580 "num_base_bdevs_operational": 2, 00:36:58.580 
"process": { 00:36:58.580 "type": "rebuild", 00:36:58.580 "target": "spare", 00:36:58.580 "progress": { 00:36:58.580 "blocks": 2816, 00:36:58.580 "percent": 35 00:36:58.580 } 00:36:58.580 }, 00:36:58.580 "base_bdevs_list": [ 00:36:58.580 { 00:36:58.580 "name": "spare", 00:36:58.580 "uuid": "a3d25876-2ee7-52de-addf-44699cbdde69", 00:36:58.580 "is_configured": true, 00:36:58.580 "data_offset": 256, 00:36:58.580 "data_size": 7936 00:36:58.580 }, 00:36:58.580 { 00:36:58.580 "name": "BaseBdev2", 00:36:58.580 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:36:58.580 "is_configured": true, 00:36:58.580 "data_offset": 256, 00:36:58.580 "data_size": 7936 00:36:58.580 } 00:36:58.580 ] 00:36:58.580 }' 00:36:58.580 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:58.580 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:58.580 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:58.580 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:58.580 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:36:58.580 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:36:58.580 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:36:58.580 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:36:58.580 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:36:58.580 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:36:58.580 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=1363 00:36:58.580 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:58.580 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:58.580 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:58.580 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:58.580 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:58.580 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:58.580 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:58.580 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:58.840 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:58.840 "name": "raid_bdev1", 00:36:58.840 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:36:58.840 "strip_size_kb": 0, 00:36:58.840 "state": "online", 00:36:58.840 "raid_level": "raid1", 00:36:58.840 "superblock": true, 00:36:58.840 "num_base_bdevs": 2, 00:36:58.840 
"num_base_bdevs_discovered": 2, 00:36:58.840 "num_base_bdevs_operational": 2, 00:36:58.840 "process": { 00:36:58.840 "type": "rebuild", 00:36:58.840 "target": "spare", 00:36:58.840 "progress": { 00:36:58.840 "blocks": 3584, 00:36:58.840 "percent": 45 00:36:58.840 } 00:36:58.840 }, 00:36:58.840 "base_bdevs_list": [ 00:36:58.840 { 00:36:58.840 "name": "spare", 00:36:58.840 "uuid": "a3d25876-2ee7-52de-addf-44699cbdde69", 00:36:58.840 "is_configured": true, 00:36:58.840 "data_offset": 256, 00:36:58.840 "data_size": 7936 00:36:58.840 }, 00:36:58.840 { 00:36:58.840 "name": "BaseBdev2", 00:36:58.840 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:36:58.840 "is_configured": true, 00:36:58.840 "data_offset": 256, 00:36:58.840 "data_size": 7936 00:36:58.840 } 00:36:58.840 ] 00:36:58.840 }' 00:36:58.840 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:58.840 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:58.840 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:58.840 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:58.840 07:47:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:59.776 07:47:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:59.776 07:47:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:59.776 07:47:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:59.776 07:47:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:59.776 07:47:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:59.776 07:47:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:59.776 07:47:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:59.776 07:47:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:00.035 07:47:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:00.035 "name": "raid_bdev1", 00:37:00.035 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:37:00.035 "strip_size_kb": 0, 00:37:00.035 "state": "online", 00:37:00.035 "raid_level": "raid1", 00:37:00.035 "superblock": true, 00:37:00.035 "num_base_bdevs": 2, 00:37:00.035 "num_base_bdevs_discovered": 2, 00:37:00.035 "num_base_bdevs_operational": 2, 00:37:00.035 "process": { 00:37:00.035 "type": "rebuild", 00:37:00.035 "target": "spare", 00:37:00.035 "progress": { 00:37:00.035 "blocks": 6912, 00:37:00.035 "percent": 87 00:37:00.035 } 00:37:00.035 }, 00:37:00.035 "base_bdevs_list": [ 00:37:00.035 { 00:37:00.035 "name": "spare", 00:37:00.035 "uuid": "a3d25876-2ee7-52de-addf-44699cbdde69", 00:37:00.035 "is_configured": true, 00:37:00.035 "data_offset": 256, 00:37:00.035 "data_size": 7936 00:37:00.035 }, 00:37:00.035 { 00:37:00.035 "name": "BaseBdev2", 00:37:00.035 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 
00:37:00.035 "is_configured": true, 00:37:00.035 "data_offset": 256, 00:37:00.035 "data_size": 7936 00:37:00.035 } 00:37:00.035 ] 00:37:00.035 }' 00:37:00.035 07:47:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:00.035 07:47:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:00.035 07:47:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:00.295 07:47:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:00.295 07:47:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:37:00.295 [2024-07-12 07:47:34.156929] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:37:00.295 [2024-07-12 07:47:34.157154] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:00.295 [2024-07-12 07:47:34.157365] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:01.233 07:47:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:37:01.233 07:47:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:01.233 07:47:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:01.233 07:47:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:01.233 07:47:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:01.233 07:47:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:01.233 07:47:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:01.233 07:47:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:01.492 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:01.492 "name": "raid_bdev1", 00:37:01.492 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:37:01.492 "strip_size_kb": 0, 00:37:01.492 "state": "online", 00:37:01.492 "raid_level": "raid1", 00:37:01.492 "superblock": true, 00:37:01.492 "num_base_bdevs": 2, 00:37:01.492 "num_base_bdevs_discovered": 2, 00:37:01.492 "num_base_bdevs_operational": 2, 00:37:01.492 "base_bdevs_list": [ 00:37:01.492 { 00:37:01.492 "name": "spare", 00:37:01.492 "uuid": "a3d25876-2ee7-52de-addf-44699cbdde69", 00:37:01.492 "is_configured": true, 00:37:01.492 "data_offset": 256, 00:37:01.492 "data_size": 7936 00:37:01.492 }, 00:37:01.492 { 00:37:01.492 "name": "BaseBdev2", 00:37:01.492 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:37:01.492 "is_configured": true, 00:37:01.492 "data_offset": 256, 00:37:01.492 "data_size": 7936 00:37:01.492 } 00:37:01.492 ] 00:37:01.492 }' 00:37:01.492 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:01.492 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:01.492 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:01.492 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:37:01.492 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:37:01.492 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:01.492 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:01.492 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:01.492 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:01.492 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:01.492 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:01.492 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:01.752 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:01.752 "name": "raid_bdev1", 00:37:01.752 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:37:01.752 "strip_size_kb": 0, 00:37:01.752 "state": "online", 00:37:01.752 "raid_level": "raid1", 00:37:01.752 "superblock": true, 00:37:01.752 "num_base_bdevs": 2, 00:37:01.752 "num_base_bdevs_discovered": 2, 00:37:01.752 "num_base_bdevs_operational": 2, 00:37:01.752 "base_bdevs_list": [ 00:37:01.752 { 00:37:01.752 "name": "spare", 00:37:01.752 "uuid": "a3d25876-2ee7-52de-addf-44699cbdde69", 00:37:01.752 "is_configured": true, 00:37:01.752 "data_offset": 256, 00:37:01.752 "data_size": 7936 00:37:01.752 }, 00:37:01.752 { 00:37:01.752 "name": "BaseBdev2", 00:37:01.752 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:37:01.752 "is_configured": true, 00:37:01.752 "data_offset": 256, 00:37:01.752 "data_size": 7936 00:37:01.752 } 00:37:01.752 ] 00:37:01.752 }' 00:37:01.752 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:01.752 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:01.752 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:01.752 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:01.752 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:01.753 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:01.753 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:01.753 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:01.753 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:01.753 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:01.753 07:47:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:01.753 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:01.753 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:01.753 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:01.753 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:01.753 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:02.012 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:02.012 "name": "raid_bdev1", 00:37:02.012 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:37:02.012 "strip_size_kb": 0, 00:37:02.012 "state": "online", 00:37:02.012 "raid_level": "raid1", 00:37:02.012 "superblock": true, 00:37:02.012 "num_base_bdevs": 2, 00:37:02.012 "num_base_bdevs_discovered": 2, 00:37:02.012 "num_base_bdevs_operational": 2, 00:37:02.012 "base_bdevs_list": [ 00:37:02.012 { 00:37:02.012 "name": "spare", 00:37:02.012 "uuid": "a3d25876-2ee7-52de-addf-44699cbdde69", 00:37:02.012 "is_configured": true, 00:37:02.012 "data_offset": 256, 00:37:02.012 "data_size": 7936 00:37:02.012 }, 00:37:02.012 { 00:37:02.012 "name": "BaseBdev2", 00:37:02.012 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:37:02.012 "is_configured": true, 00:37:02.012 "data_offset": 256, 00:37:02.012 "data_size": 7936 00:37:02.012 } 00:37:02.012 ] 00:37:02.012 }' 00:37:02.012 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:02.012 07:47:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:02.581 07:47:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:02.581 [2024-07-12 07:47:36.412725] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:02.581 [2024-07-12 07:47:36.412854] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:02.581 [2024-07-12 07:47:36.413113] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:02.581 [2024-07-12 07:47:36.413301] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:02.581 [2024-07-12 07:47:36.413384] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:37:02.581 07:47:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:02.581 07:47:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:37:02.840 07:47:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:37:02.841 07:47:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:37:02.841 07:47:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:37:02.841 07:47:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:03.100 07:47:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:03.360 [2024-07-12 07:47:37.140815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:03.360 [2024-07-12 07:47:37.141002] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:03.360 [2024-07-12 07:47:37.141068] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:37:03.360 [2024-07-12 07:47:37.141153] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:03.360 [2024-07-12 07:47:37.143255] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:03.360 [2024-07-12 07:47:37.143410] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:03.360 [2024-07-12 07:47:37.143596] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:03.360 [2024-07-12 07:47:37.143749] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:03.360 [2024-07-12 07:47:37.143988] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:03.360 spare 00:37:03.360 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:03.360 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:03.360 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:03.360 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:03.360 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:03.360 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:03.360 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:03.360 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:03.360 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:03.360 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:03.360 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:03.360 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:03.620 [2024-07-12 07:47:37.244143] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:37:03.620 [2024-07-12 07:47:37.244249] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:03.620 [2024-07-12 07:47:37.244382] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:37:03.620 [2024-07-12 07:47:37.244548] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:37:03.620 [2024-07-12 07:47:37.244678] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:37:03.620 [2024-07-12 07:47:37.244845] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:03.620 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:03.620 "name": "raid_bdev1", 00:37:03.620 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:37:03.620 "strip_size_kb": 0, 00:37:03.620 "state": "online", 00:37:03.620 "raid_level": "raid1", 00:37:03.620 "superblock": true, 00:37:03.620 "num_base_bdevs": 2, 00:37:03.620 "num_base_bdevs_discovered": 2, 00:37:03.620 "num_base_bdevs_operational": 2, 00:37:03.620 "base_bdevs_list": [ 00:37:03.620 { 00:37:03.620 "name": "spare", 00:37:03.620 "uuid": "a3d25876-2ee7-52de-addf-44699cbdde69", 00:37:03.620 "is_configured": true, 00:37:03.620 "data_offset": 256, 00:37:03.620 "data_size": 7936 00:37:03.620 }, 00:37:03.620 { 00:37:03.620 "name": "BaseBdev2", 00:37:03.620 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:37:03.620 "is_configured": true, 00:37:03.620 "data_offset": 256, 00:37:03.620 "data_size": 7936 00:37:03.620 } 00:37:03.620 ] 00:37:03.620 }' 00:37:03.620 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:03.620 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:04.189 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:04.189 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:04.189 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:04.189 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:04.189 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:04.189 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:04.189 07:47:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:04.448 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:04.448 "name": "raid_bdev1", 00:37:04.448 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:37:04.448 "strip_size_kb": 0, 00:37:04.448 "state": "online", 00:37:04.448 "raid_level": "raid1", 00:37:04.449 "superblock": true, 00:37:04.449 "num_base_bdevs": 2, 00:37:04.449 "num_base_bdevs_discovered": 2, 00:37:04.449 "num_base_bdevs_operational": 2, 00:37:04.449 "base_bdevs_list": [ 00:37:04.449 { 00:37:04.449 "name": "spare", 00:37:04.449 "uuid": "a3d25876-2ee7-52de-addf-44699cbdde69", 00:37:04.449 "is_configured": true, 00:37:04.449 "data_offset": 256, 00:37:04.449 "data_size": 7936 00:37:04.449 }, 00:37:04.449 { 00:37:04.449 "name": "BaseBdev2", 00:37:04.449 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:37:04.449 "is_configured": true, 00:37:04.449 "data_offset": 256, 00:37:04.449 "data_size": 7936 00:37:04.449 } 00:37:04.449 ] 00:37:04.449 }' 00:37:04.449 07:47:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:04.449 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:04.449 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:04.449 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:04.449 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:04.449 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:37:04.708 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:37:04.708 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:37:04.969 [2024-07-12 07:47:38.741179] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:04.969 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:04.969 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:04.969 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:04.969 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:04.969 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:04.969 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:04.969 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:04.969 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:04.969 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:04.969 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:04.969 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:04.969 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:05.228 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:05.228 "name": "raid_bdev1", 00:37:05.228 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:37:05.228 "strip_size_kb": 0, 00:37:05.228 "state": "online", 00:37:05.228 "raid_level": "raid1", 00:37:05.228 "superblock": true, 00:37:05.228 "num_base_bdevs": 2, 00:37:05.228 "num_base_bdevs_discovered": 1, 00:37:05.228 "num_base_bdevs_operational": 1, 00:37:05.228 "base_bdevs_list": [ 00:37:05.228 { 00:37:05.228 "name": null, 00:37:05.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:05.228 "is_configured": false, 00:37:05.228 "data_offset": 256, 00:37:05.228 "data_size": 7936 00:37:05.228 }, 
00:37:05.228 { 00:37:05.228 "name": "BaseBdev2", 00:37:05.228 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:37:05.228 "is_configured": true, 00:37:05.228 "data_offset": 256, 00:37:05.228 "data_size": 7936 00:37:05.228 } 00:37:05.228 ] 00:37:05.228 }' 00:37:05.228 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:05.228 07:47:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:05.797 07:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:05.797 [2024-07-12 07:47:39.665368] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:05.797 [2024-07-12 07:47:39.665613] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:05.797 [2024-07-12 07:47:39.665714] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:37:05.797 [2024-07-12 07:47:39.665808] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:05.797 [2024-07-12 07:47:39.667777] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:37:05.797 [2024-07-12 07:47:39.669838] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:06.055 07:47:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:37:06.990 07:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:06.990 07:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:06.990 07:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:06.990 07:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:06.990 07:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:06.990 07:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:06.990 07:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:07.248 07:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:07.248 "name": "raid_bdev1", 00:37:07.248 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:37:07.248 "strip_size_kb": 0, 00:37:07.248 "state": "online", 00:37:07.248 "raid_level": "raid1", 00:37:07.248 "superblock": true, 00:37:07.248 "num_base_bdevs": 2, 00:37:07.248 "num_base_bdevs_discovered": 2, 00:37:07.248 "num_base_bdevs_operational": 2, 00:37:07.248 "process": { 00:37:07.248 "type": "rebuild", 00:37:07.248 "target": "spare", 00:37:07.248 "progress": { 00:37:07.248 "blocks": 3072, 00:37:07.248 "percent": 38 00:37:07.248 } 00:37:07.248 }, 00:37:07.248 "base_bdevs_list": [ 00:37:07.248 { 00:37:07.248 "name": "spare", 00:37:07.248 "uuid": "a3d25876-2ee7-52de-addf-44699cbdde69", 00:37:07.248 "is_configured": true, 00:37:07.248 "data_offset": 256, 00:37:07.248 "data_size": 7936 00:37:07.248 }, 00:37:07.248 { 
00:37:07.248 "name": "BaseBdev2", 00:37:07.248 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:37:07.248 "is_configured": true, 00:37:07.248 "data_offset": 256, 00:37:07.248 "data_size": 7936 00:37:07.248 } 00:37:07.248 ] 00:37:07.248 }' 00:37:07.248 07:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:07.248 07:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:07.248 07:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:07.248 07:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:07.248 07:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:07.507 [2024-07-12 07:47:41.283142] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:07.507 [2024-07-12 07:47:41.378127] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:07.507 [2024-07-12 07:47:41.378307] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:07.507 [2024-07-12 07:47:41.378351] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:07.507 [2024-07-12 07:47:41.378430] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:07.765 07:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:07.766 07:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:07.766 07:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:07.766 07:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:07.766 07:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:07.766 07:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:07.766 07:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:07.766 07:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:07.766 07:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:07.766 07:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:07.766 07:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:07.766 07:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:08.024 07:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:08.024 "name": "raid_bdev1", 00:37:08.024 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:37:08.024 "strip_size_kb": 0, 00:37:08.024 "state": "online", 00:37:08.024 "raid_level": "raid1", 00:37:08.024 "superblock": true, 00:37:08.024 "num_base_bdevs": 2, 
00:37:08.024 "num_base_bdevs_discovered": 1, 00:37:08.024 "num_base_bdevs_operational": 1, 00:37:08.024 "base_bdevs_list": [ 00:37:08.024 { 00:37:08.024 "name": null, 00:37:08.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:08.024 "is_configured": false, 00:37:08.024 "data_offset": 256, 00:37:08.024 "data_size": 7936 00:37:08.024 }, 00:37:08.024 { 00:37:08.024 "name": "BaseBdev2", 00:37:08.024 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:37:08.024 "is_configured": true, 00:37:08.024 "data_offset": 256, 00:37:08.024 "data_size": 7936 00:37:08.024 } 00:37:08.024 ] 00:37:08.024 }' 00:37:08.024 07:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:08.024 07:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:08.591 07:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:08.591 [2024-07-12 07:47:42.453099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:08.591 [2024-07-12 07:47:42.453295] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:08.591 [2024-07-12 07:47:42.453363] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:37:08.591 [2024-07-12 07:47:42.453461] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:08.591 [2024-07-12 07:47:42.453720] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:08.591 [2024-07-12 07:47:42.453833] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:08.591 [2024-07-12 07:47:42.453931] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:08.591 [2024-07-12 07:47:42.453964] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:08.591 [2024-07-12 07:47:42.453990] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:37:08.591 [2024-07-12 07:47:42.454061] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:08.591 [2024-07-12 07:47:42.455438] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:37:08.591 [2024-07-12 07:47:42.457579] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:08.591 spare 00:37:08.591 07:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:37:09.966 07:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:09.966 07:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:09.966 07:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:09.966 07:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:09.966 07:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:09.967 07:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:09.967 07:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:09.967 07:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:09.967 "name": "raid_bdev1", 00:37:09.967 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:37:09.967 "strip_size_kb": 0, 00:37:09.967 "state": "online", 00:37:09.967 "raid_level": "raid1", 00:37:09.967 "superblock": true, 00:37:09.967 "num_base_bdevs": 2, 00:37:09.967 "num_base_bdevs_discovered": 2, 00:37:09.967 "num_base_bdevs_operational": 2, 00:37:09.967 "process": { 00:37:09.967 "type": "rebuild", 00:37:09.967 "target": "spare", 00:37:09.967 "progress": { 00:37:09.967 "blocks": 3072, 00:37:09.967 "percent": 38 00:37:09.967 } 00:37:09.967 }, 00:37:09.967 "base_bdevs_list": [ 00:37:09.967 { 00:37:09.967 "name": "spare", 00:37:09.967 "uuid": "a3d25876-2ee7-52de-addf-44699cbdde69", 00:37:09.967 "is_configured": true, 00:37:09.967 "data_offset": 256, 00:37:09.967 "data_size": 7936 00:37:09.967 }, 00:37:09.967 { 00:37:09.967 "name": "BaseBdev2", 00:37:09.967 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:37:09.967 "is_configured": true, 00:37:09.967 "data_offset": 256, 00:37:09.967 "data_size": 7936 00:37:09.967 } 00:37:09.967 ] 00:37:09.967 }' 00:37:09.967 07:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:09.967 07:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:09.967 07:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:09.967 07:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:09.967 07:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:10.225 [2024-07-12 07:47:44.046709] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:10.225 [2024-07-12 07:47:44.065182] 
bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:10.225 [2024-07-12 07:47:44.065384] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:10.225 [2024-07-12 07:47:44.065432] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:10.225 [2024-07-12 07:47:44.065504] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:10.225 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:10.225 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:10.225 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:10.225 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:10.225 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:10.225 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:10.225 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:10.225 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:10.225 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:10.225 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:10.225 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:10.225 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:10.483 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:10.483 "name": "raid_bdev1", 00:37:10.483 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:37:10.483 "strip_size_kb": 0, 00:37:10.483 "state": "online", 00:37:10.483 "raid_level": "raid1", 00:37:10.483 "superblock": true, 00:37:10.483 "num_base_bdevs": 2, 00:37:10.483 "num_base_bdevs_discovered": 1, 00:37:10.483 "num_base_bdevs_operational": 1, 00:37:10.483 "base_bdevs_list": [ 00:37:10.483 { 00:37:10.483 "name": null, 00:37:10.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:10.483 "is_configured": false, 00:37:10.483 "data_offset": 256, 00:37:10.483 "data_size": 7936 00:37:10.483 }, 00:37:10.483 { 00:37:10.483 "name": "BaseBdev2", 00:37:10.483 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:37:10.483 "is_configured": true, 00:37:10.483 "data_offset": 256, 00:37:10.484 "data_size": 7936 00:37:10.484 } 00:37:10.484 ] 00:37:10.484 }' 00:37:10.484 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:10.484 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:11.051 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:11.051 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
00:37:11.051 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:11.051 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:11.051 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:11.051 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:11.051 07:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:11.311 07:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:11.311 "name": "raid_bdev1", 00:37:11.311 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:37:11.311 "strip_size_kb": 0, 00:37:11.311 "state": "online", 00:37:11.311 "raid_level": "raid1", 00:37:11.311 "superblock": true, 00:37:11.311 "num_base_bdevs": 2, 00:37:11.311 "num_base_bdevs_discovered": 1, 00:37:11.311 "num_base_bdevs_operational": 1, 00:37:11.311 "base_bdevs_list": [ 00:37:11.311 { 00:37:11.311 "name": null, 00:37:11.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:11.311 "is_configured": false, 00:37:11.311 "data_offset": 256, 00:37:11.311 "data_size": 7936 00:37:11.311 }, 00:37:11.311 { 00:37:11.311 "name": "BaseBdev2", 00:37:11.311 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:37:11.311 "is_configured": true, 00:37:11.311 "data_offset": 256, 00:37:11.311 "data_size": 7936 00:37:11.311 } 00:37:11.311 ] 00:37:11.311 }' 00:37:11.311 07:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:11.311 07:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:11.311 07:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:11.311 07:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:11.311 07:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:37:11.571 07:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:11.830 [2024-07-12 07:47:45.648406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:11.830 [2024-07-12 07:47:45.649196] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:11.830 [2024-07-12 07:47:45.649487] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:37:11.830 [2024-07-12 07:47:45.649707] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:11.830 [2024-07-12 07:47:45.650044] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:11.830 [2024-07-12 07:47:45.650266] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:11.830 [2024-07-12 07:47:45.650515] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:37:11.830 [2024-07-12 07:47:45.650611] bdev_raid.c:3562:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:11.830 [2024-07-12 07:47:45.650693] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:11.830 BaseBdev1 00:37:11.830 07:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:37:13.208 07:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:13.208 07:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:13.208 07:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:13.208 07:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:13.208 07:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:13.208 07:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:13.208 07:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:13.208 07:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:13.208 07:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:13.208 07:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:13.208 07:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:13.208 07:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:13.208 07:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:13.208 "name": "raid_bdev1", 00:37:13.208 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:37:13.208 "strip_size_kb": 0, 00:37:13.208 "state": "online", 00:37:13.208 "raid_level": "raid1", 00:37:13.208 "superblock": true, 00:37:13.208 "num_base_bdevs": 2, 00:37:13.208 "num_base_bdevs_discovered": 1, 00:37:13.208 "num_base_bdevs_operational": 1, 00:37:13.208 "base_bdevs_list": [ 00:37:13.208 { 00:37:13.208 "name": null, 00:37:13.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:13.208 "is_configured": false, 00:37:13.208 "data_offset": 256, 00:37:13.208 "data_size": 7936 00:37:13.208 }, 00:37:13.208 { 00:37:13.208 "name": "BaseBdev2", 00:37:13.208 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:37:13.208 "is_configured": true, 00:37:13.208 "data_offset": 256, 00:37:13.208 "data_size": 7936 00:37:13.208 } 00:37:13.208 ] 00:37:13.208 }' 00:37:13.208 07:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:13.208 07:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:13.776 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:13.776 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:13.776 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:37:13.776 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:13.776 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:13.776 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:13.776 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:14.035 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:14.035 "name": "raid_bdev1", 00:37:14.035 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:37:14.035 "strip_size_kb": 0, 00:37:14.035 "state": "online", 00:37:14.035 "raid_level": "raid1", 00:37:14.035 "superblock": true, 00:37:14.035 "num_base_bdevs": 2, 00:37:14.035 "num_base_bdevs_discovered": 1, 00:37:14.035 "num_base_bdevs_operational": 1, 00:37:14.035 "base_bdevs_list": [ 00:37:14.035 { 00:37:14.035 "name": null, 00:37:14.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:14.035 "is_configured": false, 00:37:14.035 "data_offset": 256, 00:37:14.035 "data_size": 7936 00:37:14.035 }, 00:37:14.035 { 00:37:14.035 "name": "BaseBdev2", 00:37:14.035 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:37:14.035 "is_configured": true, 00:37:14.035 "data_offset": 256, 00:37:14.035 "data_size": 7936 00:37:14.035 } 00:37:14.035 ] 00:37:14.035 }' 00:37:14.035 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:14.035 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:14.035 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:14.035 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:14.035 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:14.036 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:37:14.036 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:14.036 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:14.036 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:14.036 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:14.036 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:14.036 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:14.036 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:14.036 07:47:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:14.036 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:37:14.036 07:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:14.295 [2024-07-12 07:47:48.051275] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:14.295 [2024-07-12 07:47:48.051522] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:14.295 [2024-07-12 07:47:48.051613] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:14.295 request: 00:37:14.295 { 00:37:14.295 "raid_bdev": "raid_bdev1", 00:37:14.295 "base_bdev": "BaseBdev1", 00:37:14.295 "method": "bdev_raid_add_base_bdev", 00:37:14.295 "req_id": 1 00:37:14.295 } 00:37:14.295 Got JSON-RPC error response 00:37:14.295 response: 00:37:14.295 { 00:37:14.295 "code": -22, 00:37:14.295 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:37:14.295 } 00:37:14.295 07:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:37:14.295 07:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:14.295 07:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:14.295 07:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:14.295 07:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:37:15.232 07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:15.232 07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:15.232 07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:15.232 07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:15.232 07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:15.232 07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:15.232 07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:15.232 07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:15.232 07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:15.232 07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:15.232 07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:15.232 07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:15.491 
07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:15.491 "name": "raid_bdev1", 00:37:15.491 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:37:15.491 "strip_size_kb": 0, 00:37:15.491 "state": "online", 00:37:15.491 "raid_level": "raid1", 00:37:15.491 "superblock": true, 00:37:15.491 "num_base_bdevs": 2, 00:37:15.491 "num_base_bdevs_discovered": 1, 00:37:15.491 "num_base_bdevs_operational": 1, 00:37:15.491 "base_bdevs_list": [ 00:37:15.491 { 00:37:15.491 "name": null, 00:37:15.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:15.491 "is_configured": false, 00:37:15.491 "data_offset": 256, 00:37:15.491 "data_size": 7936 00:37:15.491 }, 00:37:15.491 { 00:37:15.491 "name": "BaseBdev2", 00:37:15.491 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:37:15.491 "is_configured": true, 00:37:15.491 "data_offset": 256, 00:37:15.491 "data_size": 7936 00:37:15.491 } 00:37:15.491 ] 00:37:15.491 }' 00:37:15.491 07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:15.491 07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:16.059 07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:16.059 07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:16.059 07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:16.059 07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:16.059 07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:16.059 07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:16.059 07:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:16.317 07:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:16.317 "name": "raid_bdev1", 00:37:16.317 "uuid": "32e0da5b-f209-4085-a0fb-e2e47d47934c", 00:37:16.317 "strip_size_kb": 0, 00:37:16.317 "state": "online", 00:37:16.317 "raid_level": "raid1", 00:37:16.317 "superblock": true, 00:37:16.317 "num_base_bdevs": 2, 00:37:16.317 "num_base_bdevs_discovered": 1, 00:37:16.317 "num_base_bdevs_operational": 1, 00:37:16.317 "base_bdevs_list": [ 00:37:16.317 { 00:37:16.317 "name": null, 00:37:16.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:16.318 "is_configured": false, 00:37:16.318 "data_offset": 256, 00:37:16.318 "data_size": 7936 00:37:16.318 }, 00:37:16.318 { 00:37:16.318 "name": "BaseBdev2", 00:37:16.318 "uuid": "dbb61375-8bfb-5ae9-be53-9101ebd07c99", 00:37:16.318 "is_configured": true, 00:37:16.318 "data_offset": 256, 00:37:16.318 "data_size": 7936 00:37:16.318 } 00:37:16.318 ] 00:37:16.318 }' 00:37:16.318 07:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:16.318 07:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:16.318 07:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:16.318 07:47:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:16.318 07:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 172308 00:37:16.318 07:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 172308 ']' 00:37:16.318 07:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 172308 00:37:16.318 07:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:37:16.318 07:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:16.318 07:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 172308 00:37:16.318 07:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:16.318 07:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:16.318 07:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 172308' 00:37:16.318 killing process with pid 172308 00:37:16.318 07:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@965 -- # kill 172308 00:37:16.318 Received shutdown signal, test time was about 60.000000 seconds 00:37:16.318 00:37:16.318 Latency(us) 00:37:16.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:16.318 =================================================================================================================== 00:37:16.318 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:16.318 07:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # wait 172308 00:37:16.318 [2024-07-12 07:47:50.178382] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:16.318 [2024-07-12 07:47:50.178471] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:16.318 [2024-07-12 07:47:50.178542] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:16.318 [2024-07-12 07:47:50.178626] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:37:16.576 [2024-07-12 07:47:50.210650] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:16.835 07:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:37:16.835 00:37:16.835 real 0m27.555s 00:37:16.835 user 0m43.813s 00:37:16.835 sys 0m3.310s 00:37:16.835 07:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:16.835 07:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:16.835 ************************************ 00:37:16.835 END TEST raid_rebuild_test_sb_md_interleaved 00:37:16.835 ************************************ 00:37:16.835 07:47:50 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:37:16.835 07:47:50 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:37:16.835 07:47:50 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 172308 ']' 00:37:16.835 07:47:50 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 172308 00:37:16.835 07:47:50 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:37:16.835 
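Before tearing down, the test deliberately re-issued bdev_raid_add_base_bdev for BaseBdev1 and treated failure as the passing outcome, since the raid superblock no longer contains that bdev's uuid; the JSON-RPC error above (code -22, Invalid argument) is exactly what was expected. A simplified sketch of the NOT() expect-failure wrapper whose body is traced above from autotest_common.sh (the real helper also validates the command via valid_exec_arg and can whitelist specific error strings; the signal handling below is an assumption):

  NOT() {
      local es=0
      "$@" || es=$?                 # es stays 0 on success, non-zero when the command fails
      (( es > 128 )) && return 1    # assumption: death by signal counts as a real failure
      (( es != 0 ))                 # succeed only when the wrapped command returned an error
  }

  # As invoked at bdev_raid.sh@776:
  NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_add_base_bdev raid_bdev1 BaseBdev1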
************************************ 00:37:16.835 END TEST bdev_raid 00:37:16.835 ************************************ 00:37:16.835 00:37:16.835 real 22m31.334s 00:37:16.835 user 38m3.048s 00:37:16.835 sys 3m54.063s 00:37:16.835 07:47:50 bdev_raid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:16.835 07:47:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:16.835 07:47:50 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:37:16.835 07:47:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:16.835 07:47:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:16.835 07:47:50 -- common/autotest_common.sh@10 -- # set +x 00:37:16.835 ************************************ 00:37:16.835 START TEST bdevperf_config 00:37:16.835 ************************************ 00:37:16.835 07:47:50 bdevperf_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:37:17.095 * Looking for test storage... 00:37:17.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:17.095 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:17.095 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:37:17.095 07:47:50 bdevperf_config 
-- bdevperf/common.sh@10 -- # local filename= 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:17.095 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:17.095 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:17.095 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:17.095 07:47:50 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:37:20.386 07:47:53 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-12 07:47:50.840954] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:20.386 [2024-07-12 07:47:50.841209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173129 ] 00:37:20.386 Using job config with 4 jobs 00:37:20.386 [2024-07-12 07:47:50.994356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:20.386 [2024-07-12 07:47:51.059886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:20.386 cpumask for '\''job0'\'' is too big 00:37:20.386 cpumask for '\''job1'\'' is too big 00:37:20.386 cpumask for '\''job2'\'' is too big 00:37:20.386 cpumask for '\''job3'\'' is too big 00:37:20.386 Running I/O for 2 seconds... 
00:37:20.386 00:37:20.386 Latency(us) 00:37:20.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.386 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:20.386 Malloc0 : 2.01 35587.38 34.75 0.00 0.00 7186.03 1388.74 11421.99 00:37:20.386 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:20.386 Malloc0 : 2.02 35565.66 34.73 0.00 0.00 7178.75 1349.73 10048.85 00:37:20.386 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:20.386 Malloc0 : 2.02 35544.43 34.71 0.00 0.00 7171.27 1357.53 8738.13 00:37:20.386 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:20.386 Malloc0 : 2.02 35523.43 34.69 0.00 0.00 7164.55 1318.52 8488.47 00:37:20.386 =================================================================================================================== 00:37:20.386 Total : 142220.91 138.89 0.00 0.00 7175.15 1318.52 11421.99' 00:37:20.386 07:47:53 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-12 07:47:50.840954] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:20.386 [2024-07-12 07:47:50.841209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173129 ] 00:37:20.386 Using job config with 4 jobs 00:37:20.386 [2024-07-12 07:47:50.994356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:20.386 [2024-07-12 07:47:51.059886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:20.386 cpumask for '\''job0'\'' is too big 00:37:20.386 cpumask for '\''job1'\'' is too big 00:37:20.386 cpumask for '\''job2'\'' is too big 00:37:20.386 cpumask for '\''job3'\'' is too big 00:37:20.386 Running I/O for 2 seconds... 00:37:20.386 00:37:20.386 Latency(us) 00:37:20.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.386 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:20.386 Malloc0 : 2.01 35587.38 34.75 0.00 0.00 7186.03 1388.74 11421.99 00:37:20.386 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:20.386 Malloc0 : 2.02 35565.66 34.73 0.00 0.00 7178.75 1349.73 10048.85 00:37:20.386 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:20.386 Malloc0 : 2.02 35544.43 34.71 0.00 0.00 7171.27 1357.53 8738.13 00:37:20.386 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:20.386 Malloc0 : 2.02 35523.43 34.69 0.00 0.00 7164.55 1318.52 8488.47 00:37:20.386 =================================================================================================================== 00:37:20.386 Total : 142220.91 138.89 0.00 0.00 7175.15 1318.52 11421.99' 00:37:20.386 07:47:53 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-12 07:47:50.840954] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:37:20.386 [2024-07-12 07:47:50.841209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173129 ] 00:37:20.386 Using job config with 4 jobs 00:37:20.386 [2024-07-12 07:47:50.994356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:20.386 [2024-07-12 07:47:51.059886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:20.386 cpumask for '\''job0'\'' is too big 00:37:20.386 cpumask for '\''job1'\'' is too big 00:37:20.386 cpumask for '\''job2'\'' is too big 00:37:20.386 cpumask for '\''job3'\'' is too big 00:37:20.386 Running I/O for 2 seconds... 00:37:20.386 00:37:20.386 Latency(us) 00:37:20.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.386 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:20.386 Malloc0 : 2.01 35587.38 34.75 0.00 0.00 7186.03 1388.74 11421.99 00:37:20.386 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:20.386 Malloc0 : 2.02 35565.66 34.73 0.00 0.00 7178.75 1349.73 10048.85 00:37:20.386 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:20.386 Malloc0 : 2.02 35544.43 34.71 0.00 0.00 7171.27 1357.53 8738.13 00:37:20.386 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:20.386 Malloc0 : 2.02 35523.43 34.69 0.00 0.00 7164.55 1318.52 8488.47 00:37:20.386 =================================================================================================================== 00:37:20.386 Total : 142220.91 138.89 0.00 0.00 7175.15 1318.52 11421.99' 00:37:20.386 07:47:53 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:37:20.386 07:47:53 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:37:20.386 07:47:53 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:37:20.386 07:47:53 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:37:20.386 [2024-07-12 07:47:53.600427] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:20.386 [2024-07-12 07:47:53.601020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173165 ] 00:37:20.386 [2024-07-12 07:47:53.757807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:20.386 [2024-07-12 07:47:53.818156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:20.386 cpumask for 'job0' is too big 00:37:20.386 cpumask for 'job1' is too big 00:37:20.386 cpumask for 'job2' is too big 00:37:20.386 cpumask for 'job3' is too big 00:37:22.921 07:47:56 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:37:22.921 Running I/O for 2 seconds... 
00:37:22.921 00:37:22.921 Latency(us) 00:37:22.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:22.921 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:22.921 Malloc0 : 2.01 35399.78 34.57 0.00 0.00 7225.57 1326.32 11297.16 00:37:22.921 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:22.921 Malloc0 : 2.01 35378.05 34.55 0.00 0.00 7218.62 1310.72 9986.44 00:37:22.921 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:22.921 Malloc0 : 2.02 35419.87 34.59 0.00 0.00 7198.63 1318.52 8675.72 00:37:22.921 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:22.921 Malloc0 : 2.02 35398.60 34.57 0.00 0.00 7191.83 1318.52 7801.90 00:37:22.921 =================================================================================================================== 00:37:22.921 Total : 141596.30 138.28 0.00 0.00 7208.64 1310.72 11297.16' 00:37:22.921 07:47:56 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:37:22.921 07:47:56 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:37:22.921 07:47:56 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:37:22.921 07:47:56 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:37:22.921 07:47:56 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:37:22.921 07:47:56 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:37:22.921 07:47:56 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:37:22.921 07:47:56 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:37:22.921 00:37:22.921 07:47:56 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:22.921 07:47:56 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:22.922 07:47:56 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:37:22.922 07:47:56 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:37:22.922 07:47:56 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:37:22.922 07:47:56 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:37:22.922 07:47:56 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:37:22.922 07:47:56 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:37:22.922 07:47:56 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:22.922 00:37:22.922 07:47:56 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:22.922 07:47:56 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:37:22.922 07:47:56 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:37:22.922 07:47:56 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:37:22.922 07:47:56 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:37:22.922 07:47:56 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:37:22.922 07:47:56 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:37:22.922 00:37:22.922 07:47:56 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:22.922 07:47:56 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:22.922 07:47:56 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 
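The create_job calls just traced rebuild test.conf for the write pass: no [global] section this time, just [job0]..[job2], each given rw=write and filename=Malloc0. The key spellings are not visible in the trace, so this reconstruction of the resulting test.conf is an assumption beyond the section names and values:

  [job0]
  filename=Malloc0
  rw=write

  [job1]
  filename=Malloc0
  rw=write

  [job2]
  filename=Malloc0
  rw=write

get_num_jobs then recovers the job count from the captured output with the echo-and-grep pipeline traced at common.sh@32 ($bdevperf_output holds the text captured above):

  echo "$bdevperf_output" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'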
00:37:25.460 07:47:58 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-12 07:47:56.348721] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:25.460 [2024-07-12 07:47:56.348876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173208 ] 00:37:25.460 Using job config with 3 jobs 00:37:25.460 [2024-07-12 07:47:56.489694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:25.460 [2024-07-12 07:47:56.550961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:25.460 cpumask for '\''job0'\'' is too big 00:37:25.460 cpumask for '\''job1'\'' is too big 00:37:25.460 cpumask for '\''job2'\'' is too big 00:37:25.460 Running I/O for 2 seconds... 00:37:25.460 00:37:25.460 Latency(us) 00:37:25.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:25.460 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:25.460 Malloc0 : 2.01 48062.45 46.94 0.00 0.00 5319.99 1310.72 8738.13 00:37:25.460 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:25.460 Malloc0 : 2.01 48033.46 46.91 0.00 0.00 5314.80 1349.73 7302.58 00:37:25.460 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:25.460 Malloc0 : 2.01 48087.80 46.96 0.00 0.00 5299.06 643.66 5867.03 00:37:25.460 =================================================================================================================== 00:37:25.460 Total : 144183.72 140.80 0.00 0.00 5311.27 643.66 8738.13' 00:37:25.460 07:47:58 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-12 07:47:56.348721] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:25.460 [2024-07-12 07:47:56.348876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173208 ] 00:37:25.460 Using job config with 3 jobs 00:37:25.460 [2024-07-12 07:47:56.489694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:25.460 [2024-07-12 07:47:56.550961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:25.460 cpumask for '\''job0'\'' is too big 00:37:25.460 cpumask for '\''job1'\'' is too big 00:37:25.460 cpumask for '\''job2'\'' is too big 00:37:25.460 Running I/O for 2 seconds... 
00:37:25.460 00:37:25.460 Latency(us) 00:37:25.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:25.460 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:25.460 Malloc0 : 2.01 48062.45 46.94 0.00 0.00 5319.99 1310.72 8738.13 00:37:25.460 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:25.460 Malloc0 : 2.01 48033.46 46.91 0.00 0.00 5314.80 1349.73 7302.58 00:37:25.460 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:25.460 Malloc0 : 2.01 48087.80 46.96 0.00 0.00 5299.06 643.66 5867.03 00:37:25.460 =================================================================================================================== 00:37:25.460 Total : 144183.72 140.80 0.00 0.00 5311.27 643.66 8738.13' 00:37:25.460 07:47:58 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-12 07:47:56.348721] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:25.460 [2024-07-12 07:47:56.348876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173208 ] 00:37:25.460 Using job config with 3 jobs 00:37:25.460 [2024-07-12 07:47:56.489694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:25.460 [2024-07-12 07:47:56.550961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:25.460 cpumask for '\''job0'\'' is too big 00:37:25.460 cpumask for '\''job1'\'' is too big 00:37:25.460 cpumask for '\''job2'\'' is too big 00:37:25.460 Running I/O for 2 seconds... 00:37:25.460 00:37:25.460 Latency(us) 00:37:25.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:25.460 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:25.460 Malloc0 : 2.01 48062.45 46.94 0.00 0.00 5319.99 1310.72 8738.13 00:37:25.460 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:25.460 Malloc0 : 2.01 48033.46 46.91 0.00 0.00 5314.80 1349.73 7302.58 00:37:25.460 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:25.460 Malloc0 : 2.01 48087.80 46.96 0.00 0.00 5299.06 643.66 5867.03 00:37:25.460 =================================================================================================================== 00:37:25.460 Total : 144183.72 140.80 0.00 0.00 5311.27 643.66 8738.13' 00:37:25.461 07:47:58 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:37:25.461 07:47:58 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:37:25.461 07:47:59 bdevperf_config 
-- bdevperf/common.sh@13 -- # cat 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:37:25.461 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:25.461 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:25.461 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:25.461 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:25.461 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:25.461 07:47:59 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:37:27.996 07:48:01 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-12 07:47:59.112258] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:37:27.996 [2024-07-12 07:47:59.112529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173256 ] 00:37:27.996 Using job config with 4 jobs 00:37:27.996 [2024-07-12 07:47:59.266084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:27.996 [2024-07-12 07:47:59.327242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:27.996 cpumask for '\''job0'\'' is too big 00:37:27.996 cpumask for '\''job1'\'' is too big 00:37:27.996 cpumask for '\''job2'\'' is too big 00:37:27.996 cpumask for '\''job3'\'' is too big 00:37:27.996 Running I/O for 2 seconds... 00:37:27.996 00:37:27.996 Latency(us) 00:37:27.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:27.996 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.996 Malloc0 : 2.02 17719.65 17.30 0.00 0.00 14436.81 2730.67 22843.98 00:37:27.996 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.996 Malloc1 : 2.02 17708.78 17.29 0.00 0.00 14434.98 3261.20 22843.98 00:37:27.996 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.996 Malloc0 : 2.03 17698.30 17.28 0.00 0.00 14406.42 2699.46 20097.71 00:37:27.996 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.996 Malloc1 : 2.03 17687.52 17.27 0.00 0.00 14406.20 3183.18 20097.71 00:37:27.996 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.996 Malloc0 : 2.03 17677.04 17.26 0.00 0.00 14378.73 2715.06 17351.44 00:37:27.996 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.996 Malloc1 : 2.03 17666.40 17.25 0.00 0.00 14378.55 3214.38 17226.61 00:37:27.996 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.996 Malloc0 : 2.03 17749.75 17.33 0.00 0.00 14276.40 2449.80 15603.81 00:37:27.996 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.996 Malloc1 : 2.03 17738.93 17.32 0.00 0.00 14274.37 1888.06 15603.81 00:37:27.996 =================================================================================================================== 00:37:27.996 Total : 141646.38 138.33 0.00 0.00 14373.88 1888.06 22843.98' 00:37:27.996 07:48:01 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-12 07:47:59.112258] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:27.996 [2024-07-12 07:47:59.112529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173256 ] 00:37:27.996 Using job config with 4 jobs 00:37:27.996 [2024-07-12 07:47:59.266084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:27.996 [2024-07-12 07:47:59.327242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:27.996 cpumask for '\''job0'\'' is too big 00:37:27.996 cpumask for '\''job1'\'' is too big 00:37:27.996 cpumask for '\''job2'\'' is too big 00:37:27.996 cpumask for '\''job3'\'' is too big 00:37:27.996 Running I/O for 2 seconds... 
00:37:27.996 00:37:27.996 Latency(us) 00:37:27.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:27.996 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.996 Malloc0 : 2.02 17719.65 17.30 0.00 0.00 14436.81 2730.67 22843.98 00:37:27.996 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.996 Malloc1 : 2.02 17708.78 17.29 0.00 0.00 14434.98 3261.20 22843.98 00:37:27.996 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.996 Malloc0 : 2.03 17698.30 17.28 0.00 0.00 14406.42 2699.46 20097.71 00:37:27.996 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.996 Malloc1 : 2.03 17687.52 17.27 0.00 0.00 14406.20 3183.18 20097.71 00:37:27.996 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.996 Malloc0 : 2.03 17677.04 17.26 0.00 0.00 14378.73 2715.06 17351.44 00:37:27.996 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.996 Malloc1 : 2.03 17666.40 17.25 0.00 0.00 14378.55 3214.38 17226.61 00:37:27.996 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.996 Malloc0 : 2.03 17749.75 17.33 0.00 0.00 14276.40 2449.80 15603.81 00:37:27.996 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.996 Malloc1 : 2.03 17738.93 17.32 0.00 0.00 14274.37 1888.06 15603.81 00:37:27.996 =================================================================================================================== 00:37:27.996 Total : 141646.38 138.33 0.00 0.00 14373.88 1888.06 22843.98' 00:37:27.996 07:48:01 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:37:27.996 07:48:01 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-12 07:47:59.112258] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:27.996 [2024-07-12 07:47:59.112529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173256 ] 00:37:27.996 Using job config with 4 jobs 00:37:27.996 [2024-07-12 07:47:59.266084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:27.996 [2024-07-12 07:47:59.327242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:27.997 cpumask for '\''job0'\'' is too big 00:37:27.997 cpumask for '\''job1'\'' is too big 00:37:27.997 cpumask for '\''job2'\'' is too big 00:37:27.997 cpumask for '\''job3'\'' is too big 00:37:27.997 Running I/O for 2 seconds... 
00:37:27.997 00:37:27.997 Latency(us) 00:37:27.997 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:27.997 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.997 Malloc0 : 2.02 17719.65 17.30 0.00 0.00 14436.81 2730.67 22843.98 00:37:27.997 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.997 Malloc1 : 2.02 17708.78 17.29 0.00 0.00 14434.98 3261.20 22843.98 00:37:27.997 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.997 Malloc0 : 2.03 17698.30 17.28 0.00 0.00 14406.42 2699.46 20097.71 00:37:27.997 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.997 Malloc1 : 2.03 17687.52 17.27 0.00 0.00 14406.20 3183.18 20097.71 00:37:27.997 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.997 Malloc0 : 2.03 17677.04 17.26 0.00 0.00 14378.73 2715.06 17351.44 00:37:27.997 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.997 Malloc1 : 2.03 17666.40 17.25 0.00 0.00 14378.55 3214.38 17226.61 00:37:27.997 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.997 Malloc0 : 2.03 17749.75 17.33 0.00 0.00 14276.40 2449.80 15603.81 00:37:27.997 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:27.997 Malloc1 : 2.03 17738.93 17.32 0.00 0.00 14274.37 1888.06 15603.81 00:37:27.997 =================================================================================================================== 00:37:27.997 Total : 141646.38 138.33 0.00 0.00 14373.88 1888.06 22843.98' 00:37:27.997 07:48:01 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:37:27.997 07:48:01 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:37:27.997 07:48:01 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:37:27.997 07:48:01 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:37:27.997 07:48:01 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:37:27.997 00:37:27.997 real 0m11.208s 00:37:27.997 user 0m9.578s 00:37:27.997 sys 0m1.057s 00:37:27.997 ************************************ 00:37:27.997 END TEST bdevperf_config 00:37:27.997 ************************************ 00:37:27.997 07:48:01 bdevperf_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:27.997 07:48:01 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:37:27.997 07:48:01 -- spdk/autotest.sh@192 -- # uname -s 00:37:28.257 07:48:01 -- spdk/autotest.sh@192 -- # [[ Linux == Linux ]] 00:37:28.258 07:48:01 -- spdk/autotest.sh@193 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:37:28.258 07:48:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:28.258 07:48:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:28.258 07:48:01 -- common/autotest_common.sh@10 -- # set +x 00:37:28.258 ************************************ 00:37:28.258 START TEST reactor_set_interrupt 00:37:28.258 ************************************ 00:37:28.258 07:48:01 reactor_set_interrupt -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:37:28.258 * Looking for test storage... 
00:37:28.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:37:28.258 07:48:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:37:28.258 07:48:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:37:28.258 07:48:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:37:28.258 07:48:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:37:28.258 07:48:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:37:28.258 07:48:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:37:28.258 07:48:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:37:28.258 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:37:28.258 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:37:28.258 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:37:28.258 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:37:28.258 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:37:28.258 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:37:28.258 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:37:28.258 07:48:02 
reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:37:28.258 07:48:02 reactor_set_interrupt -- 
common/build_config.sh@53 -- # CONFIG_ARCH=native 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:37:28.258 07:48:02 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:37:28.258 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:37:28.258 07:48:02 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:37:28.258 07:48:02 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:37:28.258 07:48:02 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:37:28.258 07:48:02 
reactor_set_interrupt -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:37:28.258 07:48:02 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:37:28.258 07:48:02 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:37:28.258 07:48:02 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:37:28.258 07:48:02 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:37:28.258 07:48:02 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:37:28.258 07:48:02 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:37:28.258 07:48:02 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:37:28.258 07:48:02 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:37:28.258 07:48:02 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:37:28.258 07:48:02 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:37:28.258 07:48:02 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:37:28.258 #define SPDK_CONFIG_H 00:37:28.258 #define SPDK_CONFIG_APPS 1 00:37:28.258 #define SPDK_CONFIG_ARCH native 00:37:28.258 #define SPDK_CONFIG_ASAN 1 00:37:28.258 #undef SPDK_CONFIG_AVAHI 00:37:28.259 #undef SPDK_CONFIG_CET 00:37:28.259 #define SPDK_CONFIG_COVERAGE 1 00:37:28.259 #define SPDK_CONFIG_CROSS_PREFIX 00:37:28.259 #undef SPDK_CONFIG_CRYPTO 00:37:28.259 #undef SPDK_CONFIG_CRYPTO_MLX5 00:37:28.259 #undef SPDK_CONFIG_CUSTOMOCF 00:37:28.259 #undef SPDK_CONFIG_DAOS 00:37:28.259 #define SPDK_CONFIG_DAOS_DIR 00:37:28.259 #define SPDK_CONFIG_DEBUG 1 00:37:28.259 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:37:28.259 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:37:28.259 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:37:28.259 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:37:28.259 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:37:28.259 #undef SPDK_CONFIG_DPDK_UADK 00:37:28.259 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:37:28.259 #define SPDK_CONFIG_EXAMPLES 1 00:37:28.259 #undef SPDK_CONFIG_FC 00:37:28.259 #define SPDK_CONFIG_FC_PATH 00:37:28.259 #define SPDK_CONFIG_FIO_PLUGIN 1 00:37:28.259 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:37:28.259 #undef SPDK_CONFIG_FUSE 00:37:28.259 #undef SPDK_CONFIG_FUZZER 00:37:28.259 #define SPDK_CONFIG_FUZZER_LIB 00:37:28.259 #undef SPDK_CONFIG_GOLANG 00:37:28.259 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:37:28.259 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:37:28.259 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:37:28.259 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:37:28.259 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:37:28.259 #undef SPDK_CONFIG_HAVE_LIBBSD 00:37:28.259 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:37:28.259 #define SPDK_CONFIG_IDXD 1 00:37:28.259 #undef SPDK_CONFIG_IDXD_KERNEL 00:37:28.259 #undef SPDK_CONFIG_IPSEC_MB 00:37:28.259 #define SPDK_CONFIG_IPSEC_MB_DIR 00:37:28.259 #define SPDK_CONFIG_ISAL 1 00:37:28.259 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:37:28.259 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:37:28.259 #define SPDK_CONFIG_LIBDIR 00:37:28.259 #undef 
SPDK_CONFIG_LTO 00:37:28.259 #define SPDK_CONFIG_MAX_LCORES 00:37:28.259 #define SPDK_CONFIG_NVME_CUSE 1 00:37:28.259 #undef SPDK_CONFIG_OCF 00:37:28.259 #define SPDK_CONFIG_OCF_PATH 00:37:28.259 #define SPDK_CONFIG_OPENSSL_PATH 00:37:28.259 #undef SPDK_CONFIG_PGO_CAPTURE 00:37:28.259 #define SPDK_CONFIG_PGO_DIR 00:37:28.259 #undef SPDK_CONFIG_PGO_USE 00:37:28.259 #define SPDK_CONFIG_PREFIX /usr/local 00:37:28.259 #define SPDK_CONFIG_RAID5F 1 00:37:28.259 #undef SPDK_CONFIG_RBD 00:37:28.259 #define SPDK_CONFIG_RDMA 1 00:37:28.259 #define SPDK_CONFIG_RDMA_PROV verbs 00:37:28.259 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:37:28.259 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:37:28.259 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:37:28.259 #undef SPDK_CONFIG_SHARED 00:37:28.259 #undef SPDK_CONFIG_SMA 00:37:28.259 #define SPDK_CONFIG_TESTS 1 00:37:28.259 #undef SPDK_CONFIG_TSAN 00:37:28.259 #undef SPDK_CONFIG_UBLK 00:37:28.259 #define SPDK_CONFIG_UBSAN 1 00:37:28.259 #define SPDK_CONFIG_UNIT_TESTS 1 00:37:28.259 #undef SPDK_CONFIG_URING 00:37:28.259 #define SPDK_CONFIG_URING_PATH 00:37:28.259 #undef SPDK_CONFIG_URING_ZNS 00:37:28.259 #undef SPDK_CONFIG_USDT 00:37:28.259 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:37:28.259 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:37:28.259 #undef SPDK_CONFIG_VFIO_USER 00:37:28.259 #define SPDK_CONFIG_VFIO_USER_DIR 00:37:28.259 #define SPDK_CONFIG_VHOST 1 00:37:28.259 #define SPDK_CONFIG_VIRTIO 1 00:37:28.259 #undef SPDK_CONFIG_VTUNE 00:37:28.259 #define SPDK_CONFIG_VTUNE_DIR 00:37:28.259 #define SPDK_CONFIG_WERROR 1 00:37:28.259 #define SPDK_CONFIG_WPDK_DIR 00:37:28.259 #undef SPDK_CONFIG_XNVME 00:37:28.259 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:37:28.259 07:48:02 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:28.259 07:48:02 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:28.259 07:48:02 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:28.259 07:48:02 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:28.259 07:48:02 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:28.259 07:48:02 reactor_set_interrupt -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:28.259 07:48:02 reactor_set_interrupt -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:28.259 07:48:02 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:37:28.259 07:48:02 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:37:28.259 07:48:02 reactor_set_interrupt -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@57 -- # : 1 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@61 -- # : 0 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@63 -- # : 0 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@65 -- # : 1 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@67 -- # : 1 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@69 -- # : 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@71 -- # : 0 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@73 -- # : 0 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@75 -- # : 0 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@77 -- # : 0 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@79 -- # : 1 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@81 -- # : 0 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@83 -- # : 0 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@85 -- # : 0 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@87 -- # : 0 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@89 -- # : 0 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@91 -- # : 0 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@93 -- # : 0 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:37:28.259 07:48:02 reactor_set_interrupt -- 
common/autotest_common.sh@95 -- # : 0 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@97 -- # : 0 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:37:28.259 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@99 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@101 -- # : rdma 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@103 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@105 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@107 -- # : 1 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@109 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@111 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@113 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@115 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@117 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@119 -- # : 1 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@121 -- # : 1 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@123 -- # : /home/vagrant/spdk_repo/dpdk/build 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@125 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@127 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@129 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@131 -- # : 0 
00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@133 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@135 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@137 -- # : v22.11.4 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@139 -- # : true 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@141 -- # : 1 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@143 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@145 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@147 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@149 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@151 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@153 -- # : 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@155 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@157 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@159 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@161 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@163 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@166 -- # : 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@168 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@169 -- 
# export SPDK_TEST_NVMF_MDNS 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@170 -- # : 0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@192 -- # 
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@199 -- # cat 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@252 -- # export QEMU_BIN= 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@252 -- # QEMU_BIN= 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@253 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@255 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@255 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:37:28.260 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:37:28.260 07:48:02 
reactor_set_interrupt -- common/autotest_common.sh@262 -- # export valgrind= 00:37:28.261 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@262 -- # valgrind= 00:37:28.261 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@268 -- # uname -s 00:37:28.521 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:37:28.521 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:37:28.521 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:37:28.521 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:37:28.521 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:37:28.521 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:37:28.521 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@278 -- # MAKE=make 00:37:28.521 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j10 00:37:28.521 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:37:28.521 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:37:28.521 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:37:28.521 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@298 -- # TEST_MODE= 00:37:28.521 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@317 -- # [[ -z 173335 ]] 00:37:28.521 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@317 -- # kill -0 173335 00:37:28.521 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:37:28.521 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:37:28.521 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:37:28.521 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@330 -- # local mount target_dir 00:37:28.521 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:37:28.521 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:37:28.521 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.hHVG5W 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@354 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.hHVG5W/tests/interrupt /tmp/spdk.hHVG5W 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@326 -- # df -T 00:37:28.522 07:48:02 reactor_set_interrupt -- 
common/autotest_common.sh@326 -- # grep -v Filesystem 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=1248956416 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=1253683200 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=4726784 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda1 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=ext4 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=9192808448 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=20616794112 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=11407208448 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=6265024512 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=6268399616 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=5242880 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=5242880 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda15 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=vfat 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=103061504 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=109395968 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=6334464 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:37:28.522 07:48:02 
reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=1253675008 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=1253679104 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=fuse.sshfs 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=98281414656 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=105088212992 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=1421365248 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:37:28.522 * Looking for test storage... 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@367 -- # local target_space new_size 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@371 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@371 -- # mount=/ 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@373 -- # target_space=9192808448 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@379 -- # [[ ext4 == tmpfs ]] 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@379 -- # [[ ext4 == ramfs ]] 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@380 -- # new_size=13621800960 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:37:28.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@388 -- # return 0 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@1678 -- # set -o errtrace 00:37:28.522 07:48:02 
reactor_set_interrupt -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # true 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@1685 -- # xtrace_fd 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:37:28.522 07:48:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:37:28.522 07:48:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:28.522 07:48:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:37:28.522 07:48:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:37:28.522 07:48:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:37:28.522 07:48:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:37:28.522 07:48:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:37:28.522 07:48:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:37:28.522 07:48:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:37:28.522 07:48:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:37:28.522 07:48:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:28.522 07:48:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:37:28.522 07:48:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=173378 00:37:28.522 07:48:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:28.522 07:48:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 173378 /var/tmp/spdk.sock 
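The set_test_storage trace above walks df output to pick a test directory with at least the requested free space (2 GiB plus a reserve). A minimal sketch of that probe, assuming GNU coreutils df; the function name and the -B1 --output=avail shortcut are illustrative, while the awk header filter is taken from the trace:

    set_test_storage_sketch() {   # usage: set_test_storage_sketch <testdir> <requested_size_bytes>
        local testdir=$1 requested_size=$2
        local mount target_space
        printf '* Looking for test storage...\n'
        # mount point backing the test dir; the harness keys its sizes/avails maps by this
        mount=$(df "$testdir" | awk '$1 !~ /Filesystem/{print $6}')
        # available bytes on that filesystem
        target_space=$(df -B1 --output=avail "$testdir" | tail -1)
        if (( target_space >= requested_size )); then
            printf '* Found test storage at %s\n' "$testdir"
        fi
    }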
00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@827 -- # '[' -z 173378 ']' 00:37:28.522 07:48:02 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:28.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:28.522 07:48:02 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:28.522 [2024-07-12 07:48:02.256539] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:28.522 [2024-07-12 07:48:02.257040] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173378 ] 00:37:28.781 [2024-07-12 07:48:02.421677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:28.781 [2024-07-12 07:48:02.476323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:28.781 [2024-07-12 07:48:02.476512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:28.781 [2024-07-12 07:48:02.476510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:28.781 [2024-07-12 07:48:02.543094] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
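start_intr_tgt, traced above, forks the interrupt_tgt example on a three-core mask and blocks until the RPC socket answers. A sketch of that launch-and-wait pattern; the binary's flags, the socket path, and the 100-retry budget are taken from the trace, while probing readiness with rpc_get_methods is an assumption (any RPC that succeeds once the socket is live would do):

    rpc_addr=/var/tmp/spdk.sock
    ./build/examples/interrupt_tgt -m 0x07 -r "$rpc_addr" -E -g &
    intr_tgt_pid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    max_retries=100
    until ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; do
        kill -0 "$intr_tgt_pid" 2>/dev/null || exit 1   # target died during startup
        (( --max_retries > 0 )) || exit 1               # gave up waiting
        sleep 0.1
    done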
00:37:29.459 07:48:03 reactor_set_interrupt -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:29.459 07:48:03 reactor_set_interrupt -- common/autotest_common.sh@860 -- # return 0 00:37:29.459 07:48:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:37:29.459 07:48:03 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:29.717 Malloc0 00:37:29.717 Malloc1 00:37:29.717 Malloc2 00:37:29.717 07:48:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:37:29.717 07:48:03 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:37:29.717 07:48:03 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:37:29.717 07:48:03 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:37:29.717 5000+0 records in 00:37:29.717 5000+0 records out 00:37:29.717 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0361763 s, 283 MB/s 00:37:29.717 07:48:03 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:37:29.976 AIO0 00:37:29.976 07:48:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 173378 00:37:29.976 07:48:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 173378 without_thd 00:37:29.976 07:48:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=173378 00:37:29.976 07:48:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:37:29.976 07:48:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:37:29.976 07:48:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:37:29.976 07:48:03 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:37:29.976 07:48:03 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:37:29.976 07:48:03 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:37:29.976 07:48:03 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:37:29.976 07:48:03 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:37:29.976 07:48:03 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:37:30.234 07:48:03 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:37:30.235 07:48:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:37:30.235 07:48:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:37:30.235 07:48:03 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:37:30.235 07:48:03 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:37:30.235 07:48:03 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:37:30.235 07:48:03 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:37:30.235 07:48:03 reactor_set_interrupt -- 
interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:37:30.235 07:48:03 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:37:30.493 spdk_thread ids are 1 on reactor0. 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 173378 0 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 173378 0 idle 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=173378 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 173378 -w 256 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 173378 root 20 0 20.1t 62220 28840 S 0.0 0.5 0:00.32 reactor_0' 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 173378 root 20 0 20.1t 62220 28840 S 0.0 0.5 0:00.32 reactor_0 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 173378 1 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 173378 1 idle 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=173378 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:30.493 07:48:04 reactor_set_interrupt -- 
interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 173378 -w 256 00:37:30.493 07:48:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:37:30.752 07:48:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 173381 root 20 0 20.1t 62220 28840 S 0.0 0.5 0:00.00 reactor_1' 00:37:30.752 07:48:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 173381 root 20 0 20.1t 62220 28840 S 0.0 0.5 0:00.00 reactor_1 00:37:30.752 07:48:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:30.752 07:48:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:30.752 07:48:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:37:30.752 07:48:04 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:37:30.752 07:48:04 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:37:30.752 07:48:04 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:37:30.752 07:48:04 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:37:30.752 07:48:04 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:30.752 07:48:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:37:30.752 07:48:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 173378 2 00:37:30.752 07:48:04 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 173378 2 idle 00:37:30.752 07:48:04 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=173378 00:37:30.752 07:48:04 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:37:30.752 07:48:04 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:30.753 07:48:04 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:37:30.753 07:48:04 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:37:30.753 07:48:04 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:30.753 07:48:04 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:30.753 07:48:04 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:30.753 07:48:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 173378 -w 256 00:37:30.753 07:48:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:37:31.012 07:48:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 173382 root 20 0 20.1t 62220 28840 S 0.0 0.5 0:00.00 reactor_2' 00:37:31.012 07:48:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 173382 root 20 0 20.1t 62220 28840 S 0.0 0.5 0:00.00 reactor_2 00:37:31.012 07:48:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:31.012 07:48:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:31.012 07:48:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:37:31.012 07:48:04 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:37:31.012 
07:48:04 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:37:31.012 07:48:04 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:37:31.012 07:48:04 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:37:31.012 07:48:04 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:31.012 07:48:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:37:31.012 07:48:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:37:31.012 07:48:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:37:31.270 [2024-07-12 07:48:04.939423] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:31.270 07:48:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:37:31.529 [2024-07-12 07:48:05.191056] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:37:31.529 [2024-07-12 07:48:05.192680] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:37:31.529 07:48:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:37:31.788 [2024-07-12 07:48:05.458875] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:37:31.788 [2024-07-12 07:48:05.460154] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:37:31.788 07:48:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:37:31.788 07:48:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 173378 0 00:37:31.788 07:48:05 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 173378 0 busy 00:37:31.788 07:48:05 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=173378 00:37:31.788 07:48:05 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:31.788 07:48:05 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:31.788 07:48:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:37:31.788 07:48:05 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:31.788 07:48:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:31.788 07:48:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:31.788 07:48:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 173378 -w 256 00:37:31.788 07:48:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:37:31.788 07:48:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 173378 root 20 0 20.1t 62368 28840 R 99.9 0.5 0:00.78 reactor_0' 00:37:31.788 07:48:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 173378 root 20 0 20.1t 62368 28840 R 99.9 0.5 0:00.78 reactor_0 00:37:31.788 07:48:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:31.788 07:48:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:31.788 07:48:05 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # cpu_rate=99.9 00:37:31.788 07:48:05 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:37:31.788 07:48:05 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:37:31.788 07:48:05 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:37:31.788 07:48:05 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:37:31.789 07:48:05 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:31.789 07:48:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:37:31.789 07:48:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 173378 2 00:37:31.789 07:48:05 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 173378 2 busy 00:37:31.789 07:48:05 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=173378 00:37:31.789 07:48:05 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:37:31.789 07:48:05 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:31.789 07:48:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:37:31.789 07:48:05 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:31.789 07:48:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:31.789 07:48:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:31.789 07:48:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:37:31.789 07:48:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 173378 -w 256 00:37:32.048 07:48:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 173382 root 20 0 20.1t 62368 28840 R 99.9 0.5 0:00.35 reactor_2' 00:37:32.048 07:48:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 173382 root 20 0 20.1t 62368 28840 R 99.9 0.5 0:00.35 reactor_2 00:37:32.048 07:48:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:32.048 07:48:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:32.048 07:48:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:37:32.048 07:48:05 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:37:32.048 07:48:05 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:37:32.048 07:48:05 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:37:32.048 07:48:05 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:37:32.048 07:48:05 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:32.048 07:48:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:37:32.307 [2024-07-12 07:48:05.990861] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
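This without-threads variant first pins the app thread onto core 1 so it cannot keep reactor 0 awake, and only then flips reactors 0 and 2 to poll mode, which is why their reactor_N threads jump to ~99.9% CPU in the top samples above. A condensed sketch of that toggle, using exactly the RPCs traced:

    rpc=./scripts/rpc.py
    $rpc thread_set_cpumask -i 1 -m 0x2                              # move app_thread to core 1
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d   # reactor 0 -> poll mode
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d   # reactor 2 -> poll mode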
00:37:32.307 [2024-07-12 07:48:05.992461] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 173378 2 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 173378 2 idle 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=173378 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 173378 -w 256 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 173382 root 20 0 20.1t 62472 28840 S 0.0 0.5 0:00.53 reactor_2' 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 173382 root 20 0 20.1t 62472 28840 S 0.0 0.5 0:00.53 reactor_2 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:37:32.307 07:48:06 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:32.308 07:48:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:37:32.567 [2024-07-12 07:48:06.434920] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:37:32.567 [2024-07-12 07:48:06.436203] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:37:32.826 07:48:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:37:32.826 07:48:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:37:32.826 07:48:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:37:32.826 [2024-07-12 07:48:06.611320] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:32.826 07:48:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 173378 0 00:37:32.826 07:48:06 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 173378 0 idle 00:37:32.826 07:48:06 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=173378 00:37:32.826 07:48:06 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:32.826 07:48:06 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:32.826 07:48:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:37:32.826 07:48:06 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:37:32.826 07:48:06 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:32.826 07:48:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:32.826 07:48:06 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:32.826 07:48:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 173378 -w 256 00:37:32.826 07:48:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:37:33.085 07:48:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 173378 root 20 0 20.1t 62568 28840 S 0.0 0.5 0:01.57 reactor_0' 00:37:33.085 07:48:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 173378 root 20 0 20.1t 62568 28840 S 0.0 0.5 0:01.57 reactor_0 00:37:33.085 07:48:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:33.085 07:48:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:33.085 07:48:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:37:33.085 07:48:06 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:37:33.085 07:48:06 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:37:33.085 07:48:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:37:33.085 07:48:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:37:33.085 07:48:06 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:33.085 07:48:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:37:33.085 07:48:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:37:33.085 07:48:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:37:33.085 07:48:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 173378 00:37:33.085 07:48:06 reactor_set_interrupt -- common/autotest_common.sh@946 -- # '[' -z 173378 ']' 00:37:33.085 07:48:06 reactor_set_interrupt -- common/autotest_common.sh@950 -- # kill -0 173378 00:37:33.085 07:48:06 reactor_set_interrupt -- common/autotest_common.sh@951 -- # uname 00:37:33.085 07:48:06 reactor_set_interrupt -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:33.085 07:48:06 reactor_set_interrupt -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 173378 00:37:33.085 07:48:06 reactor_set_interrupt -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:33.085 07:48:06 reactor_set_interrupt -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:33.085 killing process with pid 173378 00:37:33.085 07:48:06 reactor_set_interrupt -- common/autotest_common.sh@964 -- # echo 'killing process with pid 173378' 00:37:33.085 07:48:06 reactor_set_interrupt -- common/autotest_common.sh@965 
-- # kill 173378 00:37:33.085 07:48:06 reactor_set_interrupt -- common/autotest_common.sh@970 -- # wait 173378 00:37:33.345 07:48:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:37:33.345 07:48:07 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:37:33.345 07:48:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:37:33.345 07:48:07 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:33.345 07:48:07 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:37:33.345 07:48:07 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=173516 00:37:33.345 07:48:07 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:33.345 07:48:07 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:37:33.345 07:48:07 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 173516 /var/tmp/spdk.sock 00:37:33.345 07:48:07 reactor_set_interrupt -- common/autotest_common.sh@827 -- # '[' -z 173516 ']' 00:37:33.345 07:48:07 reactor_set_interrupt -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:33.345 07:48:07 reactor_set_interrupt -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:33.345 07:48:07 reactor_set_interrupt -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:33.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:33.345 07:48:07 reactor_set_interrupt -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:33.345 07:48:07 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:33.345 [2024-07-12 07:48:07.206261] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:33.345 [2024-07-12 07:48:07.206743] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173516 ] 00:37:33.605 [2024-07-12 07:48:07.370431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:33.605 [2024-07-12 07:48:07.413481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:33.605 [2024-07-12 07:48:07.413684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:33.605 [2024-07-12 07:48:07.413659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:33.605 [2024-07-12 07:48:07.473066] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
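killprocess, traced above, re-checks that the recorded pid still names the reactor process (and is not a sudo wrapper) before signalling it, then reaps it with wait so the next phase starts against a clean slate. A minimal sketch under the same checks:

    killprocess_sketch() {   # usage: killprocess_sketch <pid>
        local pid=$1 process_name
        kill -0 "$pid" 2>/dev/null || return 1              # no such process
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1              # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                          # wait reaps our own child
    }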
00:37:34.550 07:48:08 reactor_set_interrupt -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:34.550 07:48:08 reactor_set_interrupt -- common/autotest_common.sh@860 -- # return 0 00:37:34.550 07:48:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:37:34.550 07:48:08 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:34.808 Malloc0 00:37:34.808 Malloc1 00:37:34.808 Malloc2 00:37:34.808 07:48:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:37:34.808 07:48:08 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:37:34.808 07:48:08 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:37:34.808 07:48:08 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:37:34.808 5000+0 records in 00:37:34.808 5000+0 records out 00:37:34.808 10240000 bytes (10 MB, 9.8 MiB) copied, 0.037195 s, 275 MB/s 00:37:34.808 07:48:08 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:37:35.066 AIO0 00:37:35.066 07:48:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 173516 00:37:35.066 07:48:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 173516 00:37:35.066 07:48:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=173516 00:37:35.066 07:48:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:37:35.066 07:48:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:37:35.066 07:48:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:37:35.066 07:48:08 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:37:35.066 07:48:08 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:37:35.066 07:48:08 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:37:35.066 07:48:08 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:37:35.066 07:48:08 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:37:35.066 07:48:08 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:37:35.324 07:48:08 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:37:35.324 07:48:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:37:35.324 07:48:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:37:35.324 07:48:09 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:37:35.324 07:48:09 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:37:35.324 07:48:09 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:37:35.324 07:48:09 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:37:35.324 07:48:09 reactor_set_interrupt -- interrupt/common.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:37:35.324 07:48:09 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:37:35.583 07:48:09 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:37:35.583 spdk_thread ids are 1 on reactor0. 00:37:35.583 07:48:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:37:35.583 07:48:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:37:35.583 07:48:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:37:35.583 07:48:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 173516 0 00:37:35.583 07:48:09 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 173516 0 idle 00:37:35.583 07:48:09 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=173516 00:37:35.583 07:48:09 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:35.583 07:48:09 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:35.583 07:48:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:37:35.583 07:48:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:37:35.583 07:48:09 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:35.583 07:48:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:35.583 07:48:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:35.583 07:48:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 173516 -w 256 00:37:35.583 07:48:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 173516 root 20 0 20.1t 62248 28872 S 0.0 0.5 0:00.30 reactor_0' 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 173516 root 20 0 20.1t 62248 28872 S 0.0 0.5 0:00.30 reactor_0 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 173516 1 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 173516 1 idle 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=173516 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != 
\b\u\s\y ]] 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 173516 -w 256 00:37:35.584 07:48:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 173521 root 20 0 20.1t 62248 28872 S 0.0 0.5 0:00.00 reactor_1' 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 173521 root 20 0 20.1t 62248 28872 S 0.0 0.5 0:00.00 reactor_1 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 173516 2 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 173516 2 idle 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=173516 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 173516 -w 256 00:37:35.843 07:48:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:37:36.102 07:48:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 173522 root 20 0 20.1t 62248 28872 S 0.0 0.5 0:00.00 reactor_2' 00:37:36.102 07:48:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 173522 root 20 0 20.1t 62248 28872 S 0.0 0.5 0:00.00 reactor_2 00:37:36.102 07:48:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:36.102 07:48:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:36.102 07:48:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:37:36.102 07:48:09 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:37:36.102 07:48:09 reactor_set_interrupt -- 
interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:37:36.102 07:48:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:37:36.102 07:48:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:37:36.102 07:48:09 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:36.102 07:48:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:37:36.102 07:48:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:37:36.102 [2024-07-12 07:48:09.960167] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:37:36.102 [2024-07-12 07:48:09.960748] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:37:36.102 [2024-07-12 07:48:09.961475] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:37:36.102 07:48:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:37:36.361 [2024-07-12 07:48:10.231961] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:37:36.361 [2024-07-12 07:48:10.232931] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 173516 0 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 173516 0 busy 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=173516 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 173516 -w 256 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 173516 root 20 0 20.1t 62392 28872 R 99.9 0.5 0:00.77 reactor_0' 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 173516 root 20 0 20.1t 62392 28872 R 99.9 0.5 0:00.77 reactor_0 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:37:36.620 
07:48:10 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 173516 2 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 173516 2 busy 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=173516 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:37:36.620 07:48:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 173516 -w 256 00:37:36.879 07:48:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 173522 root 20 0 20.1t 62392 28872 R 99.9 0.5 0:00.36 reactor_2' 00:37:36.879 07:48:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 173522 root 20 0 20.1t 62392 28872 R 99.9 0.5 0:00.36 reactor_2 00:37:36.879 07:48:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:36.879 07:48:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:36.879 07:48:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:37:36.879 07:48:10 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:37:36.879 07:48:10 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:37:36.879 07:48:10 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:37:36.879 07:48:10 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:37:36.879 07:48:10 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:36.879 07:48:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:37:37.137 [2024-07-12 07:48:10.776210] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:37:37.137 [2024-07-12 07:48:10.776985] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 173516 2 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 173516 2 idle 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=173516 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 173516 -w 256 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 173522 root 20 0 20.1t 62392 28872 S 0.0 0.5 0:00.54 reactor_2' 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 173522 root 20 0 20.1t 62392 28872 S 0.0 0.5 0:00.54 reactor_2 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:37.137 07:48:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:37:37.395 [2024-07-12 07:48:11.204262] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:37:37.395 [2024-07-12 07:48:11.205664] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
00:37:37.395 [2024-07-12 07:48:11.205804] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:37:37.395 07:48:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:37:37.395 07:48:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 173516 0 00:37:37.395 07:48:11 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 173516 0 idle 00:37:37.395 07:48:11 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=173516 00:37:37.395 07:48:11 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:37.395 07:48:11 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:37.395 07:48:11 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:37:37.395 07:48:11 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:37:37.395 07:48:11 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:37.395 07:48:11 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:37.395 07:48:11 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:37.395 07:48:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 173516 -w 256 00:37:37.395 07:48:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:37:37.654 07:48:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 173516 root 20 0 20.1t 62504 28872 S 0.0 0.5 0:01.56 reactor_0' 00:37:37.654 07:48:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 173516 root 20 0 20.1t 62504 28872 S 0.0 0.5 0:01.56 reactor_0 00:37:37.654 07:48:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:37.654 07:48:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:37.654 07:48:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:37:37.654 07:48:11 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:37:37.654 07:48:11 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:37:37.654 07:48:11 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:37:37.654 07:48:11 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:37:37.654 07:48:11 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:37.654 07:48:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:37:37.654 07:48:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:37:37.654 07:48:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:37:37.655 07:48:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 173516 00:37:37.655 07:48:11 reactor_set_interrupt -- common/autotest_common.sh@946 -- # '[' -z 173516 ']' 00:37:37.655 07:48:11 reactor_set_interrupt -- common/autotest_common.sh@950 -- # kill -0 173516 00:37:37.655 07:48:11 reactor_set_interrupt -- common/autotest_common.sh@951 -- # uname 00:37:37.655 07:48:11 reactor_set_interrupt -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:37.655 07:48:11 reactor_set_interrupt -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 173516 00:37:37.655 07:48:11 reactor_set_interrupt -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:37.655 07:48:11 reactor_set_interrupt -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 
00:37:37.655 07:48:11 reactor_set_interrupt -- common/autotest_common.sh@964 -- # echo 'killing process with pid 173516' 00:37:37.655 killing process with pid 173516 00:37:37.655 07:48:11 reactor_set_interrupt -- common/autotest_common.sh@965 -- # kill 173516 00:37:37.655 07:48:11 reactor_set_interrupt -- common/autotest_common.sh@970 -- # wait 173516 00:37:37.914 07:48:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:37:37.914 07:48:11 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:37:37.914 ************************************ 00:37:37.914 END TEST reactor_set_interrupt 00:37:37.914 ************************************ 00:37:37.914 00:37:37.914 real 0m9.829s 00:37:37.914 user 0m9.053s 00:37:37.914 sys 0m2.018s 00:37:37.914 07:48:11 reactor_set_interrupt -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:37.914 07:48:11 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:37.914 07:48:11 -- spdk/autotest.sh@194 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:37:37.914 07:48:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:37.914 07:48:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:37.914 07:48:11 -- common/autotest_common.sh@10 -- # set +x 00:37:37.914 ************************************ 00:37:37.914 START TEST reap_unregistered_poller 00:37:37.914 ************************************ 00:37:37.914 07:48:11 reap_unregistered_poller -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:37:38.176 * Looking for test storage... 00:37:38.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:37:38.176 07:48:11 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:37:38.176 07:48:11 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:37:38.176 07:48:11 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:37:38.176 07:48:11 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:37:38.176 07:48:11 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
00:37:38.176 07:48:11 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:37:38.176 07:48:11 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:37:38.176 07:48:11 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:37:38.176 07:48:11 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:37:38.176 07:48:11 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:37:38.176 07:48:11 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:37:38.176 07:48:11 reap_unregistered_poller -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:37:38.176 07:48:11 reap_unregistered_poller -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:37:38.176 07:48:11 reap_unregistered_poller -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:37:38.176 
07:48:11 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:37:38.176 07:48:11 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@59 -- # 
CONFIG_IPSEC_MB_DIR= 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=y 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:37:38.177 07:48:11 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:37:38.177 07:48:11 reap_unregistered_poller -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:37:38.177 07:48:11 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:37:38.177 07:48:11 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:37:38.177 07:48:11 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:37:38.177 07:48:11 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:37:38.177 07:48:11 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:37:38.177 07:48:11 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:37:38.177 07:48:11 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:37:38.177 07:48:11 reap_unregistered_poller 
-- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:37:38.177 07:48:11 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:37:38.177 07:48:11 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:37:38.177 07:48:11 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:37:38.177 07:48:11 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:37:38.177 07:48:11 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:37:38.177 07:48:11 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:37:38.177 07:48:11 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:37:38.177 #define SPDK_CONFIG_H 00:37:38.177 #define SPDK_CONFIG_APPS 1 00:37:38.177 #define SPDK_CONFIG_ARCH native 00:37:38.177 #define SPDK_CONFIG_ASAN 1 00:37:38.177 #undef SPDK_CONFIG_AVAHI 00:37:38.177 #undef SPDK_CONFIG_CET 00:37:38.177 #define SPDK_CONFIG_COVERAGE 1 00:37:38.177 #define SPDK_CONFIG_CROSS_PREFIX 00:37:38.177 #undef SPDK_CONFIG_CRYPTO 00:37:38.177 #undef SPDK_CONFIG_CRYPTO_MLX5 00:37:38.177 #undef SPDK_CONFIG_CUSTOMOCF 00:37:38.177 #undef SPDK_CONFIG_DAOS 00:37:38.177 #define SPDK_CONFIG_DAOS_DIR 00:37:38.177 #define SPDK_CONFIG_DEBUG 1 00:37:38.177 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:37:38.177 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:37:38.177 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:37:38.177 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:37:38.177 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:37:38.177 #undef SPDK_CONFIG_DPDK_UADK 00:37:38.177 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:37:38.177 #define SPDK_CONFIG_EXAMPLES 1 00:37:38.177 #undef SPDK_CONFIG_FC 00:37:38.177 #define SPDK_CONFIG_FC_PATH 00:37:38.177 #define SPDK_CONFIG_FIO_PLUGIN 1 00:37:38.177 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:37:38.177 #undef SPDK_CONFIG_FUSE 00:37:38.177 #undef SPDK_CONFIG_FUZZER 00:37:38.177 #define SPDK_CONFIG_FUZZER_LIB 00:37:38.177 #undef SPDK_CONFIG_GOLANG 00:37:38.177 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:37:38.177 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:37:38.177 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:37:38.177 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:37:38.177 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:37:38.177 #undef SPDK_CONFIG_HAVE_LIBBSD 00:37:38.177 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:37:38.177 #define SPDK_CONFIG_IDXD 1 00:37:38.177 #undef SPDK_CONFIG_IDXD_KERNEL 00:37:38.177 #undef SPDK_CONFIG_IPSEC_MB 00:37:38.177 #define SPDK_CONFIG_IPSEC_MB_DIR 00:37:38.177 #define SPDK_CONFIG_ISAL 1 00:37:38.177 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:37:38.177 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:37:38.177 #define SPDK_CONFIG_LIBDIR 00:37:38.177 #undef SPDK_CONFIG_LTO 00:37:38.177 #define SPDK_CONFIG_MAX_LCORES 00:37:38.177 #define SPDK_CONFIG_NVME_CUSE 1 00:37:38.177 #undef SPDK_CONFIG_OCF 00:37:38.177 #define SPDK_CONFIG_OCF_PATH 00:37:38.177 #define SPDK_CONFIG_OPENSSL_PATH 00:37:38.177 #undef SPDK_CONFIG_PGO_CAPTURE 00:37:38.177 #define SPDK_CONFIG_PGO_DIR 00:37:38.177 #undef SPDK_CONFIG_PGO_USE 00:37:38.177 #define SPDK_CONFIG_PREFIX /usr/local 00:37:38.177 #define SPDK_CONFIG_RAID5F 1 00:37:38.177 #undef SPDK_CONFIG_RBD 00:37:38.177 #define 
SPDK_CONFIG_RDMA 1 00:37:38.177 #define SPDK_CONFIG_RDMA_PROV verbs 00:37:38.177 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:37:38.177 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:37:38.177 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:37:38.177 #undef SPDK_CONFIG_SHARED 00:37:38.177 #undef SPDK_CONFIG_SMA 00:37:38.177 #define SPDK_CONFIG_TESTS 1 00:37:38.177 #undef SPDK_CONFIG_TSAN 00:37:38.177 #undef SPDK_CONFIG_UBLK 00:37:38.177 #define SPDK_CONFIG_UBSAN 1 00:37:38.177 #define SPDK_CONFIG_UNIT_TESTS 1 00:37:38.177 #undef SPDK_CONFIG_URING 00:37:38.177 #define SPDK_CONFIG_URING_PATH 00:37:38.177 #undef SPDK_CONFIG_URING_ZNS 00:37:38.177 #undef SPDK_CONFIG_USDT 00:37:38.177 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:37:38.177 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:37:38.177 #undef SPDK_CONFIG_VFIO_USER 00:37:38.177 #define SPDK_CONFIG_VFIO_USER_DIR 00:37:38.177 #define SPDK_CONFIG_VHOST 1 00:37:38.177 #define SPDK_CONFIG_VIRTIO 1 00:37:38.177 #undef SPDK_CONFIG_VTUNE 00:37:38.177 #define SPDK_CONFIG_VTUNE_DIR 00:37:38.177 #define SPDK_CONFIG_WERROR 1 00:37:38.177 #define SPDK_CONFIG_WPDK_DIR 00:37:38.177 #undef SPDK_CONFIG_XNVME 00:37:38.177 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:37:38.177 07:48:11 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:37:38.177 07:48:11 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:38.177 07:48:11 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:38.177 07:48:11 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:38.177 07:48:11 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:38.177 07:48:11 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:38.177 07:48:11 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:38.178 07:48:11 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:38.178 07:48:11 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:37:38.178 07:48:11 reap_unregistered_poller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:38.178 07:48:11 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:37:38.178 07:48:11 reap_unregistered_poller -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:37:38.178 07:48:11 reap_unregistered_poller -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:37:38.178 07:48:11 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:37:38.178 07:48:11 reap_unregistered_poller -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:37:38.178 07:48:11 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:37:38.178 07:48:11 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:37:38.178 07:48:11 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:37:38.178 07:48:11 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:37:38.178 07:48:11 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:37:38.178 07:48:11 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:37:38.178 07:48:11 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:37:38.178 07:48:11 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:37:38.178 07:48:11 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:37:38.178 07:48:11 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:37:38.178 07:48:11 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:37:38.178 07:48:12 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:37:38.178 07:48:12 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:37:38.178 07:48:12 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:37:38.178 07:48:12 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:37:38.178 07:48:12 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:37:38.178 07:48:12 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:37:38.178 07:48:12 reap_unregistered_poller -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:37:38.178 07:48:12 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@57 -- # : 1 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@61 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@63 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@65 -- # : 1 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@67 -- # : 1 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@69 -- # : 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@71 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@73 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@75 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@77 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@79 -- # : 1 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@81 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@83 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@85 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@87 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@89 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@91 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@93 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- 
common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@95 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@97 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@99 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@101 -- # : rdma 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@103 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@105 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@107 -- # : 1 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@109 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@111 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@113 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@115 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@117 -- # : 0 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@119 -- # : 1 00:37:38.178 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@121 -- # : 1 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@123 -- # : /home/vagrant/spdk_repo/dpdk/build 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@125 -- # : 0 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@127 -- # : 0 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:37:38.179 07:48:12 reap_unregistered_poller -- 
common/autotest_common.sh@129 -- # : 0 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@131 -- # : 0 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@133 -- # : 0 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@135 -- # : 0 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@137 -- # : v22.11.4 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@139 -- # : true 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@141 -- # : 1 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@143 -- # : 0 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@145 -- # : 0 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@147 -- # : 0 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@149 -- # : 0 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@151 -- # : 0 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@153 -- # : 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@155 -- # : 0 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@157 -- # : 0 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@159 -- # : 0 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@161 -- # : 0 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@163 -- # : 0 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:37:38.179 07:48:12 
reap_unregistered_poller -- common/autotest_common.sh@166 -- # : 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@168 -- # : 0 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@170 -- # : 0 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@184 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@199 -- # cat 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:37:38.179 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@252 -- # export QEMU_BIN= 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@252 -- # QEMU_BIN= 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@253 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:37:38.180 07:48:12 reap_unregistered_poller -- 
common/autotest_common.sh@255 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@255 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@262 -- # export valgrind= 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@262 -- # valgrind= 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@268 -- # uname -s 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@278 -- # MAKE=make 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j10 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@298 -- # TEST_MODE= 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@317 -- # [[ -z 173677 ]] 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@317 -- # kill -0 173677 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@330 -- # local mount target_dir 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:37:38.180 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.F6S8aY 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:37:38.441 07:48:12 reap_unregistered_poller -- 
common/autotest_common.sh@344 -- # [[ -n '' ]] 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@354 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.F6S8aY/tests/interrupt /tmp/spdk.F6S8aY 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@326 -- # df -T 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=1248956416 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=1253683200 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=4726784 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda1 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=ext4 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=9192763392 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=20616794112 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=11407253504 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=6265024512 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=6268399616 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=5242880 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=5242880 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@360 -- # 
mounts["$mount"]=/dev/vda15 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=vfat 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=103061504 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=109395968 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=6334464 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=1253675008 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=1253679104 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=fuse.sshfs 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=98281308160 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=105088212992 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=1421471744 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:37:38.441 * Looking for test storage... 
00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@367 -- # local target_space new_size 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@371 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@371 -- # mount=/ 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@373 -- # target_space=9192763392 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@379 -- # [[ ext4 == tmpfs ]] 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@379 -- # [[ ext4 == ramfs ]] 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@380 -- # new_size=13621846016 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:37:38.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@388 -- # return 0 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@1678 -- # set -o errtrace 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # true 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@1685 -- # xtrace_fd 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:37:38.441 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:37:38.442 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:37:38.442 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:37:38.442 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:37:38.442 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:37:38.442 07:48:12 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:37:38.442 07:48:12 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:38.442 07:48:12 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:37:38.442 07:48:12 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:37:38.442 07:48:12 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:37:38.442 07:48:12 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:37:38.442 07:48:12 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:37:38.442 07:48:12 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:37:38.442 07:48:12 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:37:38.442 07:48:12 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:37:38.442 07:48:12 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:38.442 07:48:12 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:37:38.442 07:48:12 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=173728 00:37:38.442 07:48:12 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:38.442 07:48:12 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 173728 /var/tmp/spdk.sock 00:37:38.442 07:48:12 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:37:38.442 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@827 -- # '[' -z 173728 ']' 00:37:38.442 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:38.442 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:38.442 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:38.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
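start_intr_tgt, traced above, backgrounds the interrupt_tgt example on cpu mask 0x07 and then waits in waitforlisten until the RPC socket answers. A hedged reconstruction of that launch (the poll interval and retry budget are assumptions; the trace only shows the command line, the trap, and the waitforlisten entry):

    /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g &
    intr_tgt_pid=$!
    trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
    # waitforlisten: retry a cheap RPC until the UNIX socket accepts connections
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for (( i = 100; i > 0; i-- )); do
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done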
00:37:38.442 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:38.442 07:48:12 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:37:38.442 [2024-07-12 07:48:12.165828] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:38.442 [2024-07-12 07:48:12.166409] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173728 ] 00:37:38.701 [2024-07-12 07:48:12.330200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:38.701 [2024-07-12 07:48:12.382111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:38.701 [2024-07-12 07:48:12.382314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:38.701 [2024-07-12 07:48:12.382291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:38.701 [2024-07-12 07:48:12.446393] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:39.269 07:48:13 reap_unregistered_poller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:39.269 07:48:13 reap_unregistered_poller -- common/autotest_common.sh@860 -- # return 0 00:37:39.269 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:37:39.269 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:37:39.269 07:48:13 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.269 07:48:13 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:37:39.269 07:48:13 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:39.269 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:37:39.269 "name": "app_thread", 00:37:39.269 "id": 1, 00:37:39.269 "active_pollers": [], 00:37:39.269 "timed_pollers": [ 00:37:39.269 { 00:37:39.269 "name": "rpc_subsystem_poll_servers", 00:37:39.269 "id": 1, 00:37:39.269 "state": "waiting", 00:37:39.269 "run_count": 0, 00:37:39.269 "busy_count": 0, 00:37:39.269 "period_ticks": 8400000 00:37:39.269 } 00:37:39.269 ], 00:37:39.269 "paused_pollers": [] 00:37:39.269 }' 00:37:39.269 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:37:39.269 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:37:39.269 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:37:39.269 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:37:39.527 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:37:39.527 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:37:39.527 07:48:13 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:37:39.527 07:48:13 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:37:39.527 07:48:13 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:37:39.527 
5000+0 records in 00:37:39.527 5000+0 records out 00:37:39.527 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0361957 s, 283 MB/s 00:37:39.527 07:48:13 reap_unregistered_poller -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:37:39.784 AIO0 00:37:39.785 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:40.043 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:37:40.043 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:37:40.043 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:37:40.043 07:48:13 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:40.043 07:48:13 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:37:40.043 07:48:13 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:40.043 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:37:40.043 "name": "app_thread", 00:37:40.043 "id": 1, 00:37:40.043 "active_pollers": [], 00:37:40.043 "timed_pollers": [ 00:37:40.043 { 00:37:40.043 "name": "rpc_subsystem_poll_servers", 00:37:40.043 "id": 1, 00:37:40.043 "state": "waiting", 00:37:40.043 "run_count": 0, 00:37:40.043 "busy_count": 0, 00:37:40.043 "period_ticks": 8400000 00:37:40.043 } 00:37:40.043 ], 00:37:40.043 "paused_pollers": [] 00:37:40.043 }' 00:37:40.043 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:37:40.301 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:37:40.301 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:37:40.301 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:37:40.301 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:37:40.301 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:37:40.301 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:37:40.301 07:48:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 173728 00:37:40.301 07:48:13 reap_unregistered_poller -- common/autotest_common.sh@946 -- # '[' -z 173728 ']' 00:37:40.301 07:48:13 reap_unregistered_poller -- common/autotest_common.sh@950 -- # kill -0 173728 00:37:40.301 07:48:13 reap_unregistered_poller -- common/autotest_common.sh@951 -- # uname 00:37:40.301 07:48:13 reap_unregistered_poller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:40.301 07:48:13 reap_unregistered_poller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 173728 00:37:40.301 07:48:14 reap_unregistered_poller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:40.301 07:48:14 reap_unregistered_poller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:40.301 07:48:14 reap_unregistered_poller -- common/autotest_common.sh@964 -- # echo 
'killing process with pid 173728' 00:37:40.301 killing process with pid 173728 00:37:40.301 07:48:14 reap_unregistered_poller -- common/autotest_common.sh@965 -- # kill 173728 00:37:40.301 07:48:14 reap_unregistered_poller -- common/autotest_common.sh@970 -- # wait 173728 00:37:40.561 07:48:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:37:40.561 07:48:14 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:37:40.561 ************************************ 00:37:40.561 END TEST reap_unregistered_poller 00:37:40.561 ************************************ 00:37:40.561 00:37:40.561 real 0m2.522s 00:37:40.561 user 0m1.482s 00:37:40.561 sys 0m0.644s 00:37:40.561 07:48:14 reap_unregistered_poller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:40.561 07:48:14 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:37:40.561 07:48:14 -- spdk/autotest.sh@198 -- # uname -s 00:37:40.561 07:48:14 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:37:40.561 07:48:14 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:37:40.561 07:48:14 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:37:40.561 07:48:14 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:37:40.561 07:48:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:40.561 07:48:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:40.561 07:48:14 -- common/autotest_common.sh@10 -- # set +x 00:37:40.561 ************************************ 00:37:40.561 START TEST spdk_dd 00:37:40.561 ************************************ 00:37:40.561 07:48:14 spdk_dd -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:37:40.820 * Looking for test storage... 
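The reap_unregistered_poller body that just finished snapshots the app thread's pollers, attaches an AIO bdev, waits for examine to complete, and asserts that only the original rpc_subsystem_poll_servers timed poller remains. Condensed into a sketch built from the commands visible in the trace (relative paths shortened):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    app_thread=$("$rpc" thread_get_pollers | jq -r '.threads[0]')
    native_pollers=$(jq -r '.active_pollers[].name' <<< "$app_thread")
    native_pollers+=' '
    native_pollers+=$(jq -r '.timed_pollers[].name' <<< "$app_thread")

    # setup_bdev_aio: back an AIO bdev with a 10 MB file
    dd if=/dev/zero of=test/interrupt/aiofile bs=2048 count=5000
    "$rpc" bdev_aio_create test/interrupt/aiofile AIO0 2048
    "$rpc" bdev_wait_for_examine
    sleep 0.1

    app_thread=$("$rpc" thread_get_pollers | jq -r '.threads[0]')
    remaining_pollers=$(jq -r '.active_pollers[].name' <<< "$app_thread")
    remaining_pollers+=' '
    remaining_pollers+=$(jq -r '.timed_pollers[].name' <<< "$app_thread")
    [[ $remaining_pollers == "$native_pollers" ]]    # the poller sets must match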
00:37:40.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:37:40.820 07:48:14 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:40.820 07:48:14 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:40.820 07:48:14 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:40.820 07:48:14 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:40.820 07:48:14 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:40.820 07:48:14 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:40.820 07:48:14 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:40.820 07:48:14 spdk_dd -- paths/export.sh@5 -- # export PATH 00:37:40.820 07:48:14 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:40.820 07:48:14 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:37:41.079 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:37:41.337 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:37:42.274 07:48:16 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:37:42.274 07:48:16 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@230 -- # local class 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@232 -- # local progif 00:37:42.274 07:48:16 spdk_dd -- 
scripts/common.sh@233 -- # printf %02x 1 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@233 -- # class=01 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@15 -- # local i 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@24 -- # return 0 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@325 -- # (( 1 )) 00:37:42.274 07:48:16 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:37:42.274 07:48:16 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@139 -- # local lib so 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@143 -- # [[ libssl.so.3 == 
liburing.so.* ]] 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@143 -- # [[ libcrypto.so.3 == liburing.so.* ]] 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@143 -- # [[ libkeyutils.so.1 == liburing.so.* ]] 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:37:42.274 07:48:16 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:37:42.274 07:48:16 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:37:42.274 07:48:16 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:37:42.274 07:48:16 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:37:42.274 07:48:16 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:42.274 07:48:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:37:42.274 ************************************ 00:37:42.274 START TEST spdk_dd_basic_rw 00:37:42.274 ************************************ 00:37:42.274 07:48:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:37:42.533 * Looking for test storage... 
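check_liburing, whose trace ends above, asks the dynamic loader for spdk_dd's dependency list and pattern-matches every entry against liburing.so.*; the result feeds the liburing_in_use gate checked at dd.sh@15. A minimal sketch of the scan:

    liburing_in_use=0
    while read -r lib _ so _; do
        # ldd-style lines look like: libssl.so.3 => /lib/x86_64-linux-gnu/libssl.so.3 (0x...)
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(LD_TRACE_LOADED_OBJECTS=1 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)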
00:37:42.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' 
['traddr']='0000:00:10.0' ['trtype']='pcie') 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:37:42.533 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:37:42.795 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported 
Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational 
Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 105 Data Units Written: 7 Host Read Commands: 2303 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:37:42.795 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert 
Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion 
Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 105 Data Units Written: 7 Host Read Commands: 2303 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 
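get_native_nvme_bs, which produced the two large regex matches above, captures spdk_nvme_identify output for the controller at 0000:00:10.0 and pulls out the data size of the current LBA format; here format #04 carries a 4096-byte data size, so native_bs=4096. A hedged reconstruction of the extraction:

    pci=0000:00:10.0
    mapfile -t id < <(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ ${id[*]} =~ $re ]] && lbaf=${BASH_REMATCH[1]}      # 04
    re="LBA Format #$lbaf: Data Size: *([0-9]+)"
    [[ ${id[*]} =~ $re ]] && native_bs=${BASH_REMATCH[1]} # 4096
    echo "$native_bs"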
00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:37:42.796 ************************************ 00:37:42.796 START TEST dd_bs_lt_native_bs 00:37:42.796 ************************************ 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1121 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:42.796 07:48:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:37:42.796 { 00:37:42.796 "subsystems": [ 00:37:42.796 { 00:37:42.796 "subsystem": "bdev", 00:37:42.796 "config": [ 00:37:42.796 { 00:37:42.796 "params": { 00:37:42.796 "trtype": "pcie", 00:37:42.796 "traddr": "0000:00:10.0", 00:37:42.796 "name": "Nvme0" 00:37:42.796 }, 00:37:42.796 "method": "bdev_nvme_attach_controller" 00:37:42.796 }, 00:37:42.796 { 00:37:42.796 "method": "bdev_wait_for_examine" 00:37:42.796 } 00:37:42.796 ] 00:37:42.796 } 00:37:42.796 ] 00:37:42.796 } 00:37:42.796 [2024-07-12 07:48:16.648327] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
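dd_bs_lt_native_bs, starting above, is a negative test: it drives spdk_dd at --bs=2048, below the 4096-byte native block size just discovered, and wraps the call in NOT so the expected --bs error counts as a pass. The helper's exact body lives in autotest_common.sh; a sketch of the idea, with the inversion logic assumed:

    NOT() {
        if "$@"; then
            return 1    # the command unexpectedly succeeded
        fi
        return 0        # failure was the expected outcome
    }
    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61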
00:37:42.796 [2024-07-12 07:48:16.648834] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174030 ] 00:37:43.056 [2024-07-12 07:48:16.806194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:43.056 [2024-07-12 07:48:16.861217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:43.316 [2024-07-12 07:48:17.001356] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:37:43.316 [2024-07-12 07:48:17.001574] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:43.316 [2024-07-12 07:48:17.110345] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:37:43.575 ************************************ 00:37:43.575 END TEST dd_bs_lt_native_bs 00:37:43.575 ************************************ 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:43.575 00:37:43.575 real 0m0.696s 00:37:43.575 user 0m0.422s 00:37:43.575 sys 0m0.229s 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:37:43.575 ************************************ 00:37:43.575 START TEST dd_rw 00:37:43.575 ************************************ 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1121 -- # basic_rw 4096 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:37:43.575 
07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:37:43.575 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:44.143 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:37:44.143 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:37:44.143 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:44.143 07:48:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:44.143 { 00:37:44.143 "subsystems": [ 00:37:44.143 { 00:37:44.143 "subsystem": "bdev", 00:37:44.143 "config": [ 00:37:44.143 { 00:37:44.143 "params": { 00:37:44.143 "trtype": "pcie", 00:37:44.143 "traddr": "0000:00:10.0", 00:37:44.143 "name": "Nvme0" 00:37:44.143 }, 00:37:44.143 "method": "bdev_nvme_attach_controller" 00:37:44.143 }, 00:37:44.143 { 00:37:44.143 "method": "bdev_wait_for_examine" 00:37:44.143 } 00:37:44.143 ] 00:37:44.143 } 00:37:44.143 ] 00:37:44.143 } 00:37:44.143 [2024-07-12 07:48:17.840458] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
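The dd_rw setup traced above builds its block-size grid by left-shifting the native block size and fixes each transfer at count=15 blocks, giving size=61440 bytes at bs=4096. The arithmetic as a sketch:

    native_bs=4096
    qds=(1 64)
    bss=()
    for bs in {0..2}; do
        bss+=($(( native_bs << bs )))    # 4096 8192 16384
    done
    count=15
    size=$(( count * bss[0] ))           # 61440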
00:37:44.143 [2024-07-12 07:48:17.840642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174071 ] 00:37:44.143 [2024-07-12 07:48:17.982105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:44.402 [2024-07-12 07:48:18.026215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:44.662  Copying: 60/60 [kB] (average 19 MBps) 00:37:44.662 00:37:44.662 07:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:37:44.662 07:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:37:44.662 07:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:44.662 07:48:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:44.662 { 00:37:44.662 "subsystems": [ 00:37:44.662 { 00:37:44.662 "subsystem": "bdev", 00:37:44.662 "config": [ 00:37:44.662 { 00:37:44.662 "params": { 00:37:44.662 "trtype": "pcie", 00:37:44.662 "traddr": "0000:00:10.0", 00:37:44.662 "name": "Nvme0" 00:37:44.662 }, 00:37:44.662 "method": "bdev_nvme_attach_controller" 00:37:44.662 }, 00:37:44.662 { 00:37:44.662 "method": "bdev_wait_for_examine" 00:37:44.662 } 00:37:44.662 ] 00:37:44.662 } 00:37:44.662 ] 00:37:44.662 } 00:37:44.921 [2024-07-12 07:48:18.544541] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:44.921 [2024-07-12 07:48:18.544805] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174095 ] 00:37:44.921 [2024-07-12 07:48:18.701518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:44.921 [2024-07-12 07:48:18.754278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:45.440  Copying: 60/60 [kB] (average 14 MBps) 00:37:45.440 00:37:45.440 07:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:37:45.440 07:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:37:45.440 07:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:37:45.440 07:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:37:45.440 07:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:37:45.440 07:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:37:45.440 07:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:37:45.440 07:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:37:45.440 07:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:37:45.440 07:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:45.440 07:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:45.440 { 00:37:45.440 "subsystems": [ 00:37:45.440 { 00:37:45.440 "subsystem": "bdev", 
00:37:45.440 "config": [ 00:37:45.440 { 00:37:45.440 "params": { 00:37:45.440 "trtype": "pcie", 00:37:45.440 "traddr": "0000:00:10.0", 00:37:45.440 "name": "Nvme0" 00:37:45.440 }, 00:37:45.440 "method": "bdev_nvme_attach_controller" 00:37:45.440 }, 00:37:45.440 { 00:37:45.440 "method": "bdev_wait_for_examine" 00:37:45.440 } 00:37:45.440 ] 00:37:45.440 } 00:37:45.440 ] 00:37:45.440 } 00:37:45.440 [2024-07-12 07:48:19.264873] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:45.440 [2024-07-12 07:48:19.265141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174104 ] 00:37:45.700 [2024-07-12 07:48:19.418368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:45.700 [2024-07-12 07:48:19.470240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:46.218  Copying: 1024/1024 [kB] (average 500 MBps) 00:37:46.218 00:37:46.218 07:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:37:46.218 07:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:37:46.218 07:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:37:46.218 07:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:37:46.218 07:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:37:46.218 07:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:37:46.218 07:48:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:46.476 07:48:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:37:46.476 07:48:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:37:46.476 07:48:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:46.476 07:48:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:46.735 { 00:37:46.735 "subsystems": [ 00:37:46.735 { 00:37:46.735 "subsystem": "bdev", 00:37:46.735 "config": [ 00:37:46.735 { 00:37:46.735 "params": { 00:37:46.735 "trtype": "pcie", 00:37:46.735 "traddr": "0000:00:10.0", 00:37:46.735 "name": "Nvme0" 00:37:46.735 }, 00:37:46.735 "method": "bdev_nvme_attach_controller" 00:37:46.735 }, 00:37:46.735 { 00:37:46.735 "method": "bdev_wait_for_examine" 00:37:46.735 } 00:37:46.735 ] 00:37:46.735 } 00:37:46.735 ] 00:37:46.735 } 00:37:46.735 [2024-07-12 07:48:20.404212] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:37:46.735 [2024-07-12 07:48:20.404424] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174126 ] 00:37:46.735 [2024-07-12 07:48:20.546857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:46.735 [2024-07-12 07:48:20.588558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:47.254  Copying: 60/60 [kB] (average 29 MBps) 00:37:47.254 00:37:47.254 07:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:37:47.254 07:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:37:47.254 07:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:47.254 07:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:47.254 { 00:37:47.254 "subsystems": [ 00:37:47.254 { 00:37:47.254 "subsystem": "bdev", 00:37:47.254 "config": [ 00:37:47.254 { 00:37:47.254 "params": { 00:37:47.254 "trtype": "pcie", 00:37:47.254 "traddr": "0000:00:10.0", 00:37:47.254 "name": "Nvme0" 00:37:47.254 }, 00:37:47.254 "method": "bdev_nvme_attach_controller" 00:37:47.254 }, 00:37:47.254 { 00:37:47.254 "method": "bdev_wait_for_examine" 00:37:47.254 } 00:37:47.254 ] 00:37:47.254 } 00:37:47.254 ] 00:37:47.254 } 00:37:47.254 [2024-07-12 07:48:21.085060] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:47.254 [2024-07-12 07:48:21.085362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174144 ] 00:37:47.513 [2024-07-12 07:48:21.238071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:47.513 [2024-07-12 07:48:21.290461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:48.033  Copying: 60/60 [kB] (average 29 MBps) 00:37:48.033 00:37:48.033 07:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:37:48.033 07:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:37:48.033 07:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:37:48.033 07:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:37:48.033 07:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:37:48.033 07:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:37:48.033 07:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:37:48.033 07:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:37:48.033 07:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:37:48.033 07:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:48.033 07:48:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:48.033 { 00:37:48.033 "subsystems": [ 00:37:48.033 { 00:37:48.033 "subsystem": "bdev", 
00:37:48.033 "config": [ 00:37:48.033 { 00:37:48.033 "params": { 00:37:48.033 "trtype": "pcie", 00:37:48.033 "traddr": "0000:00:10.0", 00:37:48.033 "name": "Nvme0" 00:37:48.033 }, 00:37:48.033 "method": "bdev_nvme_attach_controller" 00:37:48.033 }, 00:37:48.033 { 00:37:48.033 "method": "bdev_wait_for_examine" 00:37:48.033 } 00:37:48.033 ] 00:37:48.033 } 00:37:48.033 ] 00:37:48.033 } 00:37:48.033 [2024-07-12 07:48:21.810558] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:48.033 [2024-07-12 07:48:21.810837] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174160 ] 00:37:48.292 [2024-07-12 07:48:21.965177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.292 [2024-07-12 07:48:22.012644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:48.862  Copying: 1024/1024 [kB] (average 500 MBps) 00:37:48.862 00:37:48.862 07:48:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:37:48.862 07:48:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:37:48.862 07:48:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:37:48.862 07:48:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:37:48.862 07:48:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:37:48.862 07:48:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:37:48.862 07:48:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:37:48.862 07:48:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:49.121 07:48:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:37:49.121 07:48:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:37:49.121 07:48:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:49.121 07:48:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:49.121 { 00:37:49.121 "subsystems": [ 00:37:49.121 { 00:37:49.121 "subsystem": "bdev", 00:37:49.122 "config": [ 00:37:49.122 { 00:37:49.122 "params": { 00:37:49.122 "trtype": "pcie", 00:37:49.122 "traddr": "0000:00:10.0", 00:37:49.122 "name": "Nvme0" 00:37:49.122 }, 00:37:49.122 "method": "bdev_nvme_attach_controller" 00:37:49.122 }, 00:37:49.122 { 00:37:49.122 "method": "bdev_wait_for_examine" 00:37:49.122 } 00:37:49.122 ] 00:37:49.122 } 00:37:49.122 ] 00:37:49.122 } 00:37:49.122 [2024-07-12 07:48:22.942515] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:37:49.122 [2024-07-12 07:48:22.942783] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174180 ] 00:37:49.379 [2024-07-12 07:48:23.097296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:49.380 [2024-07-12 07:48:23.145142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:49.897  Copying: 56/56 [kB] (average 27 MBps) 00:37:49.897 00:37:49.898 07:48:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:37:49.898 07:48:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:37:49.898 07:48:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:49.898 07:48:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:49.898 { 00:37:49.898 "subsystems": [ 00:37:49.898 { 00:37:49.898 "subsystem": "bdev", 00:37:49.898 "config": [ 00:37:49.898 { 00:37:49.898 "params": { 00:37:49.898 "trtype": "pcie", 00:37:49.898 "traddr": "0000:00:10.0", 00:37:49.898 "name": "Nvme0" 00:37:49.898 }, 00:37:49.898 "method": "bdev_nvme_attach_controller" 00:37:49.898 }, 00:37:49.898 { 00:37:49.898 "method": "bdev_wait_for_examine" 00:37:49.898 } 00:37:49.898 ] 00:37:49.898 } 00:37:49.898 ] 00:37:49.898 } 00:37:49.898 [2024-07-12 07:48:23.671615] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:49.898 [2024-07-12 07:48:23.671887] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174199 ] 00:37:50.156 [2024-07-12 07:48:23.825621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:50.156 [2024-07-12 07:48:23.872271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:50.415  Copying: 56/56 [kB] (average 54 MBps) 00:37:50.415 00:37:50.415 07:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:37:50.674 07:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:37:50.674 07:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:37:50.674 07:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:37:50.674 07:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:37:50.674 07:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:37:50.674 07:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:37:50.674 07:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:37:50.674 07:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:37:50.674 07:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:50.674 07:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:50.674 [2024-07-12 07:48:24.354520] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 
22.11.4 initialization... 00:37:50.674 [2024-07-12 07:48:24.354746] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174209 ] 00:37:50.674 { 00:37:50.674 "subsystems": [ 00:37:50.674 { 00:37:50.674 "subsystem": "bdev", 00:37:50.674 "config": [ 00:37:50.674 { 00:37:50.674 "params": { 00:37:50.674 "trtype": "pcie", 00:37:50.674 "traddr": "0000:00:10.0", 00:37:50.674 "name": "Nvme0" 00:37:50.674 }, 00:37:50.674 "method": "bdev_nvme_attach_controller" 00:37:50.674 }, 00:37:50.674 { 00:37:50.674 "method": "bdev_wait_for_examine" 00:37:50.674 } 00:37:50.674 ] 00:37:50.674 } 00:37:50.674 ] 00:37:50.674 } 00:37:50.674 [2024-07-12 07:48:24.489900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:50.674 [2024-07-12 07:48:24.532392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:51.192  Copying: 1024/1024 [kB] (average 500 MBps) 00:37:51.192 00:37:51.192 07:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:37:51.192 07:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:37:51.192 07:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:37:51.192 07:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:37:51.192 07:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:37:51.192 07:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:37:51.192 07:48:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:51.760 07:48:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:37:51.760 07:48:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:37:51.760 07:48:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:51.760 07:48:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:51.760 { 00:37:51.760 "subsystems": [ 00:37:51.760 { 00:37:51.760 "subsystem": "bdev", 00:37:51.760 "config": [ 00:37:51.760 { 00:37:51.760 "params": { 00:37:51.760 "trtype": "pcie", 00:37:51.760 "traddr": "0000:00:10.0", 00:37:51.760 "name": "Nvme0" 00:37:51.760 }, 00:37:51.760 "method": "bdev_nvme_attach_controller" 00:37:51.760 }, 00:37:51.760 { 00:37:51.760 "method": "bdev_wait_for_examine" 00:37:51.760 } 00:37:51.760 ] 00:37:51.760 } 00:37:51.760 ] 00:37:51.760 } 00:37:51.760 [2024-07-12 07:48:25.447000] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:37:51.760 [2024-07-12 07:48:25.447274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174229 ] 00:37:51.760 [2024-07-12 07:48:25.601608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:52.041 [2024-07-12 07:48:25.647096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:52.299  Copying: 56/56 [kB] (average 54 MBps) 00:37:52.299 00:37:52.299 07:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:37:52.299 07:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:37:52.299 07:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:52.299 07:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:52.299 { 00:37:52.299 "subsystems": [ 00:37:52.300 { 00:37:52.300 "subsystem": "bdev", 00:37:52.300 "config": [ 00:37:52.300 { 00:37:52.300 "params": { 00:37:52.300 "trtype": "pcie", 00:37:52.300 "traddr": "0000:00:10.0", 00:37:52.300 "name": "Nvme0" 00:37:52.300 }, 00:37:52.300 "method": "bdev_nvme_attach_controller" 00:37:52.300 }, 00:37:52.300 { 00:37:52.300 "method": "bdev_wait_for_examine" 00:37:52.300 } 00:37:52.300 ] 00:37:52.300 } 00:37:52.300 ] 00:37:52.300 } 00:37:52.300 [2024-07-12 07:48:26.149142] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:52.300 [2024-07-12 07:48:26.149426] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174248 ] 00:37:52.558 [2024-07-12 07:48:26.302070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:52.558 [2024-07-12 07:48:26.343267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:53.077  Copying: 56/56 [kB] (average 54 MBps) 00:37:53.077 00:37:53.077 07:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:37:53.077 07:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:37:53.077 07:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:37:53.077 07:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:37:53.077 07:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:37:53.077 07:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:37:53.077 07:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:37:53.077 07:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:37:53.077 07:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:37:53.077 07:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:53.077 07:48:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:53.077 [2024-07-12 07:48:26.824069] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / 
DPDK 22.11.4 initialization... 00:37:53.077 [2024-07-12 07:48:26.824498] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174265 ] 00:37:53.077 { 00:37:53.077 "subsystems": [ 00:37:53.077 { 00:37:53.077 "subsystem": "bdev", 00:37:53.077 "config": [ 00:37:53.077 { 00:37:53.077 "params": { 00:37:53.077 "trtype": "pcie", 00:37:53.077 "traddr": "0000:00:10.0", 00:37:53.077 "name": "Nvme0" 00:37:53.077 }, 00:37:53.077 "method": "bdev_nvme_attach_controller" 00:37:53.077 }, 00:37:53.077 { 00:37:53.077 "method": "bdev_wait_for_examine" 00:37:53.078 } 00:37:53.078 ] 00:37:53.078 } 00:37:53.078 ] 00:37:53.078 } 00:37:53.337 [2024-07-12 07:48:26.967242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:53.337 [2024-07-12 07:48:27.009716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:53.596  Copying: 1024/1024 [kB] (average 1000 MBps) 00:37:53.596 00:37:53.596 07:48:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:37:53.596 07:48:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:37:53.596 07:48:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:37:53.596 07:48:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:37:53.596 07:48:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:37:53.596 07:48:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:37:53.596 07:48:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:37:53.596 07:48:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:54.166 07:48:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:37:54.166 07:48:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:37:54.166 07:48:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:54.166 07:48:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:54.166 { 00:37:54.166 "subsystems": [ 00:37:54.166 { 00:37:54.166 "subsystem": "bdev", 00:37:54.166 "config": [ 00:37:54.166 { 00:37:54.166 "params": { 00:37:54.166 "trtype": "pcie", 00:37:54.166 "traddr": "0000:00:10.0", 00:37:54.166 "name": "Nvme0" 00:37:54.166 }, 00:37:54.166 "method": "bdev_nvme_attach_controller" 00:37:54.166 }, 00:37:54.166 { 00:37:54.166 "method": "bdev_wait_for_examine" 00:37:54.166 } 00:37:54.166 ] 00:37:54.166 } 00:37:54.166 ] 00:37:54.166 } 00:37:54.166 [2024-07-12 07:48:27.864364] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:37:54.166 [2024-07-12 07:48:27.864804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174287 ] 00:37:54.166 [2024-07-12 07:48:28.017882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:54.440 [2024-07-12 07:48:28.065031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:54.733  Copying: 48/48 [kB] (average 46 MBps) 00:37:54.733 00:37:54.733 07:48:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:37:54.733 07:48:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:37:54.733 07:48:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:54.733 07:48:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:54.733 { 00:37:54.733 "subsystems": [ 00:37:54.733 { 00:37:54.733 "subsystem": "bdev", 00:37:54.733 "config": [ 00:37:54.733 { 00:37:54.733 "params": { 00:37:54.733 "trtype": "pcie", 00:37:54.733 "traddr": "0000:00:10.0", 00:37:54.733 "name": "Nvme0" 00:37:54.733 }, 00:37:54.733 "method": "bdev_nvme_attach_controller" 00:37:54.733 }, 00:37:54.733 { 00:37:54.733 "method": "bdev_wait_for_examine" 00:37:54.733 } 00:37:54.733 ] 00:37:54.733 } 00:37:54.733 ] 00:37:54.733 } 00:37:54.733 [2024-07-12 07:48:28.553922] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:54.733 [2024-07-12 07:48:28.554370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174295 ] 00:37:55.012 [2024-07-12 07:48:28.710454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:55.012 [2024-07-12 07:48:28.759198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:55.288  Copying: 48/48 [kB] (average 46 MBps) 00:37:55.288 00:37:55.548 07:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:37:55.548 07:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:37:55.548 07:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:37:55.548 07:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:37:55.548 07:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:37:55.548 07:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:37:55.548 07:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:37:55.548 07:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:37:55.548 07:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:37:55.548 07:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:55.548 07:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:55.548 { 00:37:55.548 "subsystems": [ 00:37:55.548 { 00:37:55.548 "subsystem": "bdev", 
00:37:55.548 "config": [ 00:37:55.548 { 00:37:55.548 "params": { 00:37:55.548 "trtype": "pcie", 00:37:55.548 "traddr": "0000:00:10.0", 00:37:55.548 "name": "Nvme0" 00:37:55.548 }, 00:37:55.548 "method": "bdev_nvme_attach_controller" 00:37:55.548 }, 00:37:55.548 { 00:37:55.548 "method": "bdev_wait_for_examine" 00:37:55.548 } 00:37:55.548 ] 00:37:55.548 } 00:37:55.548 ] 00:37:55.548 } 00:37:55.548 [2024-07-12 07:48:29.254879] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:55.548 [2024-07-12 07:48:29.255437] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174316 ] 00:37:55.548 [2024-07-12 07:48:29.409959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:55.808 [2024-07-12 07:48:29.462348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:56.067  Copying: 1024/1024 [kB] (average 1000 MBps) 00:37:56.067 00:37:56.067 07:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:37:56.067 07:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:37:56.067 07:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:37:56.067 07:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:37:56.067 07:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:37:56.068 07:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:37:56.068 07:48:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:56.637 07:48:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:37:56.637 07:48:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:37:56.637 07:48:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:56.637 07:48:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:56.637 { 00:37:56.637 "subsystems": [ 00:37:56.637 { 00:37:56.637 "subsystem": "bdev", 00:37:56.637 "config": [ 00:37:56.637 { 00:37:56.637 "params": { 00:37:56.637 "trtype": "pcie", 00:37:56.637 "traddr": "0000:00:10.0", 00:37:56.637 "name": "Nvme0" 00:37:56.637 }, 00:37:56.637 "method": "bdev_nvme_attach_controller" 00:37:56.637 }, 00:37:56.637 { 00:37:56.637 "method": "bdev_wait_for_examine" 00:37:56.637 } 00:37:56.637 ] 00:37:56.637 } 00:37:56.637 ] 00:37:56.637 } 00:37:56.637 [2024-07-12 07:48:30.343324] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:37:56.637 [2024-07-12 07:48:30.343779] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174336 ] 00:37:56.637 [2024-07-12 07:48:30.497684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:56.897 [2024-07-12 07:48:30.548796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:57.156  Copying: 48/48 [kB] (average 46 MBps) 00:37:57.156 00:37:57.156 07:48:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:37:57.156 07:48:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:37:57.156 07:48:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:57.156 07:48:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:57.156 { 00:37:57.156 "subsystems": [ 00:37:57.156 { 00:37:57.156 "subsystem": "bdev", 00:37:57.156 "config": [ 00:37:57.156 { 00:37:57.156 "params": { 00:37:57.156 "trtype": "pcie", 00:37:57.156 "traddr": "0000:00:10.0", 00:37:57.156 "name": "Nvme0" 00:37:57.156 }, 00:37:57.156 "method": "bdev_nvme_attach_controller" 00:37:57.156 }, 00:37:57.156 { 00:37:57.156 "method": "bdev_wait_for_examine" 00:37:57.156 } 00:37:57.156 ] 00:37:57.156 } 00:37:57.156 ] 00:37:57.156 } 00:37:57.156 [2024-07-12 07:48:31.036615] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:57.156 [2024-07-12 07:48:31.037054] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174351 ] 00:37:57.416 [2024-07-12 07:48:31.190118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:57.416 [2024-07-12 07:48:31.242560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:57.935  Copying: 48/48 [kB] (average 46 MBps) 00:37:57.935 00:37:57.935 07:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:37:57.935 07:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:37:57.935 07:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:37:57.935 07:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:37:57.935 07:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:37:57.935 07:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:37:57.935 07:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:37:57.935 07:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:37:57.935 07:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:37:57.935 07:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:37:57.935 07:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:57.935 { 00:37:57.935 "subsystems": [ 00:37:57.935 { 00:37:57.935 "subsystem": "bdev", 
00:37:57.935 "config": [ 00:37:57.935 { 00:37:57.935 "params": { 00:37:57.935 "trtype": "pcie", 00:37:57.935 "traddr": "0000:00:10.0", 00:37:57.935 "name": "Nvme0" 00:37:57.935 }, 00:37:57.935 "method": "bdev_nvme_attach_controller" 00:37:57.935 }, 00:37:57.935 { 00:37:57.935 "method": "bdev_wait_for_examine" 00:37:57.935 } 00:37:57.935 ] 00:37:57.935 } 00:37:57.935 ] 00:37:57.935 } 00:37:57.935 [2024-07-12 07:48:31.743861] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:57.935 [2024-07-12 07:48:31.744276] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174371 ] 00:37:58.194 [2024-07-12 07:48:31.897870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:58.194 [2024-07-12 07:48:31.944581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:58.711  Copying: 1024/1024 [kB] (average 1000 MBps) 00:37:58.711 00:37:58.711 ************************************ 00:37:58.711 END TEST dd_rw 00:37:58.711 ************************************ 00:37:58.711 00:37:58.711 real 0m15.048s 00:37:58.711 user 0m9.358s 00:37:58.711 sys 0m4.229s 00:37:58.711 07:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:58.711 07:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:37:58.711 07:48:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:37:58.711 07:48:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:58.711 07:48:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:58.711 07:48:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:37:58.711 ************************************ 00:37:58.711 START TEST dd_rw_offset 00:37:58.711 ************************************ 00:37:58.711 07:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1121 -- # basic_offset 00:37:58.711 07:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:37:58.711 07:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:37:58.711 07:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:37:58.711 07:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:37:58.711 07:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:37:58.712 07:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=q0qgru0qrdydwaqyhixnc8zhgic31484gi2z66to1ny8d6rr45z5lgkmlj51wmvalywie0k06okf65ztr121sh27bg59d6yk3g5v7juf18qagc64rsf45toa0mb6tnfhb41pt683lr1hppnwxlhigbrtp95m7xn3to2hcoundnx5keaok9784ojobx2w7l2wqk688ymwna1e2mnnlw7xo5ycqtkehba9c2plipn0yu2gpkcynq02fmex453fod97mxl99m79gytejxfb4tv5c29ie9mlz55j9gtmug0ejdd68swkwqte5pf5mrxn2stwj0o4rpestsopo1pqfsn5ufcazdiletq4cfssvkfewf5hpyepn6sk084t0cmu0iu0cakxqse5lam9ss91yyagxdur8ax4d0i1n2tlgsf5ypv5x9myhz504fppren00vs54yn8p7brdy6sy6byqe2koqprrvjkv1xkn2gjgtz6uqt7a4x0jfb15deubmvmk6gd257xqo8r9leer7orcgxyba4zjzyqu2wmm658cd3718ey3op9ykeml1eah0b8m1uru4ru2r4zu4ba4ao8aaulplqm4kvdf5g88n4ezw2k6kp4qmdlr1cv8s2beb36ut7fdzympe8yy5y6qxwsemxkupcnxz1exreniw9c2zndslivdpohxxtui6h2b65vzgmd1mpljgvonerejweape1jiw4j5osqlpg5qt4ndmpjaz1k6kyqtaj1sxnksrez44m9kjdgg4s1bvzqy5zgirwx7s5aken6fvcz4pz4ivwnsucbfdec1raqceix4fm3tjl6r0fgezz70dv225a4lmhw0y5xy6tru0t0zbv64y9oqcct3nof0p73iyn491p2iwa1kf284ffpjbwdhkm94qw8n3apfhtppvoblyr4fhjje6216ltbxscpxwsztm09pr67mpmwu07dnzac5pg2g4lg8de0xxwqfrq0bion955dohzljvslxidlxz9whd5b66yzrfmo03iogimlnj81a9d8oaki514vyfvsml5ec7fclbv62ddcxlgawgt0nk5sx56tl9z7wa0uhy1cokd6uco8g0nkdxw0glmti5a89sl6xk8g9j0ugpnsdgbyry82tawyh11iyj5a7vru9implxogh5pzp1x5okq9o9laa769dz04dpri6bo1hh47mmsy28ie000r6dd3kazwob73nzj3wva08yq6oqcroiddr9upmrlzwx8gxl1pebqd9aymqaqu3sgfh6hbed96odqmtphimkqh9p8ha44o276u5qzfva3p56qpzix92iwzmumi2vev56qheyzyytsawo2755c476h1pvd3dymf4jnmq6jruvotnvmgvosejvuyqrtzkkdbm6oeike5nj0zuaa2mz2se2c4eees33s3psu62zu3k7miny8442p7wfqbcm55qatlt89yea8rtso1qit27it6x52rahgmj0di5fw7wtsj2b0rlclzzymdu3qx7062elcwy9vqbfi4ua1w12xadga0mdqz6ybvbovh3l1jiv1t7gx3fevipeqvmp89j8099x0slxnah97z8pf6bb7805yyyofed4eq1xwahri4ifmkjbrafua0w60xc4ys38gzhb9oxfxn24q9joqh5vxufwffp7pgj4u10oagkggis13chov3ya3jsozh917mqlfp6fiqh1kbsfdzi0853tmgbgtdm3zbqh4th1mqmzqq7awsexq5dnw7dja0uab5tghry1qo7d481ud8gfhfwyis6yvozj3wc9a3g3kxedniv7x59d651gich4pdjommw8i8jchx5c02g2w2r1viq4zwbbzx7j5ezuv99qtnrpuva8v7f4pjl6vjdgeo0iuplzgb2yyqub4uauskc0rt1ng688ax7f4wi0zv83abkwffsgq53cqvs77y31fo0gh18m6zts1v9hzg4us4h4j5i111e49ob7yrxltcyhu7ohg7nc2r51tl42tcom6xf89vfdbjcrkyb6r6bq5pcury305yoq6do326i6cn5i8teqar666qjwaaxx63r4agz2mcv95ttr3j2he2yy1t0az6btg1oes7zhgfxt31ybjnbe03yj4fjoiphts55zrfbi4k5d848qpuzrc02wedextlvhqmohyzlfok8gb46wwfgz399x4sfltm9aljleamkj34ut7ezl35snzjayri92955s4chnse9xd04vej3d6uwg8cevbdfmks81thig7cunnnou3tiz7mp3pqj8qa5lzfoq9fci5knpzsjm92v8z3bz6zrdjf4e7698lrhd546c0o4m5yby2q1yb5szyqre5cau8g2nkr8a8kbzhqjd8czp2yrki7fydpesxiev6eag8pgd54k58i6r718nj6kftgomtewl5udrn1bkqesitoz1lfof5tqlo5y0cjpghmtnmznt7xfqaxk47bb4yyxo5iq050udxn1vt33josiz2uvjxvi93xi561jdbmjhgd9fhm9px87m0gkewllee5gehe350p220ryc0wc5kaa349b4883aaw1lo1v4ua5hr2wsua2st0jggkd0nrt2yito69d9g759cqumo7l85yxvvcs9x0n4092ps0azgiuvpvif3l1s2vcce15eglxluv3f17a1la2yu9awjw8up7qatz7mykwdj7sztwo1ra63jlwyurl85gj3y6kcij290jede7qjcog6r5zlxg7zyxe9t0nw5miq9818icyx57hb20hcujc2r8iwcs4cc5dc31ik1fmj059uso77tb8nmsbzh62um6mwlmpcdj9a6h9te7sg4vu5so0oi1t8h496cz6sanrz1ux9kjqgfk1weyu4fdpdd5eoh9u3jju8qih5wfqvmlzttleq837nca9unveeslyc5jqvpkr2h1wit2e4u8ul1hsctuolq5w6ed2ojrdoy8p57fjfmnzjmon17t8twjavbo0j5hdc38y3ukov2fl5ggyl3gfjm24983vhvs4jv4pmw7v2r12royq65f0ne94wnikxp8mlxwbk49yhlyee0j1fop9nx5ehst3eh80b5pvzh2z4xhyx7033jd03djsx4ru4mb7ccyfgrgl3kx9jtf4hrgvzecebejjqpp1z9h1iz51zzq9dp75dq7s9brajl7i826dqnkjdly6ynuayrvmn0p46cpgnopovv4h4l7q4s85z27my10n3ddlyd3ipi0cdqulc1hrp1v4vqyubd2jzf2h9y0a7kg1p69a53tssl56flwhibdu8qjzr1slulo0rah65etzov7h5udjrxuag32vvwd022nxbxzss6c9asnbcgbtg89aulyszby6iwmn8o8t3y4snsoqru1yptu80yt6d7zvv3u1ghq9xpiu02qitexwa7pkhn9cjdasauawltvd9qqyomnv2f05rjpniny7uavkajbjg3zni1hutvkkb6n5zw3el2tk1qs1ki2jvzku4ocst2l2j
0bttupizwznnbz98zsd4wzxf9uom9jmq3a8osq6jtbdt3y4nwtzj60gw693hiuwpiwq95asloxbgmvy3o8wq3tp3lcqykc73liwgotdwlrm1fvtzaf9o585pbodfmhyqwirnydddevlaflhsinnex2y1x6anna6wn1zqmbuztwn1jrxz9klh6ji34vzbpqog39ve2tv5ugnh30kwqra2ah63l8byhieaorjs5pqkztdxaod4k3zrsofpvp6yimfeh73oqzjgwmck7d8qtg6de81ybkdnktzsu29zr5uoq7n7dws8f4xhbwdhk0l3pxcah6h0l0ozpvmm9hueqmv7ygc56k311t96ayfpxbwc5nrjyyer8g11abvct622nizt0o7vz7zaj94ee42vqw308yraieeuxib31y5mqtvyt8ra5hgan3ryuhj7t11pzeen66k8k1zh8ebjibp7mwnzdswgwqzb4hf28ugmlexohho01acxubg9uj9mf716xy6xvutd1ddtvtw28wq8p0q4iajk1hy3mpp71w 00:37:58.712 07:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:37:58.712 07:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:37:58.712 07:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:37:58.712 07:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:37:58.712 { 00:37:58.712 "subsystems": [ 00:37:58.712 { 00:37:58.712 "subsystem": "bdev", 00:37:58.712 "config": [ 00:37:58.712 { 00:37:58.712 "params": { 00:37:58.712 "trtype": "pcie", 00:37:58.712 "traddr": "0000:00:10.0", 00:37:58.712 "name": "Nvme0" 00:37:58.712 }, 00:37:58.712 "method": "bdev_nvme_attach_controller" 00:37:58.712 }, 00:37:58.712 { 00:37:58.712 "method": "bdev_wait_for_examine" 00:37:58.712 } 00:37:58.712 ] 00:37:58.712 } 00:37:58.712 ] 00:37:58.712 } 00:37:58.712 [2024-07-12 07:48:32.574626] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:58.712 [2024-07-12 07:48:32.575056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174400 ] 00:37:58.970 [2024-07-12 07:48:32.722157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:58.970 [2024-07-12 07:48:32.768222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:59.487  Copying: 4096/4096 [B] (average 4000 kBps) 00:37:59.487 00:37:59.487 07:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:37:59.487 07:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:37:59.487 07:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:37:59.487 07:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:37:59.487 { 00:37:59.487 "subsystems": [ 00:37:59.487 { 00:37:59.487 "subsystem": "bdev", 00:37:59.487 "config": [ 00:37:59.487 { 00:37:59.487 "params": { 00:37:59.487 "trtype": "pcie", 00:37:59.487 "traddr": "0000:00:10.0", 00:37:59.487 "name": "Nvme0" 00:37:59.487 }, 00:37:59.487 "method": "bdev_nvme_attach_controller" 00:37:59.487 }, 00:37:59.487 { 00:37:59.487 "method": "bdev_wait_for_examine" 00:37:59.487 } 00:37:59.487 ] 00:37:59.487 } 00:37:59.487 ] 00:37:59.487 } 00:37:59.487 [2024-07-12 07:48:33.265424] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:37:59.487 [2024-07-12 07:48:33.265885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174424 ] 00:37:59.745 [2024-07-12 07:48:33.420875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:59.745 [2024-07-12 07:48:33.466231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:00.315  Copying: 4096/4096 [B] (average 4000 kBps) 00:38:00.315 00:38:00.315 07:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:38:00.315 07:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ q0qgru0qrdydwaqyhixnc8zhgic31484gi2z66to1ny8d6rr45z5lgkmlj51wmvalywie0k06okf65ztr121sh27bg59d6yk3g5v7juf18qagc64rsf45toa0mb6tnfhb41pt683lr1hppnwxlhigbrtp95m7xn3to2hcoundnx5keaok9784ojobx2w7l2wqk688ymwna1e2mnnlw7xo5ycqtkehba9c2plipn0yu2gpkcynq02fmex453fod97mxl99m79gytejxfb4tv5c29ie9mlz55j9gtmug0ejdd68swkwqte5pf5mrxn2stwj0o4rpestsopo1pqfsn5ufcazdiletq4cfssvkfewf5hpyepn6sk084t0cmu0iu0cakxqse5lam9ss91yyagxdur8ax4d0i1n2tlgsf5ypv5x9myhz504fppren00vs54yn8p7brdy6sy6byqe2koqprrvjkv1xkn2gjgtz6uqt7a4x0jfb15deubmvmk6gd257xqo8r9leer7orcgxyba4zjzyqu2wmm658cd3718ey3op9ykeml1eah0b8m1uru4ru2r4zu4ba4ao8aaulplqm4kvdf5g88n4ezw2k6kp4qmdlr1cv8s2beb36ut7fdzympe8yy5y6qxwsemxkupcnxz1exreniw9c2zndslivdpohxxtui6h2b65vzgmd1mpljgvonerejweape1jiw4j5osqlpg5qt4ndmpjaz1k6kyqtaj1sxnksrez44m9kjdgg4s1bvzqy5zgirwx7s5aken6fvcz4pz4ivwnsucbfdec1raqceix4fm3tjl6r0fgezz70dv225a4lmhw0y5xy6tru0t0zbv64y9oqcct3nof0p73iyn491p2iwa1kf284ffpjbwdhkm94qw8n3apfhtppvoblyr4fhjje6216ltbxscpxwsztm09pr67mpmwu07dnzac5pg2g4lg8de0xxwqfrq0bion955dohzljvslxidlxz9whd5b66yzrfmo03iogimlnj81a9d8oaki514vyfvsml5ec7fclbv62ddcxlgawgt0nk5sx56tl9z7wa0uhy1cokd6uco8g0nkdxw0glmti5a89sl6xk8g9j0ugpnsdgbyry82tawyh11iyj5a7vru9implxogh5pzp1x5okq9o9laa769dz04dpri6bo1hh47mmsy28ie000r6dd3kazwob73nzj3wva08yq6oqcroiddr9upmrlzwx8gxl1pebqd9aymqaqu3sgfh6hbed96odqmtphimkqh9p8ha44o276u5qzfva3p56qpzix92iwzmumi2vev56qheyzyytsawo2755c476h1pvd3dymf4jnmq6jruvotnvmgvosejvuyqrtzkkdbm6oeike5nj0zuaa2mz2se2c4eees33s3psu62zu3k7miny8442p7wfqbcm55qatlt89yea8rtso1qit27it6x52rahgmj0di5fw7wtsj2b0rlclzzymdu3qx7062elcwy9vqbfi4ua1w12xadga0mdqz6ybvbovh3l1jiv1t7gx3fevipeqvmp89j8099x0slxnah97z8pf6bb7805yyyofed4eq1xwahri4ifmkjbrafua0w60xc4ys38gzhb9oxfxn24q9joqh5vxufwffp7pgj4u10oagkggis13chov3ya3jsozh917mqlfp6fiqh1kbsfdzi0853tmgbgtdm3zbqh4th1mqmzqq7awsexq5dnw7dja0uab5tghry1qo7d481ud8gfhfwyis6yvozj3wc9a3g3kxedniv7x59d651gich4pdjommw8i8jchx5c02g2w2r1viq4zwbbzx7j5ezuv99qtnrpuva8v7f4pjl6vjdgeo0iuplzgb2yyqub4uauskc0rt1ng688ax7f4wi0zv83abkwffsgq53cqvs77y31fo0gh18m6zts1v9hzg4us4h4j5i111e49ob7yrxltcyhu7ohg7nc2r51tl42tcom6xf89vfdbjcrkyb6r6bq5pcury305yoq6do326i6cn5i8teqar666qjwaaxx63r4agz2mcv95ttr3j2he2yy1t0az6btg1oes7zhgfxt31ybjnbe03yj4fjoiphts55zrfbi4k5d848qpuzrc02wedextlvhqmohyzlfok8gb46wwfgz399x4sfltm9aljleamkj34ut7ezl35snzjayri92955s4chnse9xd04vej3d6uwg8cevbdfmks81thig7cunnnou3tiz7mp3pqj8qa5lzfoq9fci5knpzsjm92v8z3bz6zrdjf4e7698lrhd546c0o4m5yby2q1yb5szyqre5cau8g2nkr8a8kbzhqjd8czp2yrki7fydpesxiev6eag8pgd54k58i6r718nj6kftgomtewl5udrn1bkqesitoz1lfof5tqlo5y0cjpghmtnmznt7xfqaxk47bb4yyxo5iq050udxn1vt33josiz2uvjxvi93xi561jdbmjhgd9fhm9px87m0gkewllee5gehe350p220ryc0wc5kaa349b4883aaw1lo1v4ua5hr2wsua2st0jggkd0nrt2yito69d9g759cqumo7l85yxvvcs9x0n4092ps0azgiuvpvif3l1s2vcce15eglxluv3f17a1la2yu9awjw8up7qatz7mykwdj7sztwo1ra63jlwyurl85gj3y6kcij290jede7qjcog6r5zlxg7zyxe9t0nw5m
iq9818icyx57hb20hcujc2r8iwcs4cc5dc31ik1fmj059uso77tb8nmsbzh62um6mwlmpcdj9a6h9te7sg4vu5so0oi1t8h496cz6sanrz1ux9kjqgfk1weyu4fdpdd5eoh9u3jju8qih5wfqvmlzttleq837nca9unveeslyc5jqvpkr2h1wit2e4u8ul1hsctuolq5w6ed2ojrdoy8p57fjfmnzjmon17t8twjavbo0j5hdc38y3ukov2fl5ggyl3gfjm24983vhvs4jv4pmw7v2r12royq65f0ne94wnikxp8mlxwbk49yhlyee0j1fop9nx5ehst3eh80b5pvzh2z4xhyx7033jd03djsx4ru4mb7ccyfgrgl3kx9jtf4hrgvzecebejjqpp1z9h1iz51zzq9dp75dq7s9brajl7i826dqnkjdly6ynuayrvmn0p46cpgnopovv4h4l7q4s85z27my10n3ddlyd3ipi0cdqulc1hrp1v4vqyubd2jzf2h9y0a7kg1p69a53tssl56flwhibdu8qjzr1slulo0rah65etzov7h5udjrxuag32vvwd022nxbxzss6c9asnbcgbtg89aulyszby6iwmn8o8t3y4snsoqru1yptu80yt6d7zvv3u1ghq9xpiu02qitexwa7pkhn9cjdasauawltvd9qqyomnv2f05rjpniny7uavkajbjg3zni1hutvkkb6n5zw3el2tk1qs1ki2jvzku4ocst2l2j0bttupizwznnbz98zsd4wzxf9uom9jmq3a8osq6jtbdt3y4nwtzj60gw693hiuwpiwq95asloxbgmvy3o8wq3tp3lcqykc73liwgotdwlrm1fvtzaf9o585pbodfmhyqwirnydddevlaflhsinnex2y1x6anna6wn1zqmbuztwn1jrxz9klh6ji34vzbpqog39ve2tv5ugnh30kwqra2ah63l8byhieaorjs5pqkztdxaod4k3zrsofpvp6yimfeh73oqzjgwmck7d8qtg6de81ybkdnktzsu29zr5uoq7n7dws8f4xhbwdhk0l3pxcah6h0l0ozpvmm9hueqmv7ygc56k311t96ayfpxbwc5nrjyyer8g11abvct622nizt0o7vz7zaj94ee42vqw308yraieeuxib31y5mqtvyt8ra5hgan3ryuhj7t11pzeen66k8k1zh8ebjibp7mwnzdswgwqzb4hf28ugmlexohho01acxubg9uj9mf716xy6xvutd1ddtvtw28wq8p0q4iajk1hy3mpp71w == \q\0\q\g\r\u\0\q\r\d\y\d\w\a\q\y\h\i\x\n\c\8\z\h\g\i\c\3\1\4\8\4\g\i\2\z\6\6\t\o\1\n\y\8\d\6\r\r\4\5\z\5\l\g\k\m\l\j\5\1\w\m\v\a\l\y\w\i\e\0\k\0\6\o\k\f\6\5\z\t\r\1\2\1\s\h\2\7\b\g\5\9\d\6\y\k\3\g\5\v\7\j\u\f\1\8\q\a\g\c\6\4\r\s\f\4\5\t\o\a\0\m\b\6\t\n\f\h\b\4\1\p\t\6\8\3\l\r\1\h\p\p\n\w\x\l\h\i\g\b\r\t\p\9\5\m\7\x\n\3\t\o\2\h\c\o\u\n\d\n\x\5\k\e\a\o\k\9\7\8\4\o\j\o\b\x\2\w\7\l\2\w\q\k\6\8\8\y\m\w\n\a\1\e\2\m\n\n\l\w\7\x\o\5\y\c\q\t\k\e\h\b\a\9\c\2\p\l\i\p\n\0\y\u\2\g\p\k\c\y\n\q\0\2\f\m\e\x\4\5\3\f\o\d\9\7\m\x\l\9\9\m\7\9\g\y\t\e\j\x\f\b\4\t\v\5\c\2\9\i\e\9\m\l\z\5\5\j\9\g\t\m\u\g\0\e\j\d\d\6\8\s\w\k\w\q\t\e\5\p\f\5\m\r\x\n\2\s\t\w\j\0\o\4\r\p\e\s\t\s\o\p\o\1\p\q\f\s\n\5\u\f\c\a\z\d\i\l\e\t\q\4\c\f\s\s\v\k\f\e\w\f\5\h\p\y\e\p\n\6\s\k\0\8\4\t\0\c\m\u\0\i\u\0\c\a\k\x\q\s\e\5\l\a\m\9\s\s\9\1\y\y\a\g\x\d\u\r\8\a\x\4\d\0\i\1\n\2\t\l\g\s\f\5\y\p\v\5\x\9\m\y\h\z\5\0\4\f\p\p\r\e\n\0\0\v\s\5\4\y\n\8\p\7\b\r\d\y\6\s\y\6\b\y\q\e\2\k\o\q\p\r\r\v\j\k\v\1\x\k\n\2\g\j\g\t\z\6\u\q\t\7\a\4\x\0\j\f\b\1\5\d\e\u\b\m\v\m\k\6\g\d\2\5\7\x\q\o\8\r\9\l\e\e\r\7\o\r\c\g\x\y\b\a\4\z\j\z\y\q\u\2\w\m\m\6\5\8\c\d\3\7\1\8\e\y\3\o\p\9\y\k\e\m\l\1\e\a\h\0\b\8\m\1\u\r\u\4\r\u\2\r\4\z\u\4\b\a\4\a\o\8\a\a\u\l\p\l\q\m\4\k\v\d\f\5\g\8\8\n\4\e\z\w\2\k\6\k\p\4\q\m\d\l\r\1\c\v\8\s\2\b\e\b\3\6\u\t\7\f\d\z\y\m\p\e\8\y\y\5\y\6\q\x\w\s\e\m\x\k\u\p\c\n\x\z\1\e\x\r\e\n\i\w\9\c\2\z\n\d\s\l\i\v\d\p\o\h\x\x\t\u\i\6\h\2\b\6\5\v\z\g\m\d\1\m\p\l\j\g\v\o\n\e\r\e\j\w\e\a\p\e\1\j\i\w\4\j\5\o\s\q\l\p\g\5\q\t\4\n\d\m\p\j\a\z\1\k\6\k\y\q\t\a\j\1\s\x\n\k\s\r\e\z\4\4\m\9\k\j\d\g\g\4\s\1\b\v\z\q\y\5\z\g\i\r\w\x\7\s\5\a\k\e\n\6\f\v\c\z\4\p\z\4\i\v\w\n\s\u\c\b\f\d\e\c\1\r\a\q\c\e\i\x\4\f\m\3\t\j\l\6\r\0\f\g\e\z\z\7\0\d\v\2\2\5\a\4\l\m\h\w\0\y\5\x\y\6\t\r\u\0\t\0\z\b\v\6\4\y\9\o\q\c\c\t\3\n\o\f\0\p\7\3\i\y\n\4\9\1\p\2\i\w\a\1\k\f\2\8\4\f\f\p\j\b\w\d\h\k\m\9\4\q\w\8\n\3\a\p\f\h\t\p\p\v\o\b\l\y\r\4\f\h\j\j\e\6\2\1\6\l\t\b\x\s\c\p\x\w\s\z\t\m\0\9\p\r\6\7\m\p\m\w\u\0\7\d\n\z\a\c\5\p\g\2\g\4\l\g\8\d\e\0\x\x\w\q\f\r\q\0\b\i\o\n\9\5\5\d\o\h\z\l\j\v\s\l\x\i\d\l\x\z\9\w\h\d\5\b\6\6\y\z\r\f\m\o\0\3\i\o\g\i\m\l\n\j\8\1\a\9\d\8\o\a\k\i\5\1\4\v\y\f\v\s\m\l\5\e\c\7\f\c\l\b\v\6\2\d\d\c\x\l\g\a\w\g\t\0\n\k\5\s\x\5\6\t\l\9\z\7\w\a\0\u\h\y\1\c\o\k\d\6\u\c\o\8\g\0\n\k\d\x\w\0\g\l\m\t\i\
5\a\8\9\s\l\6\x\k\8\g\9\j\0\u\g\p\n\s\d\g\b\y\r\y\8\2\t\a\w\y\h\1\1\i\y\j\5\a\7\v\r\u\9\i\m\p\l\x\o\g\h\5\p\z\p\1\x\5\o\k\q\9\o\9\l\a\a\7\6\9\d\z\0\4\d\p\r\i\6\b\o\1\h\h\4\7\m\m\s\y\2\8\i\e\0\0\0\r\6\d\d\3\k\a\z\w\o\b\7\3\n\z\j\3\w\v\a\0\8\y\q\6\o\q\c\r\o\i\d\d\r\9\u\p\m\r\l\z\w\x\8\g\x\l\1\p\e\b\q\d\9\a\y\m\q\a\q\u\3\s\g\f\h\6\h\b\e\d\9\6\o\d\q\m\t\p\h\i\m\k\q\h\9\p\8\h\a\4\4\o\2\7\6\u\5\q\z\f\v\a\3\p\5\6\q\p\z\i\x\9\2\i\w\z\m\u\m\i\2\v\e\v\5\6\q\h\e\y\z\y\y\t\s\a\w\o\2\7\5\5\c\4\7\6\h\1\p\v\d\3\d\y\m\f\4\j\n\m\q\6\j\r\u\v\o\t\n\v\m\g\v\o\s\e\j\v\u\y\q\r\t\z\k\k\d\b\m\6\o\e\i\k\e\5\n\j\0\z\u\a\a\2\m\z\2\s\e\2\c\4\e\e\e\s\3\3\s\3\p\s\u\6\2\z\u\3\k\7\m\i\n\y\8\4\4\2\p\7\w\f\q\b\c\m\5\5\q\a\t\l\t\8\9\y\e\a\8\r\t\s\o\1\q\i\t\2\7\i\t\6\x\5\2\r\a\h\g\m\j\0\d\i\5\f\w\7\w\t\s\j\2\b\0\r\l\c\l\z\z\y\m\d\u\3\q\x\7\0\6\2\e\l\c\w\y\9\v\q\b\f\i\4\u\a\1\w\1\2\x\a\d\g\a\0\m\d\q\z\6\y\b\v\b\o\v\h\3\l\1\j\i\v\1\t\7\g\x\3\f\e\v\i\p\e\q\v\m\p\8\9\j\8\0\9\9\x\0\s\l\x\n\a\h\9\7\z\8\p\f\6\b\b\7\8\0\5\y\y\y\o\f\e\d\4\e\q\1\x\w\a\h\r\i\4\i\f\m\k\j\b\r\a\f\u\a\0\w\6\0\x\c\4\y\s\3\8\g\z\h\b\9\o\x\f\x\n\2\4\q\9\j\o\q\h\5\v\x\u\f\w\f\f\p\7\p\g\j\4\u\1\0\o\a\g\k\g\g\i\s\1\3\c\h\o\v\3\y\a\3\j\s\o\z\h\9\1\7\m\q\l\f\p\6\f\i\q\h\1\k\b\s\f\d\z\i\0\8\5\3\t\m\g\b\g\t\d\m\3\z\b\q\h\4\t\h\1\m\q\m\z\q\q\7\a\w\s\e\x\q\5\d\n\w\7\d\j\a\0\u\a\b\5\t\g\h\r\y\1\q\o\7\d\4\8\1\u\d\8\g\f\h\f\w\y\i\s\6\y\v\o\z\j\3\w\c\9\a\3\g\3\k\x\e\d\n\i\v\7\x\5\9\d\6\5\1\g\i\c\h\4\p\d\j\o\m\m\w\8\i\8\j\c\h\x\5\c\0\2\g\2\w\2\r\1\v\i\q\4\z\w\b\b\z\x\7\j\5\e\z\u\v\9\9\q\t\n\r\p\u\v\a\8\v\7\f\4\p\j\l\6\v\j\d\g\e\o\0\i\u\p\l\z\g\b\2\y\y\q\u\b\4\u\a\u\s\k\c\0\r\t\1\n\g\6\8\8\a\x\7\f\4\w\i\0\z\v\8\3\a\b\k\w\f\f\s\g\q\5\3\c\q\v\s\7\7\y\3\1\f\o\0\g\h\1\8\m\6\z\t\s\1\v\9\h\z\g\4\u\s\4\h\4\j\5\i\1\1\1\e\4\9\o\b\7\y\r\x\l\t\c\y\h\u\7\o\h\g\7\n\c\2\r\5\1\t\l\4\2\t\c\o\m\6\x\f\8\9\v\f\d\b\j\c\r\k\y\b\6\r\6\b\q\5\p\c\u\r\y\3\0\5\y\o\q\6\d\o\3\2\6\i\6\c\n\5\i\8\t\e\q\a\r\6\6\6\q\j\w\a\a\x\x\6\3\r\4\a\g\z\2\m\c\v\9\5\t\t\r\3\j\2\h\e\2\y\y\1\t\0\a\z\6\b\t\g\1\o\e\s\7\z\h\g\f\x\t\3\1\y\b\j\n\b\e\0\3\y\j\4\f\j\o\i\p\h\t\s\5\5\z\r\f\b\i\4\k\5\d\8\4\8\q\p\u\z\r\c\0\2\w\e\d\e\x\t\l\v\h\q\m\o\h\y\z\l\f\o\k\8\g\b\4\6\w\w\f\g\z\3\9\9\x\4\s\f\l\t\m\9\a\l\j\l\e\a\m\k\j\3\4\u\t\7\e\z\l\3\5\s\n\z\j\a\y\r\i\9\2\9\5\5\s\4\c\h\n\s\e\9\x\d\0\4\v\e\j\3\d\6\u\w\g\8\c\e\v\b\d\f\m\k\s\8\1\t\h\i\g\7\c\u\n\n\n\o\u\3\t\i\z\7\m\p\3\p\q\j\8\q\a\5\l\z\f\o\q\9\f\c\i\5\k\n\p\z\s\j\m\9\2\v\8\z\3\b\z\6\z\r\d\j\f\4\e\7\6\9\8\l\r\h\d\5\4\6\c\0\o\4\m\5\y\b\y\2\q\1\y\b\5\s\z\y\q\r\e\5\c\a\u\8\g\2\n\k\r\8\a\8\k\b\z\h\q\j\d\8\c\z\p\2\y\r\k\i\7\f\y\d\p\e\s\x\i\e\v\6\e\a\g\8\p\g\d\5\4\k\5\8\i\6\r\7\1\8\n\j\6\k\f\t\g\o\m\t\e\w\l\5\u\d\r\n\1\b\k\q\e\s\i\t\o\z\1\l\f\o\f\5\t\q\l\o\5\y\0\c\j\p\g\h\m\t\n\m\z\n\t\7\x\f\q\a\x\k\4\7\b\b\4\y\y\x\o\5\i\q\0\5\0\u\d\x\n\1\v\t\3\3\j\o\s\i\z\2\u\v\j\x\v\i\9\3\x\i\5\6\1\j\d\b\m\j\h\g\d\9\f\h\m\9\p\x\8\7\m\0\g\k\e\w\l\l\e\e\5\g\e\h\e\3\5\0\p\2\2\0\r\y\c\0\w\c\5\k\a\a\3\4\9\b\4\8\8\3\a\a\w\1\l\o\1\v\4\u\a\5\h\r\2\w\s\u\a\2\s\t\0\j\g\g\k\d\0\n\r\t\2\y\i\t\o\6\9\d\9\g\7\5\9\c\q\u\m\o\7\l\8\5\y\x\v\v\c\s\9\x\0\n\4\0\9\2\p\s\0\a\z\g\i\u\v\p\v\i\f\3\l\1\s\2\v\c\c\e\1\5\e\g\l\x\l\u\v\3\f\1\7\a\1\l\a\2\y\u\9\a\w\j\w\8\u\p\7\q\a\t\z\7\m\y\k\w\d\j\7\s\z\t\w\o\1\r\a\6\3\j\l\w\y\u\r\l\8\5\g\j\3\y\6\k\c\i\j\2\9\0\j\e\d\e\7\q\j\c\o\g\6\r\5\z\l\x\g\7\z\y\x\e\9\t\0\n\w\5\m\i\q\9\8\1\8\i\c\y\x\5\7\h\b\2\0\h\c\u\j\c\2\r\8\i\w\c\s\4\c\c\5\d\c\3\1\i\k\1\f\m\j\0\5\9\u\s\o\7\7\t\b\8\n\m\s\b\z\h\6\2\u\m\6\m\w\l\m\p\c\d\j\9\a\6\h\9\t\e\7\s\g\4\v\u\5\s\o\0\o\i\1\t\8\h\4\9\6\c\z\6\s\a\n\r\z\1\u\x\9\k
\j\q\g\f\k\1\w\e\y\u\4\f\d\p\d\d\5\e\o\h\9\u\3\j\j\u\8\q\i\h\5\w\f\q\v\m\l\z\t\t\l\e\q\8\3\7\n\c\a\9\u\n\v\e\e\s\l\y\c\5\j\q\v\p\k\r\2\h\1\w\i\t\2\e\4\u\8\u\l\1\h\s\c\t\u\o\l\q\5\w\6\e\d\2\o\j\r\d\o\y\8\p\5\7\f\j\f\m\n\z\j\m\o\n\1\7\t\8\t\w\j\a\v\b\o\0\j\5\h\d\c\3\8\y\3\u\k\o\v\2\f\l\5\g\g\y\l\3\g\f\j\m\2\4\9\8\3\v\h\v\s\4\j\v\4\p\m\w\7\v\2\r\1\2\r\o\y\q\6\5\f\0\n\e\9\4\w\n\i\k\x\p\8\m\l\x\w\b\k\4\9\y\h\l\y\e\e\0\j\1\f\o\p\9\n\x\5\e\h\s\t\3\e\h\8\0\b\5\p\v\z\h\2\z\4\x\h\y\x\7\0\3\3\j\d\0\3\d\j\s\x\4\r\u\4\m\b\7\c\c\y\f\g\r\g\l\3\k\x\9\j\t\f\4\h\r\g\v\z\e\c\e\b\e\j\j\q\p\p\1\z\9\h\1\i\z\5\1\z\z\q\9\d\p\7\5\d\q\7\s\9\b\r\a\j\l\7\i\8\2\6\d\q\n\k\j\d\l\y\6\y\n\u\a\y\r\v\m\n\0\p\4\6\c\p\g\n\o\p\o\v\v\4\h\4\l\7\q\4\s\8\5\z\2\7\m\y\1\0\n\3\d\d\l\y\d\3\i\p\i\0\c\d\q\u\l\c\1\h\r\p\1\v\4\v\q\y\u\b\d\2\j\z\f\2\h\9\y\0\a\7\k\g\1\p\6\9\a\5\3\t\s\s\l\5\6\f\l\w\h\i\b\d\u\8\q\j\z\r\1\s\l\u\l\o\0\r\a\h\6\5\e\t\z\o\v\7\h\5\u\d\j\r\x\u\a\g\3\2\v\v\w\d\0\2\2\n\x\b\x\z\s\s\6\c\9\a\s\n\b\c\g\b\t\g\8\9\a\u\l\y\s\z\b\y\6\i\w\m\n\8\o\8\t\3\y\4\s\n\s\o\q\r\u\1\y\p\t\u\8\0\y\t\6\d\7\z\v\v\3\u\1\g\h\q\9\x\p\i\u\0\2\q\i\t\e\x\w\a\7\p\k\h\n\9\c\j\d\a\s\a\u\a\w\l\t\v\d\9\q\q\y\o\m\n\v\2\f\0\5\r\j\p\n\i\n\y\7\u\a\v\k\a\j\b\j\g\3\z\n\i\1\h\u\t\v\k\k\b\6\n\5\z\w\3\e\l\2\t\k\1\q\s\1\k\i\2\j\v\z\k\u\4\o\c\s\t\2\l\2\j\0\b\t\t\u\p\i\z\w\z\n\n\b\z\9\8\z\s\d\4\w\z\x\f\9\u\o\m\9\j\m\q\3\a\8\o\s\q\6\j\t\b\d\t\3\y\4\n\w\t\z\j\6\0\g\w\6\9\3\h\i\u\w\p\i\w\q\9\5\a\s\l\o\x\b\g\m\v\y\3\o\8\w\q\3\t\p\3\l\c\q\y\k\c\7\3\l\i\w\g\o\t\d\w\l\r\m\1\f\v\t\z\a\f\9\o\5\8\5\p\b\o\d\f\m\h\y\q\w\i\r\n\y\d\d\d\e\v\l\a\f\l\h\s\i\n\n\e\x\2\y\1\x\6\a\n\n\a\6\w\n\1\z\q\m\b\u\z\t\w\n\1\j\r\x\z\9\k\l\h\6\j\i\3\4\v\z\b\p\q\o\g\3\9\v\e\2\t\v\5\u\g\n\h\3\0\k\w\q\r\a\2\a\h\6\3\l\8\b\y\h\i\e\a\o\r\j\s\5\p\q\k\z\t\d\x\a\o\d\4\k\3\z\r\s\o\f\p\v\p\6\y\i\m\f\e\h\7\3\o\q\z\j\g\w\m\c\k\7\d\8\q\t\g\6\d\e\8\1\y\b\k\d\n\k\t\z\s\u\2\9\z\r\5\u\o\q\7\n\7\d\w\s\8\f\4\x\h\b\w\d\h\k\0\l\3\p\x\c\a\h\6\h\0\l\0\o\z\p\v\m\m\9\h\u\e\q\m\v\7\y\g\c\5\6\k\3\1\1\t\9\6\a\y\f\p\x\b\w\c\5\n\r\j\y\y\e\r\8\g\1\1\a\b\v\c\t\6\2\2\n\i\z\t\0\o\7\v\z\7\z\a\j\9\4\e\e\4\2\v\q\w\3\0\8\y\r\a\i\e\e\u\x\i\b\3\1\y\5\m\q\t\v\y\t\8\r\a\5\h\g\a\n\3\r\y\u\h\j\7\t\1\1\p\z\e\e\n\6\6\k\8\k\1\z\h\8\e\b\j\i\b\p\7\m\w\n\z\d\s\w\g\w\q\z\b\4\h\f\2\8\u\g\m\l\e\x\o\h\h\o\0\1\a\c\x\u\b\g\9\u\j\9\m\f\7\1\6\x\y\6\x\v\u\t\d\1\d\d\t\v\t\w\2\8\w\q\8\p\0\q\4\i\a\j\k\1\h\y\3\m\p\p\7\1\w ]] 00:38:00.315 00:38:00.315 real 0m1.479s 00:38:00.315 user 0m0.843s 00:38:00.315 sys 0m0.445s 00:38:00.315 07:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:00.315 07:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:38:00.315 ************************************ 00:38:00.315 END TEST dd_rw_offset 00:38:00.315 ************************************ 00:38:00.315 07:48:33 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:38:00.315 07:48:33 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:38:00.316 07:48:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:00.316 07:48:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:00.316 07:48:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:38:00.316 07:48:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:00.316 07:48:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:38:00.316 07:48:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:00.316 07:48:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:38:00.316 07:48:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:00.316 07:48:33 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:38:00.316 { 00:38:00.316 "subsystems": [ 00:38:00.316 { 00:38:00.316 "subsystem": "bdev", 00:38:00.316 "config": [ 00:38:00.316 { 00:38:00.316 "params": { 00:38:00.316 "trtype": "pcie", 00:38:00.316 "traddr": "0000:00:10.0", 00:38:00.316 "name": "Nvme0" 00:38:00.316 }, 00:38:00.316 "method": "bdev_nvme_attach_controller" 00:38:00.316 }, 00:38:00.316 { 00:38:00.316 "method": "bdev_wait_for_examine" 00:38:00.316 } 00:38:00.316 ] 00:38:00.316 } 00:38:00.316 ] 00:38:00.316 } 00:38:00.316 [2024-07-12 07:48:34.041602] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:00.316 [2024-07-12 07:48:34.041775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174455 ] 00:38:00.316 [2024-07-12 07:48:34.181479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:00.575 [2024-07-12 07:48:34.222825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:00.833  Copying: 1024/1024 [kB] (average 1000 MBps) 00:38:00.833 00:38:00.833 07:48:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:00.833 ************************************ 00:38:00.833 END TEST spdk_dd_basic_rw 00:38:00.833 ************************************ 00:38:00.833 00:38:00.833 real 0m18.499s 00:38:00.833 user 0m11.259s 00:38:00.833 sys 0m5.385s 00:38:00.833 07:48:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:00.833 07:48:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:38:00.833 07:48:34 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:38:00.833 07:48:34 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:00.833 07:48:34 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:00.833 07:48:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:38:01.092 ************************************ 00:38:01.092 START TEST spdk_dd_posix 00:38:01.092 ************************************ 00:38:01.092 07:48:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:38:01.092 * Looking for test storage... 
00:38:01.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:38:01.092 07:48:34 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:01.092 07:48:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:01.092 07:48:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:01.092 07:48:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:01.092 07:48:34 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:01.092 07:48:34 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:01.092 07:48:34 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:01.092 07:48:34 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:38:01.093 * First test run, using AIO 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:01.093 ************************************ 00:38:01.093 START TEST dd_flag_append 00:38:01.093 ************************************ 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1121 -- # append 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=m5fdyqy4ovppfwvp3qdwux8d7zc7mwxc 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=esdk77mg9gfdesues3gdnoyaiaslwcb9 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s m5fdyqy4ovppfwvp3qdwux8d7zc7mwxc 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s esdk77mg9gfdesues3gdnoyaiaslwcb9 00:38:01.093 07:48:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:38:01.093 [2024-07-12 07:48:34.926453] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
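The append run that follows reduces to a short sequence; this is a minimal sketch of what dd/posix.sh does here (spdk_dd abbreviates the build/bin path used throughout this log, the dump variables hold the two gen_bytes strings, and the inline check stands in for the harness's escaped-pattern compare):

    printf %s "$dump0" > dd.dump0        # 32 random bytes, m5fdyqy4... in this run
    printf %s "$dump1" > dd.dump1        # 32 random bytes, esdk77mg... in this run
    spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append
    [[ "$(< dd.dump1)" == "$dump1$dump0" ]]   # dump0's bytes must land after dump1's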
00:38:01.093 [2024-07-12 07:48:34.926943] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174527 ] 00:38:01.352 [2024-07-12 07:48:35.081261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:01.352 [2024-07-12 07:48:35.134382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:01.611  Copying: 32/32 [B] (average 31 kBps) 00:38:01.611 00:38:01.611 ************************************ 00:38:01.611 END TEST dd_flag_append 00:38:01.611 ************************************ 00:38:01.611 07:48:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ esdk77mg9gfdesues3gdnoyaiaslwcb9m5fdyqy4ovppfwvp3qdwux8d7zc7mwxc == \e\s\d\k\7\7\m\g\9\g\f\d\e\s\u\e\s\3\g\d\n\o\y\a\i\a\s\l\w\c\b\9\m\5\f\d\y\q\y\4\o\v\p\p\f\w\v\p\3\q\d\w\u\x\8\d\7\z\c\7\m\w\x\c ]] 00:38:01.611 00:38:01.611 real 0m0.633s 00:38:01.611 user 0m0.280s 00:38:01.611 sys 0m0.213s 00:38:01.611 07:48:35 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:01.611 07:48:35 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:38:01.871 07:48:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:38:01.871 07:48:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:01.871 07:48:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:01.871 07:48:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:01.871 ************************************ 00:38:01.871 START TEST dd_flag_directory 00:38:01.871 ************************************ 00:38:01.871 07:48:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1121 -- # directory 00:38:01.871 07:48:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:01.871 07:48:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:38:01.871 07:48:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:01.871 07:48:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:01.871 07:48:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:01.871 07:48:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:01.871 07:48:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:01.871 07:48:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:01.871 07:48:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:01.871 07:48:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:01.871 07:48:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:01.871 07:48:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:01.871 [2024-07-12 07:48:35.624946] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:01.871 [2024-07-12 07:48:35.625481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174560 ] 00:38:02.129 [2024-07-12 07:48:35.778306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:02.129 [2024-07-12 07:48:35.821295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:02.129 [2024-07-12 07:48:35.882735] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:02.129 [2024-07-12 07:48:35.883035] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:02.129 [2024-07-12 07:48:35.883104] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:02.129 [2024-07-12 07:48:35.985485] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:02.388 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:38:02.388 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:02.388 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:38:02.389 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:38:02.389 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:38:02.389 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:02.389 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:02.389 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:38:02.389 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:02.389 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:02.389 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:02.389 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:02.389 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:02.389 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:02.389 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:02.389 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:02.389 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:02.389 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:02.389 [2024-07-12 07:48:36.205714] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:02.389 [2024-07-12 07:48:36.206186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174575 ] 00:38:02.648 [2024-07-12 07:48:36.361368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:02.648 [2024-07-12 07:48:36.413920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:02.648 [2024-07-12 07:48:36.481580] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:02.648 [2024-07-12 07:48:36.481869] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:02.648 [2024-07-12 07:48:36.481938] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:02.906 [2024-07-12 07:48:36.584643] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:02.906 ************************************ 00:38:02.906 END TEST dd_flag_directory 00:38:02.906 ************************************ 00:38:02.906 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:38:02.906 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:02.906 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:38:02.906 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:38:02.906 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:38:02.906 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:02.906 00:38:02.906 real 0m1.178s 00:38:02.906 user 0m0.535s 00:38:02.906 sys 0m0.440s 00:38:02.906 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:02.906 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:38:03.165 07:48:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:38:03.165 07:48:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:03.165 07:48:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:03.165 07:48:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:03.165 ************************************ 00:38:03.165 START TEST dd_flag_nofollow 00:38:03.165 
************************************ 00:38:03.165 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1121 -- # nofollow 00:38:03.165 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:03.165 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:03.165 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:03.165 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:03.165 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:03.165 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:38:03.165 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:03.165 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:03.165 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:03.165 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:03.165 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:03.165 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:03.165 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:03.165 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:03.165 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:03.165 07:48:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:03.165 [2024-07-12 07:48:36.887524] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
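This test and dd_flag_directory above both assert on the failure path: --iflag=directory must refuse a regular file (the "Not a directory" errors above), and --iflag=nofollow must refuse to open a symlink (O_NOFOLLOW fails with ELOOP, logged as "Too many levels of symbolic links"). Condensed from the trace, with NOT being the harness helper that inverts the exit status:

    ln -fs dd.dump0 dd.dump0.link
    NOT spdk_dd --if=dd.dump0 --iflag=directory --of=dd.dump0        # ENOTDIR expected
    NOT spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1    # ELOOP expected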
00:38:03.165 [2024-07-12 07:48:36.888015] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174609 ] 00:38:03.165 [2024-07-12 07:48:37.041940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.425 [2024-07-12 07:48:37.091211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:03.425 [2024-07-12 07:48:37.155314] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:38:03.425 [2024-07-12 07:48:37.155581] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:38:03.425 [2024-07-12 07:48:37.155648] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:03.425 [2024-07-12 07:48:37.258034] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:03.684 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:38:03.684 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:03.684 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:38:03.684 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:38:03.684 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:38:03.684 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:03.685 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:03.685 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:38:03.685 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:03.685 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:03.685 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:03.685 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:03.685 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:03.685 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:03.685 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:03.685 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:03.685 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:03.685 07:48:37 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:03.685 [2024-07-12 07:48:37.480018] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:03.685 [2024-07-12 07:48:37.480550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174622 ] 00:38:03.943 [2024-07-12 07:48:37.634206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.943 [2024-07-12 07:48:37.686375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:03.943 [2024-07-12 07:48:37.753854] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:38:03.943 [2024-07-12 07:48:37.754159] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:38:03.944 [2024-07-12 07:48:37.754219] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:04.202 [2024-07-12 07:48:37.856211] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:04.202 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:38:04.202 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:04.202 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:38:04.202 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:38:04.202 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:38:04.202 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:04.202 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:38:04.202 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:38:04.202 07:48:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:38:04.202 07:48:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:04.202 [2024-07-12 07:48:38.064336] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
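After the two negative runs, posix.sh@48 repeats the copy through the same link without --iflag=nofollow; the symlink is then dereferenced and the 512-byte payload must survive the round trip. Roughly (cmp here stands in for the harness's pattern compare):

    spdk_dd --if=dd.dump0.link --of=dd.dump1   # no nofollow flag: link is followed
    cmp dd.dump0 dd.dump1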
00:38:04.202 [2024-07-12 07:48:38.064700] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174632 ] 00:38:04.461 [2024-07-12 07:48:38.207586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:04.461 [2024-07-12 07:48:38.251167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:04.720  Copying: 512/512 [B] (average 500 kBps) 00:38:04.720 00:38:04.979 ************************************ 00:38:04.979 END TEST dd_flag_nofollow 00:38:04.979 ************************************ 00:38:04.979 07:48:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ fbunmtn266jkwy3b0xjyeo0vbx4dex2ojwe1h4kqpn5tru0cz66b8sgzkpv8d9xtec3i1fwj7afkih9ewdqf6huxvn5emarq1gcv70k6xsci326tbi7rkuatdxc5j4m1vwi6g2nwtazqt6whzeccfovr24yla3lo5lry5sqhcxq6av14dbu7szpccp99q38ja3nk0buu6fhytmiqxdy6l2kwzngntulwg9m2a1fojfyce1l9vy3hqr8zx5db35k37vhbg2u8mz2z1tmwviodwmrddi0cn3qhs4zohgfel0p8vzyapy4bsd0xznemtkgrthj6zpcptn5s3zln7rpe54uschprhcpz4hi6cwae3hs62bmko6ggkea7sj6j0oiwodmmyw0pt8esc984qokztyqtkv4cpim7mon66y4yqc580r1vsuy7aimb9gl2g3e087jlnp70pddl0jg3mw5mjzjfmmzetza8f0fr2ni596x8zvu3wgk22al1wp3efnp2 == \f\b\u\n\m\t\n\2\6\6\j\k\w\y\3\b\0\x\j\y\e\o\0\v\b\x\4\d\e\x\2\o\j\w\e\1\h\4\k\q\p\n\5\t\r\u\0\c\z\6\6\b\8\s\g\z\k\p\v\8\d\9\x\t\e\c\3\i\1\f\w\j\7\a\f\k\i\h\9\e\w\d\q\f\6\h\u\x\v\n\5\e\m\a\r\q\1\g\c\v\7\0\k\6\x\s\c\i\3\2\6\t\b\i\7\r\k\u\a\t\d\x\c\5\j\4\m\1\v\w\i\6\g\2\n\w\t\a\z\q\t\6\w\h\z\e\c\c\f\o\v\r\2\4\y\l\a\3\l\o\5\l\r\y\5\s\q\h\c\x\q\6\a\v\1\4\d\b\u\7\s\z\p\c\c\p\9\9\q\3\8\j\a\3\n\k\0\b\u\u\6\f\h\y\t\m\i\q\x\d\y\6\l\2\k\w\z\n\g\n\t\u\l\w\g\9\m\2\a\1\f\o\j\f\y\c\e\1\l\9\v\y\3\h\q\r\8\z\x\5\d\b\3\5\k\3\7\v\h\b\g\2\u\8\m\z\2\z\1\t\m\w\v\i\o\d\w\m\r\d\d\i\0\c\n\3\q\h\s\4\z\o\h\g\f\e\l\0\p\8\v\z\y\a\p\y\4\b\s\d\0\x\z\n\e\m\t\k\g\r\t\h\j\6\z\p\c\p\t\n\5\s\3\z\l\n\7\r\p\e\5\4\u\s\c\h\p\r\h\c\p\z\4\h\i\6\c\w\a\e\3\h\s\6\2\b\m\k\o\6\g\g\k\e\a\7\s\j\6\j\0\o\i\w\o\d\m\m\y\w\0\p\t\8\e\s\c\9\8\4\q\o\k\z\t\y\q\t\k\v\4\c\p\i\m\7\m\o\n\6\6\y\4\y\q\c\5\8\0\r\1\v\s\u\y\7\a\i\m\b\9\g\l\2\g\3\e\0\8\7\j\l\n\p\7\0\p\d\d\l\0\j\g\3\m\w\5\m\j\z\j\f\m\m\z\e\t\z\a\8\f\0\f\r\2\n\i\5\9\6\x\8\z\v\u\3\w\g\k\2\2\a\l\1\w\p\3\e\f\n\p\2 ]] 00:38:04.979 00:38:04.979 real 0m1.797s 00:38:04.979 user 0m0.831s 00:38:04.979 sys 0m0.620s 00:38:04.979 07:48:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:04.979 07:48:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:38:04.979 07:48:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:38:04.979 07:48:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:04.979 07:48:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:04.979 07:48:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:04.979 ************************************ 00:38:04.979 START TEST dd_flag_noatime 00:38:04.979 ************************************ 00:38:04.979 07:48:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1121 -- # noatime 00:38:04.979 07:48:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:38:04.979 07:48:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:38:04.979 07:48:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 
-- # gen_bytes 512 00:38:04.979 07:48:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:38:04.979 07:48:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:38:04.979 07:48:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:04.979 07:48:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1720770518 00:38:04.979 07:48:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:04.979 07:48:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1720770518 00:38:04.979 07:48:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:38:05.914 07:48:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:05.914 [2024-07-12 07:48:39.782982] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:05.914 [2024-07-12 07:48:39.783443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174683 ] 00:38:06.172 [2024-07-12 07:48:39.937712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:06.172 [2024-07-12 07:48:39.986297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:06.740  Copying: 512/512 [B] (average 500 kBps) 00:38:06.740 00:38:06.741 07:48:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:06.741 07:48:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1720770518 )) 00:38:06.741 07:48:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:06.741 07:48:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1720770518 )) 00:38:06.741 07:48:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:06.741 [2024-07-12 07:48:40.432801] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
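The atime bookkeeping above is the whole test: capture both files' access times, sleep one second, copy with --iflag=noatime, and require the source atime to be unchanged; a later copy without the flag must advance it. A sketch using the same stat calls (the epoch value is from this run; the final comparisons are illustrative):

    atime_if=$(stat --printf=%X dd.dump0)            # 1720770518 here
    sleep 1
    spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) == atime_if ))   # O_NOATIME: unchanged
    spdk_dd --if=dd.dump0 --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) > atime_if ))    # plain read bumps atime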
00:38:06.741 [2024-07-12 07:48:40.433346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174698 ] 00:38:06.741 [2024-07-12 07:48:40.589690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:06.999 [2024-07-12 07:48:40.642082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:07.258  Copying: 512/512 [B] (average 500 kBps) 00:38:07.258 00:38:07.258 07:48:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:07.258 ************************************ 00:38:07.258 END TEST dd_flag_noatime 00:38:07.258 ************************************ 00:38:07.258 07:48:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1720770520 )) 00:38:07.258 00:38:07.258 real 0m2.333s 00:38:07.258 user 0m0.600s 00:38:07.258 sys 0m0.433s 00:38:07.258 07:48:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:07.258 07:48:41 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:38:07.258 07:48:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:38:07.258 07:48:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:07.258 07:48:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:07.258 07:48:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:07.258 ************************************ 00:38:07.258 START TEST dd_flags_misc 00:38:07.258 ************************************ 00:38:07.258 07:48:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1121 -- # io 00:38:07.258 07:48:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:38:07.258 07:48:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:38:07.258 07:48:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:38:07.258 07:48:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:38:07.258 07:48:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:38:07.258 07:48:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:38:07.258 07:48:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:38:07.258 07:48:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:07.258 07:48:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:38:07.517 [2024-07-12 07:48:41.167424] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
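dd_flags_misc is a small matrix test: each read flag in (direct nonblock) is paired with every write flag in (direct nonblock sync dsync), and every 512-byte copy must reproduce the generated pattern. The loop, condensed from the posix.sh trace above (the compare step is shown as a comment):

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
        spdk_dd --if=dd.dump0 --iflag=$flag_ro --of=dd.dump1 --oflag=$flag_rw
        # [[ "$(< dd.dump1)" == "$pattern" ]]  -- pattern check per combination
      done
    done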
00:38:07.517 [2024-07-12 07:48:41.167673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174736 ] 00:38:07.517 [2024-07-12 07:48:41.321819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.517 [2024-07-12 07:48:41.374888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:08.035  Copying: 512/512 [B] (average 500 kBps) 00:38:08.035 00:38:08.035 07:48:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qti1xjuoyqa5g8puu41k2e78cwpppygmc9stl3pbtobgckc9qax4zgqy60q01ezbw1uxb2lf2uo2hze3h1pmdmxjutbyggzd34me17jtz38msxvgww540sieetfr2p0c0d2mzvu828xfppg913un56wrmd19kemow9dmbses3cdrb3viwz2732frfcfotad1aeeggs4prq6d4rncyy03q7sppj1ndw20mdq8hwqfl7s3nzkohrsmi730wz2piuyuyuwizxe6m1atwpruxxa7fphztrifcz7i18g2cffcpxmr62n0kfao1giaj0pff8zbrnlomvlmj9y70b8181wx1gy1dunm9wy5ijymzws9blxir3pu55a7s67bpcrd2smtkw56v1mdm0cgx6dsh41j32bgn4xe55upz6yrf9yepddupnx9mpvg56xjkjhgi8lcn18p6td5di58afu453f87xrnk4yt8m7mbhnq2cuvm8lg9vtuiz0gkqrwla4t38k7 == \q\t\i\1\x\j\u\o\y\q\a\5\g\8\p\u\u\4\1\k\2\e\7\8\c\w\p\p\p\y\g\m\c\9\s\t\l\3\p\b\t\o\b\g\c\k\c\9\q\a\x\4\z\g\q\y\6\0\q\0\1\e\z\b\w\1\u\x\b\2\l\f\2\u\o\2\h\z\e\3\h\1\p\m\d\m\x\j\u\t\b\y\g\g\z\d\3\4\m\e\1\7\j\t\z\3\8\m\s\x\v\g\w\w\5\4\0\s\i\e\e\t\f\r\2\p\0\c\0\d\2\m\z\v\u\8\2\8\x\f\p\p\g\9\1\3\u\n\5\6\w\r\m\d\1\9\k\e\m\o\w\9\d\m\b\s\e\s\3\c\d\r\b\3\v\i\w\z\2\7\3\2\f\r\f\c\f\o\t\a\d\1\a\e\e\g\g\s\4\p\r\q\6\d\4\r\n\c\y\y\0\3\q\7\s\p\p\j\1\n\d\w\2\0\m\d\q\8\h\w\q\f\l\7\s\3\n\z\k\o\h\r\s\m\i\7\3\0\w\z\2\p\i\u\y\u\y\u\w\i\z\x\e\6\m\1\a\t\w\p\r\u\x\x\a\7\f\p\h\z\t\r\i\f\c\z\7\i\1\8\g\2\c\f\f\c\p\x\m\r\6\2\n\0\k\f\a\o\1\g\i\a\j\0\p\f\f\8\z\b\r\n\l\o\m\v\l\m\j\9\y\7\0\b\8\1\8\1\w\x\1\g\y\1\d\u\n\m\9\w\y\5\i\j\y\m\z\w\s\9\b\l\x\i\r\3\p\u\5\5\a\7\s\6\7\b\p\c\r\d\2\s\m\t\k\w\5\6\v\1\m\d\m\0\c\g\x\6\d\s\h\4\1\j\3\2\b\g\n\4\x\e\5\5\u\p\z\6\y\r\f\9\y\e\p\d\d\u\p\n\x\9\m\p\v\g\5\6\x\j\k\j\h\g\i\8\l\c\n\1\8\p\6\t\d\5\d\i\5\8\a\f\u\4\5\3\f\8\7\x\r\n\k\4\y\t\8\m\7\m\b\h\n\q\2\c\u\v\m\8\l\g\9\v\t\u\i\z\0\g\k\q\r\w\l\a\4\t\3\8\k\7 ]] 00:38:08.035 07:48:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:08.035 07:48:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:38:08.035 [2024-07-12 07:48:41.802538] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:38:08.035 [2024-07-12 07:48:41.802838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174741 ] 00:38:08.295 [2024-07-12 07:48:41.957617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:08.295 [2024-07-12 07:48:42.010140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:08.554  Copying: 512/512 [B] (average 500 kBps) 00:38:08.554 00:38:08.555 07:48:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qti1xjuoyqa5g8puu41k2e78cwpppygmc9stl3pbtobgckc9qax4zgqy60q01ezbw1uxb2lf2uo2hze3h1pmdmxjutbyggzd34me17jtz38msxvgww540sieetfr2p0c0d2mzvu828xfppg913un56wrmd19kemow9dmbses3cdrb3viwz2732frfcfotad1aeeggs4prq6d4rncyy03q7sppj1ndw20mdq8hwqfl7s3nzkohrsmi730wz2piuyuyuwizxe6m1atwpruxxa7fphztrifcz7i18g2cffcpxmr62n0kfao1giaj0pff8zbrnlomvlmj9y70b8181wx1gy1dunm9wy5ijymzws9blxir3pu55a7s67bpcrd2smtkw56v1mdm0cgx6dsh41j32bgn4xe55upz6yrf9yepddupnx9mpvg56xjkjhgi8lcn18p6td5di58afu453f87xrnk4yt8m7mbhnq2cuvm8lg9vtuiz0gkqrwla4t38k7 == \q\t\i\1\x\j\u\o\y\q\a\5\g\8\p\u\u\4\1\k\2\e\7\8\c\w\p\p\p\y\g\m\c\9\s\t\l\3\p\b\t\o\b\g\c\k\c\9\q\a\x\4\z\g\q\y\6\0\q\0\1\e\z\b\w\1\u\x\b\2\l\f\2\u\o\2\h\z\e\3\h\1\p\m\d\m\x\j\u\t\b\y\g\g\z\d\3\4\m\e\1\7\j\t\z\3\8\m\s\x\v\g\w\w\5\4\0\s\i\e\e\t\f\r\2\p\0\c\0\d\2\m\z\v\u\8\2\8\x\f\p\p\g\9\1\3\u\n\5\6\w\r\m\d\1\9\k\e\m\o\w\9\d\m\b\s\e\s\3\c\d\r\b\3\v\i\w\z\2\7\3\2\f\r\f\c\f\o\t\a\d\1\a\e\e\g\g\s\4\p\r\q\6\d\4\r\n\c\y\y\0\3\q\7\s\p\p\j\1\n\d\w\2\0\m\d\q\8\h\w\q\f\l\7\s\3\n\z\k\o\h\r\s\m\i\7\3\0\w\z\2\p\i\u\y\u\y\u\w\i\z\x\e\6\m\1\a\t\w\p\r\u\x\x\a\7\f\p\h\z\t\r\i\f\c\z\7\i\1\8\g\2\c\f\f\c\p\x\m\r\6\2\n\0\k\f\a\o\1\g\i\a\j\0\p\f\f\8\z\b\r\n\l\o\m\v\l\m\j\9\y\7\0\b\8\1\8\1\w\x\1\g\y\1\d\u\n\m\9\w\y\5\i\j\y\m\z\w\s\9\b\l\x\i\r\3\p\u\5\5\a\7\s\6\7\b\p\c\r\d\2\s\m\t\k\w\5\6\v\1\m\d\m\0\c\g\x\6\d\s\h\4\1\j\3\2\b\g\n\4\x\e\5\5\u\p\z\6\y\r\f\9\y\e\p\d\d\u\p\n\x\9\m\p\v\g\5\6\x\j\k\j\h\g\i\8\l\c\n\1\8\p\6\t\d\5\d\i\5\8\a\f\u\4\5\3\f\8\7\x\r\n\k\4\y\t\8\m\7\m\b\h\n\q\2\c\u\v\m\8\l\g\9\v\t\u\i\z\0\g\k\q\r\w\l\a\4\t\3\8\k\7 ]] 00:38:08.555 07:48:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:08.555 07:48:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:38:08.813 [2024-07-12 07:48:42.446656] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:38:08.813 [2024-07-12 07:48:42.447483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174759 ] 00:38:08.813 [2024-07-12 07:48:42.601421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:08.813 [2024-07-12 07:48:42.647659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:09.333  Copying: 512/512 [B] (average 166 kBps) 00:38:09.333 00:38:09.333 07:48:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qti1xjuoyqa5g8puu41k2e78cwpppygmc9stl3pbtobgckc9qax4zgqy60q01ezbw1uxb2lf2uo2hze3h1pmdmxjutbyggzd34me17jtz38msxvgww540sieetfr2p0c0d2mzvu828xfppg913un56wrmd19kemow9dmbses3cdrb3viwz2732frfcfotad1aeeggs4prq6d4rncyy03q7sppj1ndw20mdq8hwqfl7s3nzkohrsmi730wz2piuyuyuwizxe6m1atwpruxxa7fphztrifcz7i18g2cffcpxmr62n0kfao1giaj0pff8zbrnlomvlmj9y70b8181wx1gy1dunm9wy5ijymzws9blxir3pu55a7s67bpcrd2smtkw56v1mdm0cgx6dsh41j32bgn4xe55upz6yrf9yepddupnx9mpvg56xjkjhgi8lcn18p6td5di58afu453f87xrnk4yt8m7mbhnq2cuvm8lg9vtuiz0gkqrwla4t38k7 == \q\t\i\1\x\j\u\o\y\q\a\5\g\8\p\u\u\4\1\k\2\e\7\8\c\w\p\p\p\y\g\m\c\9\s\t\l\3\p\b\t\o\b\g\c\k\c\9\q\a\x\4\z\g\q\y\6\0\q\0\1\e\z\b\w\1\u\x\b\2\l\f\2\u\o\2\h\z\e\3\h\1\p\m\d\m\x\j\u\t\b\y\g\g\z\d\3\4\m\e\1\7\j\t\z\3\8\m\s\x\v\g\w\w\5\4\0\s\i\e\e\t\f\r\2\p\0\c\0\d\2\m\z\v\u\8\2\8\x\f\p\p\g\9\1\3\u\n\5\6\w\r\m\d\1\9\k\e\m\o\w\9\d\m\b\s\e\s\3\c\d\r\b\3\v\i\w\z\2\7\3\2\f\r\f\c\f\o\t\a\d\1\a\e\e\g\g\s\4\p\r\q\6\d\4\r\n\c\y\y\0\3\q\7\s\p\p\j\1\n\d\w\2\0\m\d\q\8\h\w\q\f\l\7\s\3\n\z\k\o\h\r\s\m\i\7\3\0\w\z\2\p\i\u\y\u\y\u\w\i\z\x\e\6\m\1\a\t\w\p\r\u\x\x\a\7\f\p\h\z\t\r\i\f\c\z\7\i\1\8\g\2\c\f\f\c\p\x\m\r\6\2\n\0\k\f\a\o\1\g\i\a\j\0\p\f\f\8\z\b\r\n\l\o\m\v\l\m\j\9\y\7\0\b\8\1\8\1\w\x\1\g\y\1\d\u\n\m\9\w\y\5\i\j\y\m\z\w\s\9\b\l\x\i\r\3\p\u\5\5\a\7\s\6\7\b\p\c\r\d\2\s\m\t\k\w\5\6\v\1\m\d\m\0\c\g\x\6\d\s\h\4\1\j\3\2\b\g\n\4\x\e\5\5\u\p\z\6\y\r\f\9\y\e\p\d\d\u\p\n\x\9\m\p\v\g\5\6\x\j\k\j\h\g\i\8\l\c\n\1\8\p\6\t\d\5\d\i\5\8\a\f\u\4\5\3\f\8\7\x\r\n\k\4\y\t\8\m\7\m\b\h\n\q\2\c\u\v\m\8\l\g\9\v\t\u\i\z\0\g\k\q\r\w\l\a\4\t\3\8\k\7 ]] 00:38:09.333 07:48:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:09.333 07:48:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:38:09.333 [2024-07-12 07:48:43.078538] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
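The two remaining write flags map to O_SYNC and O_DSYNC: sync waits for data plus metadata on every write, dsync for data only, which is consistent with the lower averages these synchronous combinations post here relative to the 500 kBps plain runs. A coreutils equivalent of the dsync copy that follows, for comparison:

    dd if=dd.dump0 of=dd.dump1 bs=512 count=1 oflag=dsync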
00:38:09.333 [2024-07-12 07:48:43.078833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174768 ] 00:38:09.593 [2024-07-12 07:48:43.233493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:09.593 [2024-07-12 07:48:43.283858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:09.852  Copying: 512/512 [B] (average 125 kBps) 00:38:09.852 00:38:09.852 07:48:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qti1xjuoyqa5g8puu41k2e78cwpppygmc9stl3pbtobgckc9qax4zgqy60q01ezbw1uxb2lf2uo2hze3h1pmdmxjutbyggzd34me17jtz38msxvgww540sieetfr2p0c0d2mzvu828xfppg913un56wrmd19kemow9dmbses3cdrb3viwz2732frfcfotad1aeeggs4prq6d4rncyy03q7sppj1ndw20mdq8hwqfl7s3nzkohrsmi730wz2piuyuyuwizxe6m1atwpruxxa7fphztrifcz7i18g2cffcpxmr62n0kfao1giaj0pff8zbrnlomvlmj9y70b8181wx1gy1dunm9wy5ijymzws9blxir3pu55a7s67bpcrd2smtkw56v1mdm0cgx6dsh41j32bgn4xe55upz6yrf9yepddupnx9mpvg56xjkjhgi8lcn18p6td5di58afu453f87xrnk4yt8m7mbhnq2cuvm8lg9vtuiz0gkqrwla4t38k7 == \q\t\i\1\x\j\u\o\y\q\a\5\g\8\p\u\u\4\1\k\2\e\7\8\c\w\p\p\p\y\g\m\c\9\s\t\l\3\p\b\t\o\b\g\c\k\c\9\q\a\x\4\z\g\q\y\6\0\q\0\1\e\z\b\w\1\u\x\b\2\l\f\2\u\o\2\h\z\e\3\h\1\p\m\d\m\x\j\u\t\b\y\g\g\z\d\3\4\m\e\1\7\j\t\z\3\8\m\s\x\v\g\w\w\5\4\0\s\i\e\e\t\f\r\2\p\0\c\0\d\2\m\z\v\u\8\2\8\x\f\p\p\g\9\1\3\u\n\5\6\w\r\m\d\1\9\k\e\m\o\w\9\d\m\b\s\e\s\3\c\d\r\b\3\v\i\w\z\2\7\3\2\f\r\f\c\f\o\t\a\d\1\a\e\e\g\g\s\4\p\r\q\6\d\4\r\n\c\y\y\0\3\q\7\s\p\p\j\1\n\d\w\2\0\m\d\q\8\h\w\q\f\l\7\s\3\n\z\k\o\h\r\s\m\i\7\3\0\w\z\2\p\i\u\y\u\y\u\w\i\z\x\e\6\m\1\a\t\w\p\r\u\x\x\a\7\f\p\h\z\t\r\i\f\c\z\7\i\1\8\g\2\c\f\f\c\p\x\m\r\6\2\n\0\k\f\a\o\1\g\i\a\j\0\p\f\f\8\z\b\r\n\l\o\m\v\l\m\j\9\y\7\0\b\8\1\8\1\w\x\1\g\y\1\d\u\n\m\9\w\y\5\i\j\y\m\z\w\s\9\b\l\x\i\r\3\p\u\5\5\a\7\s\6\7\b\p\c\r\d\2\s\m\t\k\w\5\6\v\1\m\d\m\0\c\g\x\6\d\s\h\4\1\j\3\2\b\g\n\4\x\e\5\5\u\p\z\6\y\r\f\9\y\e\p\d\d\u\p\n\x\9\m\p\v\g\5\6\x\j\k\j\h\g\i\8\l\c\n\1\8\p\6\t\d\5\d\i\5\8\a\f\u\4\5\3\f\8\7\x\r\n\k\4\y\t\8\m\7\m\b\h\n\q\2\c\u\v\m\8\l\g\9\v\t\u\i\z\0\g\k\q\r\w\l\a\4\t\3\8\k\7 ]] 00:38:09.852 07:48:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:38:09.852 07:48:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:38:09.852 07:48:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:38:09.852 07:48:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:38:09.852 07:48:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:09.852 07:48:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:38:10.119 [2024-07-12 07:48:43.741832] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:38:10.119 [2024-07-12 07:48:43.742084] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174781 ] 00:38:10.119 [2024-07-12 07:48:43.897421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:10.119 [2024-07-12 07:48:43.942696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:10.634  Copying: 512/512 [B] (average 500 kBps) 00:38:10.634 00:38:10.635 07:48:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ cdxgmwigug0p57by36cvlj1p1n3s5ra116z04eccr0aiwfstrr07foy1kykdripg6mcq7d0716apyxny3body7z1xpa3iwbiy02jeisn8r0pgr27nccqpcvl2iei9sykfvmcft1rvw92cadj84jb8vef5c6md59xgtjhpy5303i5r4j3ipxeo67dhev2jnqqk1ujua9lk25l5nfwubq4gfhs6v3100jb3azsuk70vlh8s0yj9coewn31ai6p7t27dre49vchy4otxvklqvnh4hg7rm4zow39tjvcktnxf1b5egmx6xhlvtdyp8ityw1rgppmgercgeoux9yjkgx122ss9cepxs4sguor8bd258vqcn97n53unz5nj2qabfockwvekjtetgqgwidk4w44w20outafreki40b6gyglqkzpv6a4humkwwkad762mwu56kanxutw48io5teym6h57u3o01fcj2iba1t52kt9ad4t0a44f5k9hacqj111cizn == \c\d\x\g\m\w\i\g\u\g\0\p\5\7\b\y\3\6\c\v\l\j\1\p\1\n\3\s\5\r\a\1\1\6\z\0\4\e\c\c\r\0\a\i\w\f\s\t\r\r\0\7\f\o\y\1\k\y\k\d\r\i\p\g\6\m\c\q\7\d\0\7\1\6\a\p\y\x\n\y\3\b\o\d\y\7\z\1\x\p\a\3\i\w\b\i\y\0\2\j\e\i\s\n\8\r\0\p\g\r\2\7\n\c\c\q\p\c\v\l\2\i\e\i\9\s\y\k\f\v\m\c\f\t\1\r\v\w\9\2\c\a\d\j\8\4\j\b\8\v\e\f\5\c\6\m\d\5\9\x\g\t\j\h\p\y\5\3\0\3\i\5\r\4\j\3\i\p\x\e\o\6\7\d\h\e\v\2\j\n\q\q\k\1\u\j\u\a\9\l\k\2\5\l\5\n\f\w\u\b\q\4\g\f\h\s\6\v\3\1\0\0\j\b\3\a\z\s\u\k\7\0\v\l\h\8\s\0\y\j\9\c\o\e\w\n\3\1\a\i\6\p\7\t\2\7\d\r\e\4\9\v\c\h\y\4\o\t\x\v\k\l\q\v\n\h\4\h\g\7\r\m\4\z\o\w\3\9\t\j\v\c\k\t\n\x\f\1\b\5\e\g\m\x\6\x\h\l\v\t\d\y\p\8\i\t\y\w\1\r\g\p\p\m\g\e\r\c\g\e\o\u\x\9\y\j\k\g\x\1\2\2\s\s\9\c\e\p\x\s\4\s\g\u\o\r\8\b\d\2\5\8\v\q\c\n\9\7\n\5\3\u\n\z\5\n\j\2\q\a\b\f\o\c\k\w\v\e\k\j\t\e\t\g\q\g\w\i\d\k\4\w\4\4\w\2\0\o\u\t\a\f\r\e\k\i\4\0\b\6\g\y\g\l\q\k\z\p\v\6\a\4\h\u\m\k\w\w\k\a\d\7\6\2\m\w\u\5\6\k\a\n\x\u\t\w\4\8\i\o\5\t\e\y\m\6\h\5\7\u\3\o\0\1\f\c\j\2\i\b\a\1\t\5\2\k\t\9\a\d\4\t\0\a\4\4\f\5\k\9\h\a\c\q\j\1\1\1\c\i\z\n ]] 00:38:10.635 07:48:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:10.635 07:48:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:38:10.635 [2024-07-12 07:48:44.355654] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:38:10.635 [2024-07-12 07:48:44.355913] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174794 ] 00:38:10.635 [2024-07-12 07:48:44.509761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:10.892 [2024-07-12 07:48:44.560648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:11.149  Copying: 512/512 [B] (average 500 kBps) 00:38:11.149 00:38:11.150 07:48:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ cdxgmwigug0p57by36cvlj1p1n3s5ra116z04eccr0aiwfstrr07foy1kykdripg6mcq7d0716apyxny3body7z1xpa3iwbiy02jeisn8r0pgr27nccqpcvl2iei9sykfvmcft1rvw92cadj84jb8vef5c6md59xgtjhpy5303i5r4j3ipxeo67dhev2jnqqk1ujua9lk25l5nfwubq4gfhs6v3100jb3azsuk70vlh8s0yj9coewn31ai6p7t27dre49vchy4otxvklqvnh4hg7rm4zow39tjvcktnxf1b5egmx6xhlvtdyp8ityw1rgppmgercgeoux9yjkgx122ss9cepxs4sguor8bd258vqcn97n53unz5nj2qabfockwvekjtetgqgwidk4w44w20outafreki40b6gyglqkzpv6a4humkwwkad762mwu56kanxutw48io5teym6h57u3o01fcj2iba1t52kt9ad4t0a44f5k9hacqj111cizn == \c\d\x\g\m\w\i\g\u\g\0\p\5\7\b\y\3\6\c\v\l\j\1\p\1\n\3\s\5\r\a\1\1\6\z\0\4\e\c\c\r\0\a\i\w\f\s\t\r\r\0\7\f\o\y\1\k\y\k\d\r\i\p\g\6\m\c\q\7\d\0\7\1\6\a\p\y\x\n\y\3\b\o\d\y\7\z\1\x\p\a\3\i\w\b\i\y\0\2\j\e\i\s\n\8\r\0\p\g\r\2\7\n\c\c\q\p\c\v\l\2\i\e\i\9\s\y\k\f\v\m\c\f\t\1\r\v\w\9\2\c\a\d\j\8\4\j\b\8\v\e\f\5\c\6\m\d\5\9\x\g\t\j\h\p\y\5\3\0\3\i\5\r\4\j\3\i\p\x\e\o\6\7\d\h\e\v\2\j\n\q\q\k\1\u\j\u\a\9\l\k\2\5\l\5\n\f\w\u\b\q\4\g\f\h\s\6\v\3\1\0\0\j\b\3\a\z\s\u\k\7\0\v\l\h\8\s\0\y\j\9\c\o\e\w\n\3\1\a\i\6\p\7\t\2\7\d\r\e\4\9\v\c\h\y\4\o\t\x\v\k\l\q\v\n\h\4\h\g\7\r\m\4\z\o\w\3\9\t\j\v\c\k\t\n\x\f\1\b\5\e\g\m\x\6\x\h\l\v\t\d\y\p\8\i\t\y\w\1\r\g\p\p\m\g\e\r\c\g\e\o\u\x\9\y\j\k\g\x\1\2\2\s\s\9\c\e\p\x\s\4\s\g\u\o\r\8\b\d\2\5\8\v\q\c\n\9\7\n\5\3\u\n\z\5\n\j\2\q\a\b\f\o\c\k\w\v\e\k\j\t\e\t\g\q\g\w\i\d\k\4\w\4\4\w\2\0\o\u\t\a\f\r\e\k\i\4\0\b\6\g\y\g\l\q\k\z\p\v\6\a\4\h\u\m\k\w\w\k\a\d\7\6\2\m\w\u\5\6\k\a\n\x\u\t\w\4\8\i\o\5\t\e\y\m\6\h\5\7\u\3\o\0\1\f\c\j\2\i\b\a\1\t\5\2\k\t\9\a\d\4\t\0\a\4\4\f\5\k\9\h\a\c\q\j\1\1\1\c\i\z\n ]] 00:38:11.150 07:48:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:11.150 07:48:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:38:11.150 [2024-07-12 07:48:44.978572] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:38:11.150 [2024-07-12 07:48:44.978850] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174803 ] 00:38:11.407 [2024-07-12 07:48:45.133494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:11.407 [2024-07-12 07:48:45.176502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:11.665  Copying: 512/512 [B] (average 100 kBps) 00:38:11.665 00:38:11.665 07:48:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ cdxgmwigug0p57by36cvlj1p1n3s5ra116z04eccr0aiwfstrr07foy1kykdripg6mcq7d0716apyxny3body7z1xpa3iwbiy02jeisn8r0pgr27nccqpcvl2iei9sykfvmcft1rvw92cadj84jb8vef5c6md59xgtjhpy5303i5r4j3ipxeo67dhev2jnqqk1ujua9lk25l5nfwubq4gfhs6v3100jb3azsuk70vlh8s0yj9coewn31ai6p7t27dre49vchy4otxvklqvnh4hg7rm4zow39tjvcktnxf1b5egmx6xhlvtdyp8ityw1rgppmgercgeoux9yjkgx122ss9cepxs4sguor8bd258vqcn97n53unz5nj2qabfockwvekjtetgqgwidk4w44w20outafreki40b6gyglqkzpv6a4humkwwkad762mwu56kanxutw48io5teym6h57u3o01fcj2iba1t52kt9ad4t0a44f5k9hacqj111cizn == \c\d\x\g\m\w\i\g\u\g\0\p\5\7\b\y\3\6\c\v\l\j\1\p\1\n\3\s\5\r\a\1\1\6\z\0\4\e\c\c\r\0\a\i\w\f\s\t\r\r\0\7\f\o\y\1\k\y\k\d\r\i\p\g\6\m\c\q\7\d\0\7\1\6\a\p\y\x\n\y\3\b\o\d\y\7\z\1\x\p\a\3\i\w\b\i\y\0\2\j\e\i\s\n\8\r\0\p\g\r\2\7\n\c\c\q\p\c\v\l\2\i\e\i\9\s\y\k\f\v\m\c\f\t\1\r\v\w\9\2\c\a\d\j\8\4\j\b\8\v\e\f\5\c\6\m\d\5\9\x\g\t\j\h\p\y\5\3\0\3\i\5\r\4\j\3\i\p\x\e\o\6\7\d\h\e\v\2\j\n\q\q\k\1\u\j\u\a\9\l\k\2\5\l\5\n\f\w\u\b\q\4\g\f\h\s\6\v\3\1\0\0\j\b\3\a\z\s\u\k\7\0\v\l\h\8\s\0\y\j\9\c\o\e\w\n\3\1\a\i\6\p\7\t\2\7\d\r\e\4\9\v\c\h\y\4\o\t\x\v\k\l\q\v\n\h\4\h\g\7\r\m\4\z\o\w\3\9\t\j\v\c\k\t\n\x\f\1\b\5\e\g\m\x\6\x\h\l\v\t\d\y\p\8\i\t\y\w\1\r\g\p\p\m\g\e\r\c\g\e\o\u\x\9\y\j\k\g\x\1\2\2\s\s\9\c\e\p\x\s\4\s\g\u\o\r\8\b\d\2\5\8\v\q\c\n\9\7\n\5\3\u\n\z\5\n\j\2\q\a\b\f\o\c\k\w\v\e\k\j\t\e\t\g\q\g\w\i\d\k\4\w\4\4\w\2\0\o\u\t\a\f\r\e\k\i\4\0\b\6\g\y\g\l\q\k\z\p\v\6\a\4\h\u\m\k\w\w\k\a\d\7\6\2\m\w\u\5\6\k\a\n\x\u\t\w\4\8\i\o\5\t\e\y\m\6\h\5\7\u\3\o\0\1\f\c\j\2\i\b\a\1\t\5\2\k\t\9\a\d\4\t\0\a\4\4\f\5\k\9\h\a\c\q\j\1\1\1\c\i\z\n ]] 00:38:11.665 07:48:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:11.665 07:48:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:38:11.923 [2024-07-12 07:48:45.603032] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:38:11.923 [2024-07-12 07:48:45.603288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174816 ] 00:38:11.923 [2024-07-12 07:48:45.757761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:12.182 [2024-07-12 07:48:45.809984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:12.441  Copying: 512/512 [B] (average 166 kBps) 00:38:12.441 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ cdxgmwigug0p57by36cvlj1p1n3s5ra116z04eccr0aiwfstrr07foy1kykdripg6mcq7d0716apyxny3body7z1xpa3iwbiy02jeisn8r0pgr27nccqpcvl2iei9sykfvmcft1rvw92cadj84jb8vef5c6md59xgtjhpy5303i5r4j3ipxeo67dhev2jnqqk1ujua9lk25l5nfwubq4gfhs6v3100jb3azsuk70vlh8s0yj9coewn31ai6p7t27dre49vchy4otxvklqvnh4hg7rm4zow39tjvcktnxf1b5egmx6xhlvtdyp8ityw1rgppmgercgeoux9yjkgx122ss9cepxs4sguor8bd258vqcn97n53unz5nj2qabfockwvekjtetgqgwidk4w44w20outafreki40b6gyglqkzpv6a4humkwwkad762mwu56kanxutw48io5teym6h57u3o01fcj2iba1t52kt9ad4t0a44f5k9hacqj111cizn == \c\d\x\g\m\w\i\g\u\g\0\p\5\7\b\y\3\6\c\v\l\j\1\p\1\n\3\s\5\r\a\1\1\6\z\0\4\e\c\c\r\0\a\i\w\f\s\t\r\r\0\7\f\o\y\1\k\y\k\d\r\i\p\g\6\m\c\q\7\d\0\7\1\6\a\p\y\x\n\y\3\b\o\d\y\7\z\1\x\p\a\3\i\w\b\i\y\0\2\j\e\i\s\n\8\r\0\p\g\r\2\7\n\c\c\q\p\c\v\l\2\i\e\i\9\s\y\k\f\v\m\c\f\t\1\r\v\w\9\2\c\a\d\j\8\4\j\b\8\v\e\f\5\c\6\m\d\5\9\x\g\t\j\h\p\y\5\3\0\3\i\5\r\4\j\3\i\p\x\e\o\6\7\d\h\e\v\2\j\n\q\q\k\1\u\j\u\a\9\l\k\2\5\l\5\n\f\w\u\b\q\4\g\f\h\s\6\v\3\1\0\0\j\b\3\a\z\s\u\k\7\0\v\l\h\8\s\0\y\j\9\c\o\e\w\n\3\1\a\i\6\p\7\t\2\7\d\r\e\4\9\v\c\h\y\4\o\t\x\v\k\l\q\v\n\h\4\h\g\7\r\m\4\z\o\w\3\9\t\j\v\c\k\t\n\x\f\1\b\5\e\g\m\x\6\x\h\l\v\t\d\y\p\8\i\t\y\w\1\r\g\p\p\m\g\e\r\c\g\e\o\u\x\9\y\j\k\g\x\1\2\2\s\s\9\c\e\p\x\s\4\s\g\u\o\r\8\b\d\2\5\8\v\q\c\n\9\7\n\5\3\u\n\z\5\n\j\2\q\a\b\f\o\c\k\w\v\e\k\j\t\e\t\g\q\g\w\i\d\k\4\w\4\4\w\2\0\o\u\t\a\f\r\e\k\i\4\0\b\6\g\y\g\l\q\k\z\p\v\6\a\4\h\u\m\k\w\w\k\a\d\7\6\2\m\w\u\5\6\k\a\n\x\u\t\w\4\8\i\o\5\t\e\y\m\6\h\5\7\u\3\o\0\1\f\c\j\2\i\b\a\1\t\5\2\k\t\9\a\d\4\t\0\a\4\4\f\5\k\9\h\a\c\q\j\1\1\1\c\i\z\n ]] 00:38:12.441 00:38:12.441 real 0m5.078s 00:38:12.441 user 0m2.251s 00:38:12.441 sys 0m1.682s 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:12.441 ************************************ 00:38:12.441 END TEST dd_flags_misc 00:38:12.441 ************************************ 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:38:12.441 * Second test run, using AIO 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:12.441 ************************************ 00:38:12.441 START TEST dd_flag_append_forced_aio 00:38:12.441 ************************************ 00:38:12.441 07:48:46 
spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1121 -- # append 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=3h9bx3hq3fd2qzwk4yvbm9gdwochnvmk 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=znpkrqn86ttqjzdf2zg93j1nmoy8rhxt 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 3h9bx3hq3fd2qzwk4yvbm9gdwochnvmk 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s znpkrqn86ttqjzdf2zg93j1nmoy8rhxt 00:38:12.441 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:38:12.441 [2024-07-12 07:48:46.316636] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
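The append pass launching here reduces to the following sketch (the two 32-byte values are taken from this run's gen_bytes output; spdk_dd is assumed to be the binary traced above, reachable on PATH). The copy must leave dump1's original bytes in place and land dump0's bytes after them, which is exactly what the [[ == ]] check traced below verifies:

dump0=3h9bx3hq3fd2qzwk4yvbm9gdwochnvmk   # gen_bytes 32 result for dump0 in this run
dump1=znpkrqn86ttqjzdf2zg93j1nmoy8rhxt   # gen_bytes 32 result for dump1
printf %s "$dump0" > dd.dump0
printf %s "$dump1" > dd.dump1
spdk_dd --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
[[ $(< dd.dump1) == "${dump1}${dump0}" ]]   # old dump1 first, appended dump0 second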
00:38:12.441 [2024-07-12 07:48:46.316910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174855 ] 00:38:12.701 [2024-07-12 07:48:46.469994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:12.701 [2024-07-12 07:48:46.511807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:12.960  Copying: 32/32 [B] (average 31 kBps) 00:38:12.960 00:38:13.220 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ znpkrqn86ttqjzdf2zg93j1nmoy8rhxt3h9bx3hq3fd2qzwk4yvbm9gdwochnvmk == \z\n\p\k\r\q\n\8\6\t\t\q\j\z\d\f\2\z\g\9\3\j\1\n\m\o\y\8\r\h\x\t\3\h\9\b\x\3\h\q\3\f\d\2\q\z\w\k\4\y\v\b\m\9\g\d\w\o\c\h\n\v\m\k ]] 00:38:13.220 00:38:13.220 real 0m0.620s 00:38:13.220 user 0m0.300s 00:38:13.220 sys 0m0.186s 00:38:13.220 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:13.220 ************************************ 00:38:13.220 END TEST dd_flag_append_forced_aio 00:38:13.220 ************************************ 00:38:13.220 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:13.220 07:48:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:38:13.220 07:48:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:13.220 07:48:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:13.220 07:48:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:13.220 ************************************ 00:38:13.220 START TEST dd_flag_directory_forced_aio 00:38:13.220 ************************************ 00:38:13.220 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1121 -- # directory 00:38:13.220 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:13.220 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:38:13.220 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:13.220 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:13.220 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:13.220 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:13.220 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:13.220 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:13.220 07:48:46 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:13.220 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:13.220 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:13.220 07:48:46 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:13.220 [2024-07-12 07:48:47.004846] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:13.220 [2024-07-12 07:48:47.005134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174881 ] 00:38:13.480 [2024-07-12 07:48:47.157222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:13.480 [2024-07-12 07:48:47.209781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:13.480 [2024-07-12 07:48:47.277244] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:13.480 [2024-07-12 07:48:47.277404] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:13.480 [2024-07-12 07:48:47.277440] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:13.739 [2024-07-12 07:48:47.379561] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:13.739 07:48:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:38:13.739 07:48:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:13.739 07:48:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:38:13.739 07:48:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:38:13.739 07:48:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:38:13.739 07:48:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:13.739 07:48:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:13.739 07:48:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:38:13.739 07:48:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:13.739 07:48:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:13.739 07:48:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:13.739 07:48:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:13.739 07:48:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:13.739 07:48:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:13.739 07:48:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:13.739 07:48:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:13.739 07:48:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:13.739 07:48:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:13.739 [2024-07-12 07:48:47.600145] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:13.739 [2024-07-12 07:48:47.600407] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174901 ] 00:38:13.998 [2024-07-12 07:48:47.753805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:13.998 [2024-07-12 07:48:47.793767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:13.998 [2024-07-12 07:48:47.855022] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:13.998 [2024-07-12 07:48:47.855092] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:13.998 [2024-07-12 07:48:47.855131] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:14.258 [2024-07-12 07:48:47.957487] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:14.258 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:38:14.258 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:14.258 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:38:14.258 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:38:14.258 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:38:14.258 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:14.258 00:38:14.258 real 0m1.171s 00:38:14.258 user 0m0.559s 00:38:14.258 sys 0m0.412s 00:38:14.258 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:14.258 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:14.258 ************************************ 00:38:14.258 END 
TEST dd_flag_directory_forced_aio 00:38:14.258 ************************************ 00:38:14.518 07:48:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:38:14.518 07:48:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:14.518 07:48:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:14.518 07:48:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:14.518 ************************************ 00:38:14.518 START TEST dd_flag_nofollow_forced_aio 00:38:14.518 ************************************ 00:38:14.518 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1121 -- # nofollow 00:38:14.518 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:14.518 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:14.518 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:14.518 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:14.518 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:14.518 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:38:14.518 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:14.518 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:14.518 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:14.518 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:14.518 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:14.518 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:14.518 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:14.518 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:14.518 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:14.518 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:14.518 [2024-07-12 07:48:48.254637] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:14.518 [2024-07-12 07:48:48.254953] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174937 ] 00:38:14.778 [2024-07-12 07:48:48.410131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:14.778 [2024-07-12 07:48:48.462130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:14.778 [2024-07-12 07:48:48.529397] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:38:14.778 [2024-07-12 07:48:48.529474] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:38:14.778 [2024-07-12 07:48:48.529506] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:14.778 [2024-07-12 07:48:48.631809] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:15.038 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:38:15.038 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:15.038 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:38:15.038 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:38:15.038 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:38:15.038 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:15.038 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:15.038 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:38:15.038 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:15.038 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:15.038 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:15.038 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:15.038 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:15.038 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
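The NOT/valid_exec_arg dance traced through autotest_common.sh@636-651 is the harness's way of asserting that a command fails. A condensed sketch of the helper, matching the exit-status arithmetic visible in this log (236 -> 108, 216 -> 88; the real case statement in autotest_common.sh distinguishes more statuses than shown here):

NOT() {
    local es=0
    "$@" || es=$?                          # run the wrapped command, capture its status
    (( es > 128 )) && es=$(( es - 128 ))   # strip the >128 offset: 236 -> 108, 216 -> 88
    case "$es" in 0) ;; *) es=1 ;; esac    # collapse any remaining failure to 1
    (( !es == 0 ))                         # succeed only if the wrapped command failed
}

So "NOT spdk_dd ... --iflag=directory ..." passes precisely because spdk_dd exits non-zero with the "Not a directory" error shown above.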
00:38:15.038 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:15.038 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:15.038 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:15.039 07:48:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:15.039 [2024-07-12 07:48:48.845835] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:15.039 [2024-07-12 07:48:48.846079] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174945 ] 00:38:15.298 [2024-07-12 07:48:49.001149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:15.298 [2024-07-12 07:48:49.040752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:15.298 [2024-07-12 07:48:49.100560] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:38:15.298 [2024-07-12 07:48:49.100641] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:38:15.298 [2024-07-12 07:48:49.100679] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:15.557 [2024-07-12 07:48:49.203031] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:15.557 07:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:38:15.557 07:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:15.557 07:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:38:15.557 07:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:38:15.557 07:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:38:15.557 07:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:15.557 07:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:38:15.557 07:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:15.557 07:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:15.557 07:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:15.557 [2024-07-12 07:48:49.423541] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
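The nofollow cases above reduce to three invocations; a sketch using the same link setup as dd/posix.sh@39-40 (spdk_dd assumed on PATH, NOT as sketched earlier):

ln -fs dd.dump0 dd.dump0.link   # symlinks to the two dump files
ln -fs dd.dump1 dd.dump1.link
NOT spdk_dd --aio --if=dd.dump0.link --iflag=nofollow --of=dd.dump1   # ELOOP on input
NOT spdk_dd --aio --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow   # ELOOP on output
spdk_dd --aio --if=dd.dump0.link --of=dd.dump1   # without nofollow the link is followed

Both failing passes report "Too many levels of symbolic links", the errno string for ELOOP that O_NOFOLLOW produces when the final path component is a symlink.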
00:38:15.557 [2024-07-12 07:48:49.423802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174961 ] 00:38:15.817 [2024-07-12 07:48:49.578441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:15.817 [2024-07-12 07:48:49.630060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:16.386  Copying: 512/512 [B] (average 500 kBps) 00:38:16.386 00:38:16.386 07:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 7215i5saszb7ndtrulnfpl5eyzuzk84p7azujlv6gi7he9bf7e9v4apdbjofcoa2ssodqmi7itjenl7y2pjdrcjdpxvtxkh0mitlkkfqx3vy3zx015ea8xhh8095t1ahdacj4ixny7jheducft2ylvbby68hnst6x8kz2qts0m7m40wokg662k320weaxorf2fprjcj43guxqg0bs3r4xl7hz4qsf9dqvkoyuko78uc73qr8tx68rq78ywyehlmbbcflxryj8ddzfxwx9gfpiffrk17gxu4zj8j4lnbe5oxzxrhplnrh80ieca0806npf9aziulqfpwff341ofcmooogvd205dh4i3ge5gsnuxklenph5j5ukihshyaz7h418hfbfbkaznj9iexuxlhir5ytng3itk3oqzrxqmtjnny6fy03x9n5jwaq9m1extpqp0phq3h6agepktjmk8gkx6r4dsm1kcy6cj6xxe27bmf9cpmok0cnek3yx8j395hl == \7\2\1\5\i\5\s\a\s\z\b\7\n\d\t\r\u\l\n\f\p\l\5\e\y\z\u\z\k\8\4\p\7\a\z\u\j\l\v\6\g\i\7\h\e\9\b\f\7\e\9\v\4\a\p\d\b\j\o\f\c\o\a\2\s\s\o\d\q\m\i\7\i\t\j\e\n\l\7\y\2\p\j\d\r\c\j\d\p\x\v\t\x\k\h\0\m\i\t\l\k\k\f\q\x\3\v\y\3\z\x\0\1\5\e\a\8\x\h\h\8\0\9\5\t\1\a\h\d\a\c\j\4\i\x\n\y\7\j\h\e\d\u\c\f\t\2\y\l\v\b\b\y\6\8\h\n\s\t\6\x\8\k\z\2\q\t\s\0\m\7\m\4\0\w\o\k\g\6\6\2\k\3\2\0\w\e\a\x\o\r\f\2\f\p\r\j\c\j\4\3\g\u\x\q\g\0\b\s\3\r\4\x\l\7\h\z\4\q\s\f\9\d\q\v\k\o\y\u\k\o\7\8\u\c\7\3\q\r\8\t\x\6\8\r\q\7\8\y\w\y\e\h\l\m\b\b\c\f\l\x\r\y\j\8\d\d\z\f\x\w\x\9\g\f\p\i\f\f\r\k\1\7\g\x\u\4\z\j\8\j\4\l\n\b\e\5\o\x\z\x\r\h\p\l\n\r\h\8\0\i\e\c\a\0\8\0\6\n\p\f\9\a\z\i\u\l\q\f\p\w\f\f\3\4\1\o\f\c\m\o\o\o\g\v\d\2\0\5\d\h\4\i\3\g\e\5\g\s\n\u\x\k\l\e\n\p\h\5\j\5\u\k\i\h\s\h\y\a\z\7\h\4\1\8\h\f\b\f\b\k\a\z\n\j\9\i\e\x\u\x\l\h\i\r\5\y\t\n\g\3\i\t\k\3\o\q\z\r\x\q\m\t\j\n\n\y\6\f\y\0\3\x\9\n\5\j\w\a\q\9\m\1\e\x\t\p\q\p\0\p\h\q\3\h\6\a\g\e\p\k\t\j\m\k\8\g\k\x\6\r\4\d\s\m\1\k\c\y\6\c\j\6\x\x\e\2\7\b\m\f\9\c\p\m\o\k\0\c\n\e\k\3\y\x\8\j\3\9\5\h\l ]] 00:38:16.386 00:38:16.386 real 0m1.807s 00:38:16.386 user 0m0.874s 00:38:16.386 sys 0m0.598s 00:38:16.386 07:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:16.386 07:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:16.386 ************************************ 00:38:16.386 END TEST dd_flag_nofollow_forced_aio 00:38:16.386 ************************************ 00:38:16.386 07:48:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:38:16.386 07:48:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:16.386 07:48:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:16.386 07:48:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:16.386 ************************************ 00:38:16.386 START TEST dd_flag_noatime_forced_aio 00:38:16.386 ************************************ 00:38:16.386 07:48:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1121 -- # noatime 00:38:16.386 07:48:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:38:16.386 07:48:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- 
dd/posix.sh@54 -- # local atime_of 00:38:16.386 07:48:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:38:16.386 07:48:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:16.386 07:48:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:16.386 07:48:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:16.386 07:48:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1720770529 00:38:16.386 07:48:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:16.386 07:48:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1720770529 00:38:16.386 07:48:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:38:17.325 07:48:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:17.325 [2024-07-12 07:48:51.151513] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:17.325 [2024-07-12 07:48:51.151721] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175006 ] 00:38:17.585 [2024-07-12 07:48:51.291199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:17.585 [2024-07-12 07:48:51.338930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:17.844  Copying: 512/512 [B] (average 500 kBps) 00:38:17.844 00:38:17.844 07:48:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:17.844 07:48:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1720770529 )) 00:38:17.844 07:48:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:17.844 07:48:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1720770529 )) 00:38:17.844 07:48:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:18.102 [2024-07-12 07:48:51.787559] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
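The noatime check in flight here is a simple before/after comparison of the input file's access time; a sketch of what dd/posix.sh@60-73 does (epoch value from this run; the final assertion assumes the filesystem still updates atime on a plain read, which holds for a freshly written file even under relatime):

atime_if=$(stat --printf=%X dd.dump0)            # 1720770529 in this run
sleep 1                                          # ensure a new read would move atime forward
spdk_dd --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
(( $(stat --printf=%X dd.dump0) == atime_if ))   # noatime read left atime untouched
spdk_dd --aio --if=dd.dump0 --of=dd.dump1        # plain read of the same file...
(( atime_if < $(stat --printf=%X dd.dump0) ))    # ...must advance it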
00:38:18.102 [2024-07-12 07:48:51.787807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175024 ] 00:38:18.102 [2024-07-12 07:48:51.941922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:18.360 [2024-07-12 07:48:51.988282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:18.620  Copying: 512/512 [B] (average 500 kBps) 00:38:18.620 00:38:18.620 07:48:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:18.620 07:48:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1720770532 )) 00:38:18.620 00:38:18.620 real 0m2.285s 00:38:18.620 user 0m0.550s 00:38:18.620 sys 0m0.438s 00:38:18.620 07:48:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:18.620 ************************************ 00:38:18.620 END TEST dd_flag_noatime_forced_aio 00:38:18.620 ************************************ 00:38:18.620 07:48:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:18.620 07:48:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:38:18.620 07:48:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:18.620 07:48:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:18.620 07:48:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:18.620 ************************************ 00:38:18.620 START TEST dd_flags_misc_forced_aio 00:38:18.620 ************************************ 00:38:18.620 07:48:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1121 -- # io 00:38:18.620 07:48:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:38:18.620 07:48:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:38:18.620 07:48:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:38:18.620 07:48:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:38:18.620 07:48:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:38:18.620 07:48:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:18.620 07:48:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:18.620 07:48:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:18.620 07:48:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:38:18.880 [2024-07-12 07:48:52.506858] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
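dd_flags_misc_forced_aio, which starts here, is a small exhaustive matrix: every read flag crossed with every write flag, eight spdk_dd passes in total. Reconstructed from the arrays traced at dd/posix.sh@81-89 (the gen_bytes redirect is assumed; the harness wires it up slightly differently):

flags_ro=(direct nonblock)               # --iflag values under test
flags_rw=("${flags_ro[@]}" sync dsync)   # --oflag values: direct nonblock sync dsync
for flag_ro in "${flags_ro[@]}"; do
    gen_bytes 512 > dd.dump0             # fresh 512-byte payload per read flag
    for flag_rw in "${flags_rw[@]}"; do
        spdk_dd --aio --if=dd.dump0 --iflag="$flag_ro" \
                --of=dd.dump1 --oflag="$flag_rw"
        [[ $(< dd.dump0) == "$(< dd.dump1)" ]]   # each pass must round-trip byte for byte
    done
done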
00:38:18.880 [2024-07-12 07:48:52.507156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175056 ] 00:38:18.880 [2024-07-12 07:48:52.663747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:18.880 [2024-07-12 07:48:52.715108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:19.399  Copying: 512/512 [B] (average 500 kBps) 00:38:19.399 00:38:19.399 07:48:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ tp8052niqv7wc5o6sbum25xtea2ckqgmbdd75d219mn4jagrr1jb2eduqtw4y7aiaru1wlrldn9zwwuugmumojzoxp9ef7qzjmymfazqfayi3lp74f7ef0wcnwp1skxsusm16oohkr8gbu2lc6rmvz1c8ugisj305sx5ub5962ji8882gqd343qa8ycosgwjc5ffbhdjwhdtqj3cikara2jjrftb8ldkm9ny8wlxbcjxd56qdg3j4a4vqd2a71anjga1s6zqy1lfodf52flxdfas4qowld3vl28blhrh9nip1tqzac8ubowiia4geqx56n2dn99m9q83w4bmue9er7n0ddoeauz7afqzqoeycwyw8nb9k7p597gn4ivowtbp79ublk3o2jbqh6jonmlp1xsco127hc47aq0uwqus7aogzzl4gtw7be8hfc2o222xa8omxz6gmjsgradwnm9s4hhu3vf3r743cuook2kwgcoholykzgbor77tpxy3ns8k == \t\p\8\0\5\2\n\i\q\v\7\w\c\5\o\6\s\b\u\m\2\5\x\t\e\a\2\c\k\q\g\m\b\d\d\7\5\d\2\1\9\m\n\4\j\a\g\r\r\1\j\b\2\e\d\u\q\t\w\4\y\7\a\i\a\r\u\1\w\l\r\l\d\n\9\z\w\w\u\u\g\m\u\m\o\j\z\o\x\p\9\e\f\7\q\z\j\m\y\m\f\a\z\q\f\a\y\i\3\l\p\7\4\f\7\e\f\0\w\c\n\w\p\1\s\k\x\s\u\s\m\1\6\o\o\h\k\r\8\g\b\u\2\l\c\6\r\m\v\z\1\c\8\u\g\i\s\j\3\0\5\s\x\5\u\b\5\9\6\2\j\i\8\8\8\2\g\q\d\3\4\3\q\a\8\y\c\o\s\g\w\j\c\5\f\f\b\h\d\j\w\h\d\t\q\j\3\c\i\k\a\r\a\2\j\j\r\f\t\b\8\l\d\k\m\9\n\y\8\w\l\x\b\c\j\x\d\5\6\q\d\g\3\j\4\a\4\v\q\d\2\a\7\1\a\n\j\g\a\1\s\6\z\q\y\1\l\f\o\d\f\5\2\f\l\x\d\f\a\s\4\q\o\w\l\d\3\v\l\2\8\b\l\h\r\h\9\n\i\p\1\t\q\z\a\c\8\u\b\o\w\i\i\a\4\g\e\q\x\5\6\n\2\d\n\9\9\m\9\q\8\3\w\4\b\m\u\e\9\e\r\7\n\0\d\d\o\e\a\u\z\7\a\f\q\z\q\o\e\y\c\w\y\w\8\n\b\9\k\7\p\5\9\7\g\n\4\i\v\o\w\t\b\p\7\9\u\b\l\k\3\o\2\j\b\q\h\6\j\o\n\m\l\p\1\x\s\c\o\1\2\7\h\c\4\7\a\q\0\u\w\q\u\s\7\a\o\g\z\z\l\4\g\t\w\7\b\e\8\h\f\c\2\o\2\2\2\x\a\8\o\m\x\z\6\g\m\j\s\g\r\a\d\w\n\m\9\s\4\h\h\u\3\v\f\3\r\7\4\3\c\u\o\o\k\2\k\w\g\c\o\h\o\l\y\k\z\g\b\o\r\7\7\t\p\x\y\3\n\s\8\k ]] 00:38:19.399 07:48:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:19.399 07:48:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:38:19.399 [2024-07-12 07:48:53.143576] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:38:19.399 [2024-07-12 07:48:53.144630] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175067 ] 00:38:19.657 [2024-07-12 07:48:53.298247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:19.657 [2024-07-12 07:48:53.348089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:19.915  Copying: 512/512 [B] (average 500 kBps) 00:38:19.915 00:38:19.915 07:48:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ tp8052niqv7wc5o6sbum25xtea2ckqgmbdd75d219mn4jagrr1jb2eduqtw4y7aiaru1wlrldn9zwwuugmumojzoxp9ef7qzjmymfazqfayi3lp74f7ef0wcnwp1skxsusm16oohkr8gbu2lc6rmvz1c8ugisj305sx5ub5962ji8882gqd343qa8ycosgwjc5ffbhdjwhdtqj3cikara2jjrftb8ldkm9ny8wlxbcjxd56qdg3j4a4vqd2a71anjga1s6zqy1lfodf52flxdfas4qowld3vl28blhrh9nip1tqzac8ubowiia4geqx56n2dn99m9q83w4bmue9er7n0ddoeauz7afqzqoeycwyw8nb9k7p597gn4ivowtbp79ublk3o2jbqh6jonmlp1xsco127hc47aq0uwqus7aogzzl4gtw7be8hfc2o222xa8omxz6gmjsgradwnm9s4hhu3vf3r743cuook2kwgcoholykzgbor77tpxy3ns8k == \t\p\8\0\5\2\n\i\q\v\7\w\c\5\o\6\s\b\u\m\2\5\x\t\e\a\2\c\k\q\g\m\b\d\d\7\5\d\2\1\9\m\n\4\j\a\g\r\r\1\j\b\2\e\d\u\q\t\w\4\y\7\a\i\a\r\u\1\w\l\r\l\d\n\9\z\w\w\u\u\g\m\u\m\o\j\z\o\x\p\9\e\f\7\q\z\j\m\y\m\f\a\z\q\f\a\y\i\3\l\p\7\4\f\7\e\f\0\w\c\n\w\p\1\s\k\x\s\u\s\m\1\6\o\o\h\k\r\8\g\b\u\2\l\c\6\r\m\v\z\1\c\8\u\g\i\s\j\3\0\5\s\x\5\u\b\5\9\6\2\j\i\8\8\8\2\g\q\d\3\4\3\q\a\8\y\c\o\s\g\w\j\c\5\f\f\b\h\d\j\w\h\d\t\q\j\3\c\i\k\a\r\a\2\j\j\r\f\t\b\8\l\d\k\m\9\n\y\8\w\l\x\b\c\j\x\d\5\6\q\d\g\3\j\4\a\4\v\q\d\2\a\7\1\a\n\j\g\a\1\s\6\z\q\y\1\l\f\o\d\f\5\2\f\l\x\d\f\a\s\4\q\o\w\l\d\3\v\l\2\8\b\l\h\r\h\9\n\i\p\1\t\q\z\a\c\8\u\b\o\w\i\i\a\4\g\e\q\x\5\6\n\2\d\n\9\9\m\9\q\8\3\w\4\b\m\u\e\9\e\r\7\n\0\d\d\o\e\a\u\z\7\a\f\q\z\q\o\e\y\c\w\y\w\8\n\b\9\k\7\p\5\9\7\g\n\4\i\v\o\w\t\b\p\7\9\u\b\l\k\3\o\2\j\b\q\h\6\j\o\n\m\l\p\1\x\s\c\o\1\2\7\h\c\4\7\a\q\0\u\w\q\u\s\7\a\o\g\z\z\l\4\g\t\w\7\b\e\8\h\f\c\2\o\2\2\2\x\a\8\o\m\x\z\6\g\m\j\s\g\r\a\d\w\n\m\9\s\4\h\h\u\3\v\f\3\r\7\4\3\c\u\o\o\k\2\k\w\g\c\o\h\o\l\y\k\z\g\b\o\r\7\7\t\p\x\y\3\n\s\8\k ]] 00:38:19.915 07:48:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:19.915 07:48:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:38:19.915 [2024-07-12 07:48:53.765790] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:38:19.915 [2024-07-12 07:48:53.766055] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175082 ] 00:38:20.173 [2024-07-12 07:48:53.921468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.174 [2024-07-12 07:48:53.971367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:20.444  Copying: 512/512 [B] (average 125 kBps) 00:38:20.444 00:38:20.723 07:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ tp8052niqv7wc5o6sbum25xtea2ckqgmbdd75d219mn4jagrr1jb2eduqtw4y7aiaru1wlrldn9zwwuugmumojzoxp9ef7qzjmymfazqfayi3lp74f7ef0wcnwp1skxsusm16oohkr8gbu2lc6rmvz1c8ugisj305sx5ub5962ji8882gqd343qa8ycosgwjc5ffbhdjwhdtqj3cikara2jjrftb8ldkm9ny8wlxbcjxd56qdg3j4a4vqd2a71anjga1s6zqy1lfodf52flxdfas4qowld3vl28blhrh9nip1tqzac8ubowiia4geqx56n2dn99m9q83w4bmue9er7n0ddoeauz7afqzqoeycwyw8nb9k7p597gn4ivowtbp79ublk3o2jbqh6jonmlp1xsco127hc47aq0uwqus7aogzzl4gtw7be8hfc2o222xa8omxz6gmjsgradwnm9s4hhu3vf3r743cuook2kwgcoholykzgbor77tpxy3ns8k == \t\p\8\0\5\2\n\i\q\v\7\w\c\5\o\6\s\b\u\m\2\5\x\t\e\a\2\c\k\q\g\m\b\d\d\7\5\d\2\1\9\m\n\4\j\a\g\r\r\1\j\b\2\e\d\u\q\t\w\4\y\7\a\i\a\r\u\1\w\l\r\l\d\n\9\z\w\w\u\u\g\m\u\m\o\j\z\o\x\p\9\e\f\7\q\z\j\m\y\m\f\a\z\q\f\a\y\i\3\l\p\7\4\f\7\e\f\0\w\c\n\w\p\1\s\k\x\s\u\s\m\1\6\o\o\h\k\r\8\g\b\u\2\l\c\6\r\m\v\z\1\c\8\u\g\i\s\j\3\0\5\s\x\5\u\b\5\9\6\2\j\i\8\8\8\2\g\q\d\3\4\3\q\a\8\y\c\o\s\g\w\j\c\5\f\f\b\h\d\j\w\h\d\t\q\j\3\c\i\k\a\r\a\2\j\j\r\f\t\b\8\l\d\k\m\9\n\y\8\w\l\x\b\c\j\x\d\5\6\q\d\g\3\j\4\a\4\v\q\d\2\a\7\1\a\n\j\g\a\1\s\6\z\q\y\1\l\f\o\d\f\5\2\f\l\x\d\f\a\s\4\q\o\w\l\d\3\v\l\2\8\b\l\h\r\h\9\n\i\p\1\t\q\z\a\c\8\u\b\o\w\i\i\a\4\g\e\q\x\5\6\n\2\d\n\9\9\m\9\q\8\3\w\4\b\m\u\e\9\e\r\7\n\0\d\d\o\e\a\u\z\7\a\f\q\z\q\o\e\y\c\w\y\w\8\n\b\9\k\7\p\5\9\7\g\n\4\i\v\o\w\t\b\p\7\9\u\b\l\k\3\o\2\j\b\q\h\6\j\o\n\m\l\p\1\x\s\c\o\1\2\7\h\c\4\7\a\q\0\u\w\q\u\s\7\a\o\g\z\z\l\4\g\t\w\7\b\e\8\h\f\c\2\o\2\2\2\x\a\8\o\m\x\z\6\g\m\j\s\g\r\a\d\w\n\m\9\s\4\h\h\u\3\v\f\3\r\7\4\3\c\u\o\o\k\2\k\w\g\c\o\h\o\l\y\k\z\g\b\o\r\7\7\t\p\x\y\3\n\s\8\k ]] 00:38:20.723 07:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:20.723 07:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:38:20.723 [2024-07-12 07:48:54.404435] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
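For reference while reading the remaining passes: the four flag names map onto open(2) flags, assuming spdk_dd follows the GNU dd convention (the kernel error strings earlier in the log are consistent with that):

# direct   -> O_DIRECT    bypass the page cache, I/O goes straight to storage
# nonblock -> O_NONBLOCK  do not block on open or on I/O
# sync     -> O_SYNC      a write returns only once data and metadata are durable
# dsync    -> O_DSYNC     a write returns once data is durable; metadata may lag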
00:38:20.723 [2024-07-12 07:48:54.404692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175092 ] 00:38:20.723 [2024-07-12 07:48:54.557672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.723 [2024-07-12 07:48:54.600285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:21.251  Copying: 512/512 [B] (average 166 kBps) 00:38:21.251 00:38:21.251 07:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ tp8052niqv7wc5o6sbum25xtea2ckqgmbdd75d219mn4jagrr1jb2eduqtw4y7aiaru1wlrldn9zwwuugmumojzoxp9ef7qzjmymfazqfayi3lp74f7ef0wcnwp1skxsusm16oohkr8gbu2lc6rmvz1c8ugisj305sx5ub5962ji8882gqd343qa8ycosgwjc5ffbhdjwhdtqj3cikara2jjrftb8ldkm9ny8wlxbcjxd56qdg3j4a4vqd2a71anjga1s6zqy1lfodf52flxdfas4qowld3vl28blhrh9nip1tqzac8ubowiia4geqx56n2dn99m9q83w4bmue9er7n0ddoeauz7afqzqoeycwyw8nb9k7p597gn4ivowtbp79ublk3o2jbqh6jonmlp1xsco127hc47aq0uwqus7aogzzl4gtw7be8hfc2o222xa8omxz6gmjsgradwnm9s4hhu3vf3r743cuook2kwgcoholykzgbor77tpxy3ns8k == \t\p\8\0\5\2\n\i\q\v\7\w\c\5\o\6\s\b\u\m\2\5\x\t\e\a\2\c\k\q\g\m\b\d\d\7\5\d\2\1\9\m\n\4\j\a\g\r\r\1\j\b\2\e\d\u\q\t\w\4\y\7\a\i\a\r\u\1\w\l\r\l\d\n\9\z\w\w\u\u\g\m\u\m\o\j\z\o\x\p\9\e\f\7\q\z\j\m\y\m\f\a\z\q\f\a\y\i\3\l\p\7\4\f\7\e\f\0\w\c\n\w\p\1\s\k\x\s\u\s\m\1\6\o\o\h\k\r\8\g\b\u\2\l\c\6\r\m\v\z\1\c\8\u\g\i\s\j\3\0\5\s\x\5\u\b\5\9\6\2\j\i\8\8\8\2\g\q\d\3\4\3\q\a\8\y\c\o\s\g\w\j\c\5\f\f\b\h\d\j\w\h\d\t\q\j\3\c\i\k\a\r\a\2\j\j\r\f\t\b\8\l\d\k\m\9\n\y\8\w\l\x\b\c\j\x\d\5\6\q\d\g\3\j\4\a\4\v\q\d\2\a\7\1\a\n\j\g\a\1\s\6\z\q\y\1\l\f\o\d\f\5\2\f\l\x\d\f\a\s\4\q\o\w\l\d\3\v\l\2\8\b\l\h\r\h\9\n\i\p\1\t\q\z\a\c\8\u\b\o\w\i\i\a\4\g\e\q\x\5\6\n\2\d\n\9\9\m\9\q\8\3\w\4\b\m\u\e\9\e\r\7\n\0\d\d\o\e\a\u\z\7\a\f\q\z\q\o\e\y\c\w\y\w\8\n\b\9\k\7\p\5\9\7\g\n\4\i\v\o\w\t\b\p\7\9\u\b\l\k\3\o\2\j\b\q\h\6\j\o\n\m\l\p\1\x\s\c\o\1\2\7\h\c\4\7\a\q\0\u\w\q\u\s\7\a\o\g\z\z\l\4\g\t\w\7\b\e\8\h\f\c\2\o\2\2\2\x\a\8\o\m\x\z\6\g\m\j\s\g\r\a\d\w\n\m\9\s\4\h\h\u\3\v\f\3\r\7\4\3\c\u\o\o\k\2\k\w\g\c\o\h\o\l\y\k\z\g\b\o\r\7\7\t\p\x\y\3\n\s\8\k ]] 00:38:21.251 07:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:38:21.251 07:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:38:21.251 07:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:21.251 07:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:21.251 07:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:21.251 07:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:38:21.251 [2024-07-12 07:48:55.029902] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:38:21.251 [2024-07-12 07:48:55.030182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175104 ] 00:38:21.510 [2024-07-12 07:48:55.185335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:21.510 [2024-07-12 07:48:55.237064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:21.769  Copying: 512/512 [B] (average 500 kBps) 00:38:21.769 00:38:21.769 07:48:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 9nh2nit0pvwftml8swiaisxtyuemjpe1woj1zwryv2xzgxssl3ytv3zp72rl4h44iew7fvcl6cl2jwlyo41q1gtxcs55q5ir9gek6dhlmndkufg89tjrdfdzw2s8hzdulpeptan2vq3x6mrhnsq5lh1se5xznxcozowdixzbqu311gqkqodz8qcx4tgih0nmnl46iwe4a6mlefobfnkae37s3o0qrj3x2oixm5o022qdxiooz2ol90cdzo6haxogqqiuagxsgczzftblah21l8lvkdaj1zi0godbmo4jainthdqonbp5uybfbfvo70v5lvqqvv00aomr033dtnubfc89bfslceh5eevb3ol6lbr15imibkmk94h6o3xdthbxug4t6vwa55j5fhmgsd9roftj1jdqmtfj6o8gihwgkns1lw8kx45yy5a85ul8a6xn95defw8eo54wb8yvotovmn3tt0lj5i0g074m6dhece8f3wkydvrhe1ts0q6rpkuw == \9\n\h\2\n\i\t\0\p\v\w\f\t\m\l\8\s\w\i\a\i\s\x\t\y\u\e\m\j\p\e\1\w\o\j\1\z\w\r\y\v\2\x\z\g\x\s\s\l\3\y\t\v\3\z\p\7\2\r\l\4\h\4\4\i\e\w\7\f\v\c\l\6\c\l\2\j\w\l\y\o\4\1\q\1\g\t\x\c\s\5\5\q\5\i\r\9\g\e\k\6\d\h\l\m\n\d\k\u\f\g\8\9\t\j\r\d\f\d\z\w\2\s\8\h\z\d\u\l\p\e\p\t\a\n\2\v\q\3\x\6\m\r\h\n\s\q\5\l\h\1\s\e\5\x\z\n\x\c\o\z\o\w\d\i\x\z\b\q\u\3\1\1\g\q\k\q\o\d\z\8\q\c\x\4\t\g\i\h\0\n\m\n\l\4\6\i\w\e\4\a\6\m\l\e\f\o\b\f\n\k\a\e\3\7\s\3\o\0\q\r\j\3\x\2\o\i\x\m\5\o\0\2\2\q\d\x\i\o\o\z\2\o\l\9\0\c\d\z\o\6\h\a\x\o\g\q\q\i\u\a\g\x\s\g\c\z\z\f\t\b\l\a\h\2\1\l\8\l\v\k\d\a\j\1\z\i\0\g\o\d\b\m\o\4\j\a\i\n\t\h\d\q\o\n\b\p\5\u\y\b\f\b\f\v\o\7\0\v\5\l\v\q\q\v\v\0\0\a\o\m\r\0\3\3\d\t\n\u\b\f\c\8\9\b\f\s\l\c\e\h\5\e\e\v\b\3\o\l\6\l\b\r\1\5\i\m\i\b\k\m\k\9\4\h\6\o\3\x\d\t\h\b\x\u\g\4\t\6\v\w\a\5\5\j\5\f\h\m\g\s\d\9\r\o\f\t\j\1\j\d\q\m\t\f\j\6\o\8\g\i\h\w\g\k\n\s\1\l\w\8\k\x\4\5\y\y\5\a\8\5\u\l\8\a\6\x\n\9\5\d\e\f\w\8\e\o\5\4\w\b\8\y\v\o\t\o\v\m\n\3\t\t\0\l\j\5\i\0\g\0\7\4\m\6\d\h\e\c\e\8\f\3\w\k\y\d\v\r\h\e\1\t\s\0\q\6\r\p\k\u\w ]] 00:38:21.769 07:48:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:21.769 07:48:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:38:22.027 [2024-07-12 07:48:55.662288] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:38:22.028 [2024-07-12 07:48:55.662559] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175120 ] 00:38:22.028 [2024-07-12 07:48:55.817539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:22.028 [2024-07-12 07:48:55.858481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:22.546  Copying: 512/512 [B] (average 500 kBps) 00:38:22.546 00:38:22.546 07:48:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 9nh2nit0pvwftml8swiaisxtyuemjpe1woj1zwryv2xzgxssl3ytv3zp72rl4h44iew7fvcl6cl2jwlyo41q1gtxcs55q5ir9gek6dhlmndkufg89tjrdfdzw2s8hzdulpeptan2vq3x6mrhnsq5lh1se5xznxcozowdixzbqu311gqkqodz8qcx4tgih0nmnl46iwe4a6mlefobfnkae37s3o0qrj3x2oixm5o022qdxiooz2ol90cdzo6haxogqqiuagxsgczzftblah21l8lvkdaj1zi0godbmo4jainthdqonbp5uybfbfvo70v5lvqqvv00aomr033dtnubfc89bfslceh5eevb3ol6lbr15imibkmk94h6o3xdthbxug4t6vwa55j5fhmgsd9roftj1jdqmtfj6o8gihwgkns1lw8kx45yy5a85ul8a6xn95defw8eo54wb8yvotovmn3tt0lj5i0g074m6dhece8f3wkydvrhe1ts0q6rpkuw == \9\n\h\2\n\i\t\0\p\v\w\f\t\m\l\8\s\w\i\a\i\s\x\t\y\u\e\m\j\p\e\1\w\o\j\1\z\w\r\y\v\2\x\z\g\x\s\s\l\3\y\t\v\3\z\p\7\2\r\l\4\h\4\4\i\e\w\7\f\v\c\l\6\c\l\2\j\w\l\y\o\4\1\q\1\g\t\x\c\s\5\5\q\5\i\r\9\g\e\k\6\d\h\l\m\n\d\k\u\f\g\8\9\t\j\r\d\f\d\z\w\2\s\8\h\z\d\u\l\p\e\p\t\a\n\2\v\q\3\x\6\m\r\h\n\s\q\5\l\h\1\s\e\5\x\z\n\x\c\o\z\o\w\d\i\x\z\b\q\u\3\1\1\g\q\k\q\o\d\z\8\q\c\x\4\t\g\i\h\0\n\m\n\l\4\6\i\w\e\4\a\6\m\l\e\f\o\b\f\n\k\a\e\3\7\s\3\o\0\q\r\j\3\x\2\o\i\x\m\5\o\0\2\2\q\d\x\i\o\o\z\2\o\l\9\0\c\d\z\o\6\h\a\x\o\g\q\q\i\u\a\g\x\s\g\c\z\z\f\t\b\l\a\h\2\1\l\8\l\v\k\d\a\j\1\z\i\0\g\o\d\b\m\o\4\j\a\i\n\t\h\d\q\o\n\b\p\5\u\y\b\f\b\f\v\o\7\0\v\5\l\v\q\q\v\v\0\0\a\o\m\r\0\3\3\d\t\n\u\b\f\c\8\9\b\f\s\l\c\e\h\5\e\e\v\b\3\o\l\6\l\b\r\1\5\i\m\i\b\k\m\k\9\4\h\6\o\3\x\d\t\h\b\x\u\g\4\t\6\v\w\a\5\5\j\5\f\h\m\g\s\d\9\r\o\f\t\j\1\j\d\q\m\t\f\j\6\o\8\g\i\h\w\g\k\n\s\1\l\w\8\k\x\4\5\y\y\5\a\8\5\u\l\8\a\6\x\n\9\5\d\e\f\w\8\e\o\5\4\w\b\8\y\v\o\t\o\v\m\n\3\t\t\0\l\j\5\i\0\g\0\7\4\m\6\d\h\e\c\e\8\f\3\w\k\y\d\v\r\h\e\1\t\s\0\q\6\r\p\k\u\w ]] 00:38:22.546 07:48:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:22.546 07:48:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:38:22.546 [2024-07-12 07:48:56.276542] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:38:22.546 [2024-07-12 07:48:56.276812] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175126 ] 00:38:22.546 [2024-07-12 07:48:56.424638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:22.806 [2024-07-12 07:48:56.471330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.065  Copying: 512/512 [B] (average 125 kBps) 00:38:23.065 00:38:23.065 07:48:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 9nh2nit0pvwftml8swiaisxtyuemjpe1woj1zwryv2xzgxssl3ytv3zp72rl4h44iew7fvcl6cl2jwlyo41q1gtxcs55q5ir9gek6dhlmndkufg89tjrdfdzw2s8hzdulpeptan2vq3x6mrhnsq5lh1se5xznxcozowdixzbqu311gqkqodz8qcx4tgih0nmnl46iwe4a6mlefobfnkae37s3o0qrj3x2oixm5o022qdxiooz2ol90cdzo6haxogqqiuagxsgczzftblah21l8lvkdaj1zi0godbmo4jainthdqonbp5uybfbfvo70v5lvqqvv00aomr033dtnubfc89bfslceh5eevb3ol6lbr15imibkmk94h6o3xdthbxug4t6vwa55j5fhmgsd9roftj1jdqmtfj6o8gihwgkns1lw8kx45yy5a85ul8a6xn95defw8eo54wb8yvotovmn3tt0lj5i0g074m6dhece8f3wkydvrhe1ts0q6rpkuw == \9\n\h\2\n\i\t\0\p\v\w\f\t\m\l\8\s\w\i\a\i\s\x\t\y\u\e\m\j\p\e\1\w\o\j\1\z\w\r\y\v\2\x\z\g\x\s\s\l\3\y\t\v\3\z\p\7\2\r\l\4\h\4\4\i\e\w\7\f\v\c\l\6\c\l\2\j\w\l\y\o\4\1\q\1\g\t\x\c\s\5\5\q\5\i\r\9\g\e\k\6\d\h\l\m\n\d\k\u\f\g\8\9\t\j\r\d\f\d\z\w\2\s\8\h\z\d\u\l\p\e\p\t\a\n\2\v\q\3\x\6\m\r\h\n\s\q\5\l\h\1\s\e\5\x\z\n\x\c\o\z\o\w\d\i\x\z\b\q\u\3\1\1\g\q\k\q\o\d\z\8\q\c\x\4\t\g\i\h\0\n\m\n\l\4\6\i\w\e\4\a\6\m\l\e\f\o\b\f\n\k\a\e\3\7\s\3\o\0\q\r\j\3\x\2\o\i\x\m\5\o\0\2\2\q\d\x\i\o\o\z\2\o\l\9\0\c\d\z\o\6\h\a\x\o\g\q\q\i\u\a\g\x\s\g\c\z\z\f\t\b\l\a\h\2\1\l\8\l\v\k\d\a\j\1\z\i\0\g\o\d\b\m\o\4\j\a\i\n\t\h\d\q\o\n\b\p\5\u\y\b\f\b\f\v\o\7\0\v\5\l\v\q\q\v\v\0\0\a\o\m\r\0\3\3\d\t\n\u\b\f\c\8\9\b\f\s\l\c\e\h\5\e\e\v\b\3\o\l\6\l\b\r\1\5\i\m\i\b\k\m\k\9\4\h\6\o\3\x\d\t\h\b\x\u\g\4\t\6\v\w\a\5\5\j\5\f\h\m\g\s\d\9\r\o\f\t\j\1\j\d\q\m\t\f\j\6\o\8\g\i\h\w\g\k\n\s\1\l\w\8\k\x\4\5\y\y\5\a\8\5\u\l\8\a\6\x\n\9\5\d\e\f\w\8\e\o\5\4\w\b\8\y\v\o\t\o\v\m\n\3\t\t\0\l\j\5\i\0\g\0\7\4\m\6\d\h\e\c\e\8\f\3\w\k\y\d\v\r\h\e\1\t\s\0\q\6\r\p\k\u\w ]] 00:38:23.065 07:48:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:23.065 07:48:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:38:23.065 [2024-07-12 07:48:56.894458] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
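A rough read of the throughput figures printed so far (treating kBps as 1000 B/s): the data movement itself is tiny next to process setup.

# 512 B / 500 kBps ≈ 1.0 ms      512 B / 125 kBps ≈ 4.1 ms
# each pass still spans roughly half a second of wall time in the timestamps above,
# so almost all of it is SPDK/DPDK start-up and teardown, not the copy itself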
00:38:23.065 [2024-07-12 07:48:56.894709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175142 ] 00:38:23.325 [2024-07-12 07:48:57.049190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.325 [2024-07-12 07:48:57.090517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.584  Copying: 512/512 [B] (average 166 kBps) 00:38:23.584 00:38:23.584 07:48:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 9nh2nit0pvwftml8swiaisxtyuemjpe1woj1zwryv2xzgxssl3ytv3zp72rl4h44iew7fvcl6cl2jwlyo41q1gtxcs55q5ir9gek6dhlmndkufg89tjrdfdzw2s8hzdulpeptan2vq3x6mrhnsq5lh1se5xznxcozowdixzbqu311gqkqodz8qcx4tgih0nmnl46iwe4a6mlefobfnkae37s3o0qrj3x2oixm5o022qdxiooz2ol90cdzo6haxogqqiuagxsgczzftblah21l8lvkdaj1zi0godbmo4jainthdqonbp5uybfbfvo70v5lvqqvv00aomr033dtnubfc89bfslceh5eevb3ol6lbr15imibkmk94h6o3xdthbxug4t6vwa55j5fhmgsd9roftj1jdqmtfj6o8gihwgkns1lw8kx45yy5a85ul8a6xn95defw8eo54wb8yvotovmn3tt0lj5i0g074m6dhece8f3wkydvrhe1ts0q6rpkuw == \9\n\h\2\n\i\t\0\p\v\w\f\t\m\l\8\s\w\i\a\i\s\x\t\y\u\e\m\j\p\e\1\w\o\j\1\z\w\r\y\v\2\x\z\g\x\s\s\l\3\y\t\v\3\z\p\7\2\r\l\4\h\4\4\i\e\w\7\f\v\c\l\6\c\l\2\j\w\l\y\o\4\1\q\1\g\t\x\c\s\5\5\q\5\i\r\9\g\e\k\6\d\h\l\m\n\d\k\u\f\g\8\9\t\j\r\d\f\d\z\w\2\s\8\h\z\d\u\l\p\e\p\t\a\n\2\v\q\3\x\6\m\r\h\n\s\q\5\l\h\1\s\e\5\x\z\n\x\c\o\z\o\w\d\i\x\z\b\q\u\3\1\1\g\q\k\q\o\d\z\8\q\c\x\4\t\g\i\h\0\n\m\n\l\4\6\i\w\e\4\a\6\m\l\e\f\o\b\f\n\k\a\e\3\7\s\3\o\0\q\r\j\3\x\2\o\i\x\m\5\o\0\2\2\q\d\x\i\o\o\z\2\o\l\9\0\c\d\z\o\6\h\a\x\o\g\q\q\i\u\a\g\x\s\g\c\z\z\f\t\b\l\a\h\2\1\l\8\l\v\k\d\a\j\1\z\i\0\g\o\d\b\m\o\4\j\a\i\n\t\h\d\q\o\n\b\p\5\u\y\b\f\b\f\v\o\7\0\v\5\l\v\q\q\v\v\0\0\a\o\m\r\0\3\3\d\t\n\u\b\f\c\8\9\b\f\s\l\c\e\h\5\e\e\v\b\3\o\l\6\l\b\r\1\5\i\m\i\b\k\m\k\9\4\h\6\o\3\x\d\t\h\b\x\u\g\4\t\6\v\w\a\5\5\j\5\f\h\m\g\s\d\9\r\o\f\t\j\1\j\d\q\m\t\f\j\6\o\8\g\i\h\w\g\k\n\s\1\l\w\8\k\x\4\5\y\y\5\a\8\5\u\l\8\a\6\x\n\9\5\d\e\f\w\8\e\o\5\4\w\b\8\y\v\o\t\o\v\m\n\3\t\t\0\l\j\5\i\0\g\0\7\4\m\6\d\h\e\c\e\8\f\3\w\k\y\d\v\r\h\e\1\t\s\0\q\6\r\p\k\u\w ]] 00:38:23.584 00:38:23.584 real 0m5.014s 00:38:23.584 user 0m2.221s 00:38:23.584 sys 0m1.687s 00:38:23.584 07:48:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:23.584 07:48:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:23.584 ************************************ 00:38:23.584 END TEST dd_flags_misc_forced_aio 00:38:23.584 ************************************ 00:38:23.844 07:48:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:38:23.844 07:48:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:23.844 07:48:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:23.844 00:38:23.844 real 0m22.778s 00:38:23.844 user 0m9.362s 00:38:23.844 sys 0m7.202s 00:38:23.844 07:48:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:23.844 07:48:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:23.844 ************************************ 00:38:23.844 END TEST spdk_dd_posix 00:38:23.844 ************************************ 00:38:23.844 07:48:57 spdk_dd -- dd/dd.sh@22 
-- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:38:23.844 07:48:57 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:23.844 07:48:57 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:23.844 07:48:57 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:38:23.844 ************************************ 00:38:23.844 START TEST spdk_dd_malloc 00:38:23.844 ************************************ 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:38:23.844 * Looking for test storage... 00:38:23.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:38:23.844 ************************************ 00:38:23.844 START TEST dd_malloc_copy 00:38:23.844 ************************************ 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1121 -- # malloc_copy 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:38:23.844 07:48:57 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:38:24.104 [2024-07-12 07:48:57.764031] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:38:24.104 [2024-07-12 07:48:57.764213] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175222 ] 00:38:24.104 { 00:38:24.104 "subsystems": [ 00:38:24.104 { 00:38:24.104 "subsystem": "bdev", 00:38:24.104 "config": [ 00:38:24.104 { 00:38:24.104 "params": { 00:38:24.104 "block_size": 512, 00:38:24.104 "num_blocks": 1048576, 00:38:24.104 "name": "malloc0" 00:38:24.104 }, 00:38:24.104 "method": "bdev_malloc_create" 00:38:24.104 }, 00:38:24.104 { 00:38:24.104 "params": { 00:38:24.104 "block_size": 512, 00:38:24.104 "num_blocks": 1048576, 00:38:24.104 "name": "malloc1" 00:38:24.104 }, 00:38:24.104 "method": "bdev_malloc_create" 00:38:24.104 }, 00:38:24.104 { 00:38:24.104 "method": "bdev_wait_for_examine" 00:38:24.104 } 00:38:24.104 ] 00:38:24.104 } 00:38:24.104 ] 00:38:24.104 } 00:38:24.104 [2024-07-12 07:48:57.902159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.104 [2024-07-12 07:48:57.943825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:27.250  Copying: 244/512 [MB] (244 MBps) Copying: 488/512 [MB] (244 MBps) Copying: 512/512 [MB] (average 244 MBps) 00:38:27.250 00:38:27.250 07:49:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:38:27.250 07:49:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:38:27.250 07:49:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:38:27.250 07:49:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:38:27.250 { 00:38:27.250 "subsystems": [ 00:38:27.250 { 00:38:27.250 "subsystem": "bdev", 00:38:27.250 "config": [ 00:38:27.250 { 00:38:27.250 "params": { 00:38:27.250 "block_size": 512, 00:38:27.250 "num_blocks": 1048576, 00:38:27.250 "name": "malloc0" 00:38:27.250 }, 00:38:27.250 "method": "bdev_malloc_create" 00:38:27.250 }, 00:38:27.250 { 00:38:27.250 "params": { 00:38:27.250 "block_size": 512, 00:38:27.250 "num_blocks": 1048576, 00:38:27.250 "name": "malloc1" 00:38:27.250 }, 00:38:27.250 "method": "bdev_malloc_create" 00:38:27.250 }, 00:38:27.250 { 00:38:27.250 "method": "bdev_wait_for_examine" 00:38:27.250 } 00:38:27.250 ] 00:38:27.250 } 00:38:27.250 ] 00:38:27.250 } 00:38:27.250 [2024-07-12 07:49:01.009629] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
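A minimal repro sketch for the malloc-to-malloc copy exercised above, assuming spdk_dd was built under build/bin as in this run; the JSON mirrors the config the test pipes in over /dev/fd/62, and malloc.json is just an arbitrary scratch-file name here:

    # Two 512 MiB malloc bdevs: 1048576 blocks of 512 bytes each.
    cat > malloc.json <<'EOF'
    {"subsystems": [{"subsystem": "bdev", "config": [
      {"method": "bdev_malloc_create",
       "params": {"name": "malloc0", "num_blocks": 1048576, "block_size": 512}},
      {"method": "bdev_malloc_create",
       "params": {"name": "malloc1", "num_blocks": 1048576, "block_size": 512}},
      {"method": "bdev_wait_for_examine"}]}]}
    EOF
    # Copy one malloc bdev into the other; the test then copies back the
    # other way (--ib=malloc1 --ob=malloc0) with the same config.
    ./build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json malloc.json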
00:38:27.250 [2024-07-12 07:49:01.009881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175273 ] 00:38:27.508 [2024-07-12 07:49:01.162447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:27.508 [2024-07-12 07:49:01.204020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:30.377  Copying: 244/512 [MB] (244 MBps) Copying: 489/512 [MB] (244 MBps) Copying: 512/512 [MB] (average 244 MBps) 00:38:30.377 00:38:30.377 00:38:30.377 real 0m6.469s 00:38:30.377 user 0m5.423s 00:38:30.377 sys 0m0.887s 00:38:30.377 07:49:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:30.377 07:49:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:38:30.377 ************************************ 00:38:30.377 END TEST dd_malloc_copy 00:38:30.377 ************************************ 00:38:30.377 00:38:30.377 real 0m6.678s 00:38:30.377 user 0m5.512s 00:38:30.377 sys 0m1.022s 00:38:30.377 07:49:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:30.377 07:49:04 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:38:30.377 ************************************ 00:38:30.377 END TEST spdk_dd_malloc 00:38:30.377 ************************************ 00:38:30.634 07:49:04 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:38:30.634 07:49:04 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:38:30.634 07:49:04 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:30.634 07:49:04 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:38:30.634 ************************************ 00:38:30.634 START TEST spdk_dd_bdev_to_bdev 00:38:30.634 ************************************ 00:38:30.634 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:38:30.634 * Looking for test storage... 
00:38:30.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:38:30.634 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:30.634 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:30.634 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:30.635 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:30.635 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:30.635 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:30.635 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:30.635 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:38:30.635 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:30.635 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:38:30.635 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:38:30.635 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:38:30.635 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:38:30.635 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:38:30.635 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 
-- # bdev0=Nvme0n1 00:38:30.635 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0 00:38:30.635 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:38:30.635 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:38:30.635 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:38:30.635 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:38:30.635 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:38:30.635 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:38:30.635 07:49:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:38:30.635 [2024-07-12 07:49:04.510187] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:30.635 [2024-07-12 07:49:04.510500] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175374 ] 00:38:30.893 [2024-07-12 07:49:04.668814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:30.893 [2024-07-12 07:49:04.722877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:31.412  Copying: 256/256 [MB] (average 1158 MBps) 00:38:31.412 00:38:31.412 07:49:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:31.412 07:49:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:31.672 07:49:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:38:31.672 07:49:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:38:31.672 07:49:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:38:31.672 07:49:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:38:31.672 07:49:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:31.672 07:49:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:38:31.672 ************************************ 00:38:31.672 START TEST dd_inflate_file 00:38:31.672 ************************************ 00:38:31.672 07:49:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:38:31.672 [2024-07-12 07:49:05.379688] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
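The inflate step above needs no bdev config at all, since both ends are plain files; a sketch of the pattern, assuming the same spdk_dd binary. --oflag=append makes the run extend dd.dump0 rather than overwrite it, which is why the file size afterwards is 64 MiB plus the 27-byte magic line:

    # Append 64 x 1 MiB of zeros to the dump file; plain-file-to-plain-file
    # transfers skip the JSON bdev configuration entirely.
    ./build/bin/spdk_dd --if=/dev/zero --of=dd.dump0 --oflag=append --bs=1048576 --count=64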
00:38:31.672 [2024-07-12 07:49:05.379951] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175397 ] 00:38:31.672 [2024-07-12 07:49:05.534157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:31.931 [2024-07-12 07:49:05.586246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:32.216  Copying: 64/64 [MB] (average 984 MBps) 00:38:32.216 00:38:32.216 00:38:32.216 real 0m0.688s 00:38:32.216 user 0m0.265s 00:38:32.216 sys 0m0.293s 00:38:32.216 07:49:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:32.216 07:49:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:38:32.216 ************************************ 00:38:32.216 END TEST dd_inflate_file 00:38:32.216 ************************************ 00:38:32.216 07:49:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:38:32.216 07:49:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:38:32.216 07:49:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:38:32.216 07:49:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:38:32.216 07:49:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:38:32.216 07:49:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:32.216 07:49:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:38:32.216 07:49:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:38:32.216 07:49:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:38:32.216 ************************************ 00:38:32.216 START TEST dd_copy_to_out_bdev 00:38:32.216 ************************************ 00:38:32.216 07:49:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:38:32.475 { 00:38:32.475 "subsystems": [ 00:38:32.475 { 00:38:32.475 "subsystem": "bdev", 00:38:32.475 "config": [ 00:38:32.475 { 00:38:32.475 "params": { 00:38:32.475 "block_size": 4096, 00:38:32.475 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:38:32.475 "name": "aio1" 00:38:32.475 }, 00:38:32.475 "method": "bdev_aio_create" 00:38:32.475 }, 00:38:32.475 { 00:38:32.475 "params": { 00:38:32.475 "trtype": "pcie", 00:38:32.475 "traddr": "0000:00:10.0", 00:38:32.475 "name": "Nvme0" 00:38:32.475 }, 00:38:32.475 "method": "bdev_nvme_attach_controller" 00:38:32.475 }, 00:38:32.475 { 00:38:32.475 "method": "bdev_wait_for_examine" 00:38:32.475 } 00:38:32.475 ] 00:38:32.475 } 00:38:32.475 ] 00:38:32.475 } 00:38:32.475 [2024-07-12 07:49:06.149345] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
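The copy-to-out-bdev run above is driven entirely by the JSON shown in the log; a standalone sketch (the scratch-file name bdev.json is an assumption, the parameters are the ones this run used):

    cat > bdev.json <<'EOF'
    {"subsystems": [{"subsystem": "bdev", "config": [
      {"method": "bdev_aio_create",
       "params": {"name": "aio1",
                  "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
                  "block_size": 4096}},
      {"method": "bdev_nvme_attach_controller",
       "params": {"name": "Nvme0", "trtype": "pcie", "traddr": "0000:00:10.0"}},
      {"method": "bdev_wait_for_examine"}]}]}
    EOF
    # Push the dump file (64 MiB of zeros plus the magic suffix) out to the
    # NVMe namespace bdev Nvme0n1.
    ./build/bin/spdk_dd --if=dd.dump0 --ob=Nvme0n1 --json bdev.json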
00:38:32.475 [2024-07-12 07:49:06.149602] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175437 ] 00:38:32.475 [2024-07-12 07:49:06.305386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:32.475 [2024-07-12 07:49:06.352114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:34.419  Copying: 54/64 [MB] (54 MBps) Copying: 64/64 [MB] (average 54 MBps) 00:38:34.419 00:38:34.419 00:38:34.419 real 0m1.940s 00:38:34.419 user 0m1.568s 00:38:34.419 sys 0m0.274s 00:38:34.419 07:49:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:34.419 ************************************ 00:38:34.419 END TEST dd_copy_to_out_bdev 00:38:34.419 ************************************ 00:38:34.419 07:49:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:38:34.419 07:49:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:38:34.419 07:49:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:38:34.419 07:49:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:34.419 07:49:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:34.419 07:49:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:38:34.419 ************************************ 00:38:34.419 START TEST dd_offset_magic 00:38:34.419 ************************************ 00:38:34.419 07:49:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1121 -- # offset_magic 00:38:34.419 07:49:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:38:34.419 07:49:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:38:34.419 07:49:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:38:34.419 07:49:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:38:34.419 07:49:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:38:34.419 07:49:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:38:34.419 07:49:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:38:34.419 07:49:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:38:34.419 { 00:38:34.419 "subsystems": [ 00:38:34.419 { 00:38:34.419 "subsystem": "bdev", 00:38:34.419 "config": [ 00:38:34.419 { 00:38:34.419 "params": { 00:38:34.419 "block_size": 4096, 00:38:34.419 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:38:34.419 "name": "aio1" 00:38:34.419 }, 00:38:34.419 "method": "bdev_aio_create" 00:38:34.419 }, 00:38:34.419 { 00:38:34.419 "params": { 00:38:34.419 "trtype": "pcie", 00:38:34.419 "traddr": "0000:00:10.0", 00:38:34.419 "name": "Nvme0" 00:38:34.419 }, 00:38:34.419 "method": "bdev_nvme_attach_controller" 00:38:34.419 }, 00:38:34.419 { 00:38:34.419 "method": "bdev_wait_for_examine" 00:38:34.419 } 00:38:34.419 ] 00:38:34.419 } 
00:38:34.419 ] 00:38:34.419 } 00:38:34.419 [2024-07-12 07:49:08.154423] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:34.419 [2024-07-12 07:49:08.154700] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175488 ] 00:38:34.679 [2024-07-12 07:49:08.310220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.679 [2024-07-12 07:49:08.363598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:35.875  Copying: 65/65 [MB] (average 102 MBps) 00:38:35.875 00:38:35.875 07:49:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:38:35.875 07:49:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:38:35.875 07:49:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:38:35.875 07:49:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:38:35.875 { 00:38:35.875 "subsystems": [ 00:38:35.875 { 00:38:35.875 "subsystem": "bdev", 00:38:35.875 "config": [ 00:38:35.875 { 00:38:35.875 "params": { 00:38:35.875 "block_size": 4096, 00:38:35.875 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:38:35.875 "name": "aio1" 00:38:35.875 }, 00:38:35.875 "method": "bdev_aio_create" 00:38:35.875 }, 00:38:35.875 { 00:38:35.875 "params": { 00:38:35.875 "trtype": "pcie", 00:38:35.875 "traddr": "0000:00:10.0", 00:38:35.875 "name": "Nvme0" 00:38:35.875 }, 00:38:35.875 "method": "bdev_nvme_attach_controller" 00:38:35.875 }, 00:38:35.875 { 00:38:35.875 "method": "bdev_wait_for_examine" 00:38:35.875 } 00:38:35.875 ] 00:38:35.875 } 00:38:35.875 ] 00:38:35.875 } 00:38:35.875 [2024-07-12 07:49:09.634625] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:38:35.876 [2024-07-12 07:49:09.634947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175516 ] 00:38:36.134 [2024-07-12 07:49:09.795253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.134 [2024-07-12 07:49:09.861694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:36.651  Copying: 1024/1024 [kB] (average 333 MBps) 00:38:36.652 00:38:36.652 07:49:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:38:36.652 07:49:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:38:36.652 07:49:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:38:36.652 07:49:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:38:36.652 07:49:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:38:36.652 07:49:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:38:36.652 07:49:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:38:36.911 { 00:38:36.911 "subsystems": [ 00:38:36.911 { 00:38:36.911 "subsystem": "bdev", 00:38:36.911 "config": [ 00:38:36.911 { 00:38:36.911 "params": { 00:38:36.911 "block_size": 4096, 00:38:36.911 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:38:36.911 "name": "aio1" 00:38:36.911 }, 00:38:36.911 "method": "bdev_aio_create" 00:38:36.911 }, 00:38:36.911 { 00:38:36.911 "params": { 00:38:36.911 "trtype": "pcie", 00:38:36.911 "traddr": "0000:00:10.0", 00:38:36.911 "name": "Nvme0" 00:38:36.911 }, 00:38:36.911 "method": "bdev_nvme_attach_controller" 00:38:36.911 }, 00:38:36.911 { 00:38:36.911 "method": "bdev_wait_for_examine" 00:38:36.911 } 00:38:36.911 ] 00:38:36.911 } 00:38:36.911 ] 00:38:36.911 } 00:38:36.911 [2024-07-12 07:49:10.590160] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:38:36.911 [2024-07-12 07:49:10.590416] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175537 ] 00:38:36.911 [2024-07-12 07:49:10.745529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:37.170 [2024-07-12 07:49:10.793144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:37.998  Copying: 65/65 [MB] (average 147 MBps) 00:38:37.998 00:38:37.998 07:49:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:38:37.998 07:49:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:38:37.998 07:49:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:38:37.998 07:49:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:38:38.257 { 00:38:38.257 "subsystems": [ 00:38:38.257 { 00:38:38.257 "subsystem": "bdev", 00:38:38.258 "config": [ 00:38:38.258 { 00:38:38.258 "params": { 00:38:38.258 "block_size": 4096, 00:38:38.258 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:38:38.258 "name": "aio1" 00:38:38.258 }, 00:38:38.258 "method": "bdev_aio_create" 00:38:38.258 }, 00:38:38.258 { 00:38:38.258 "params": { 00:38:38.258 "trtype": "pcie", 00:38:38.258 "traddr": "0000:00:10.0", 00:38:38.258 "name": "Nvme0" 00:38:38.258 }, 00:38:38.258 "method": "bdev_nvme_attach_controller" 00:38:38.258 }, 00:38:38.258 { 00:38:38.258 "method": "bdev_wait_for_examine" 00:38:38.258 } 00:38:38.258 ] 00:38:38.258 } 00:38:38.258 ] 00:38:38.258 } 00:38:38.258 [2024-07-12 07:49:11.911174] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:38:38.258 [2024-07-12 07:49:11.911442] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175559 ] 00:38:38.258 [2024-07-12 07:49:12.070617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:38.258 [2024-07-12 07:49:12.128465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:39.087  Copying: 1024/1024 [kB] (average 333 MBps) 00:38:39.087 00:38:39.087 07:49:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:38:39.087 07:49:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:38:39.087 00:38:39.087 real 0m4.729s 00:38:39.087 user 0m2.217s 00:38:39.087 sys 0m1.185s 00:38:39.087 07:49:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:39.087 07:49:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:38:39.087 ************************************ 00:38:39.087 END TEST dd_offset_magic 00:38:39.087 ************************************ 00:38:39.087 07:49:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:38:39.087 07:49:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:38:39.087 07:49:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:39.087 07:49:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:38:39.087 07:49:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:38:39.087 07:49:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:38:39.087 07:49:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:38:39.087 07:49:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:38:39.087 07:49:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:38:39.087 07:49:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:38:39.087 07:49:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:38:39.087 { 00:38:39.087 "subsystems": [ 00:38:39.087 { 00:38:39.087 "subsystem": "bdev", 00:38:39.087 "config": [ 00:38:39.087 { 00:38:39.087 "params": { 00:38:39.087 "block_size": 4096, 00:38:39.087 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:38:39.087 "name": "aio1" 00:38:39.087 }, 00:38:39.087 "method": "bdev_aio_create" 00:38:39.087 }, 00:38:39.087 { 00:38:39.087 "params": { 00:38:39.087 "trtype": "pcie", 00:38:39.087 "traddr": "0000:00:10.0", 00:38:39.087 "name": "Nvme0" 00:38:39.087 }, 00:38:39.087 "method": "bdev_nvme_attach_controller" 00:38:39.087 }, 00:38:39.087 { 00:38:39.087 "method": "bdev_wait_for_examine" 00:38:39.087 } 00:38:39.087 ] 00:38:39.087 } 00:38:39.087 ] 00:38:39.087 } 00:38:39.087 [2024-07-12 07:49:12.955008] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
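The offset-magic cycle that just finished boils down to a seek/skip round trip over the bdevs; a sketch using the same bdev.json config as above. The redirect feeding read(1) from dd.dump1 is implied by the harness rather than shown verbatim in this log:

    # Write 65 MiB starting 16 MiB into aio1; the source data begins with
    # the magic line planted at the front of dd.dump0.
    ./build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --bs=1048576 --count=65 --seek=16 --json bdev.json
    # Read one block back from the same offset into a scratch file.
    ./build/bin/spdk_dd --ib=aio1 --of=dd.dump1 --bs=1048576 --count=1 --skip=16 --json bdev.json
    # The first 26 bytes read back must match the magic string.
    read -rn26 magic_check < dd.dump1
    [[ $magic_check == 'This Is Our Magic, find it' ]]

The test repeats the cycle at offset 64 with --seek=64/--skip=64, which is the second write/read pair in the log above.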
00:38:39.087 [2024-07-12 07:49:12.955300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175593 ] 00:38:39.346 [2024-07-12 07:49:13.110863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:39.346 [2024-07-12 07:49:13.162835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:39.862  Copying: 5120/5120 [kB] (average 1000 MBps) 00:38:39.862 00:38:39.862 07:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:38:39.862 07:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=aio1 00:38:39.862 07:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:38:39.862 07:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:38:39.862 07:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:38:39.862 07:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:38:39.862 07:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:38:39.862 07:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:38:39.862 07:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:38:39.862 07:49:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:38:39.863 { 00:38:39.863 "subsystems": [ 00:38:39.863 { 00:38:39.863 "subsystem": "bdev", 00:38:39.863 "config": [ 00:38:39.863 { 00:38:39.863 "params": { 00:38:39.863 "block_size": 4096, 00:38:39.863 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:38:39.863 "name": "aio1" 00:38:39.863 }, 00:38:39.863 "method": "bdev_aio_create" 00:38:39.863 }, 00:38:39.863 { 00:38:39.863 "params": { 00:38:39.863 "trtype": "pcie", 00:38:39.863 "traddr": "0000:00:10.0", 00:38:39.863 "name": "Nvme0" 00:38:39.863 }, 00:38:39.863 "method": "bdev_nvme_attach_controller" 00:38:39.863 }, 00:38:39.863 { 00:38:39.863 "method": "bdev_wait_for_examine" 00:38:39.863 } 00:38:39.863 ] 00:38:39.863 } 00:38:39.863 ] 00:38:39.863 } 00:38:39.863 [2024-07-12 07:49:13.707236] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
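The cleanup's clear_nvme helper is just a zero-fill over the region the test touched: a size of 4194330 bytes rounds up to five 1 MiB blocks, matching the 5120/5120 kB copies here. A sketch of the equivalent calls:

    # Zero the first ceil(4194330 B / 1 MiB) = 5 blocks of each bdev used.
    ./build/bin/spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=5 --json bdev.json
    ./build/bin/spdk_dd --if=/dev/zero --ob=aio1 --bs=1048576 --count=5 --json bdev.json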
00:38:39.863 [2024-07-12 07:49:13.707494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175614 ] 00:38:40.121 [2024-07-12 07:49:13.861876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:40.121 [2024-07-12 07:49:13.907724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:40.640  Copying: 5120/5120 [kB] (average 156 MBps) 00:38:40.640 00:38:40.640 07:49:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:38:40.640 00:38:40.640 real 0m10.197s 00:38:40.640 user 0m5.337s 00:38:40.640 sys 0m2.929s 00:38:40.640 07:49:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:40.640 ************************************ 00:38:40.640 END TEST spdk_dd_bdev_to_bdev 00:38:40.640 ************************************ 00:38:40.640 07:49:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:38:40.900 07:49:14 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:38:40.900 07:49:14 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:38:40.900 07:49:14 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:40.900 07:49:14 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:40.900 07:49:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:38:40.900 ************************************ 00:38:40.900 START TEST spdk_dd_sparse 00:38:40.900 ************************************ 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:38:40.900 * Looking for test storage... 
00:38:40.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- 
# lvol=dd_lvol 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:38:40.900 1+0 records in 00:38:40.900 1+0 records out 00:38:40.900 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0121323 s, 346 MB/s 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:38:40.900 1+0 records in 00:38:40.900 1+0 records out 00:38:40.900 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0122547 s, 342 MB/s 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:38:40.900 1+0 records in 00:38:40.900 1+0 records out 00:38:40.900 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00917415 s, 457 MB/s 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:38:40.900 ************************************ 00:38:40.900 START TEST dd_sparse_file_to_file 00:38:40.900 ************************************ 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1121 -- # file_to_file 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:38:40.900 07:49:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:38:41.160 { 00:38:41.160 "subsystems": [ 00:38:41.160 { 00:38:41.160 "subsystem": "bdev", 00:38:41.160 "config": [ 00:38:41.160 { 00:38:41.160 "params": { 00:38:41.160 "block_size": 4096, 00:38:41.160 "filename": "dd_sparse_aio_disk", 00:38:41.160 "name": "dd_aio" 00:38:41.160 }, 00:38:41.160 "method": "bdev_aio_create" 00:38:41.160 }, 00:38:41.160 { 00:38:41.160 "params": { 00:38:41.160 "lvs_name": "dd_lvstore", 00:38:41.160 "bdev_name": 
"dd_aio" 00:38:41.160 }, 00:38:41.160 "method": "bdev_lvol_create_lvstore" 00:38:41.160 }, 00:38:41.160 { 00:38:41.160 "method": "bdev_wait_for_examine" 00:38:41.160 } 00:38:41.160 ] 00:38:41.160 } 00:38:41.160 ] 00:38:41.160 } 00:38:41.160 [2024-07-12 07:49:14.837583] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:41.160 [2024-07-12 07:49:14.837867] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175692 ] 00:38:41.160 [2024-07-12 07:49:14.994216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:41.420 [2024-07-12 07:49:15.046878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:41.680  Copying: 12/36 [MB] (average 1090 MBps) 00:38:41.680 00:38:41.680 07:49:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:38:41.680 07:49:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:38:41.680 07:49:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:38:41.680 07:49:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:38:41.680 07:49:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:38:41.680 07:49:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:38:41.680 07:49:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:38:41.680 07:49:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:38:41.939 07:49:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:38:41.939 07:49:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:38:41.939 00:38:41.939 real 0m0.796s 00:38:41.939 user 0m0.390s 00:38:41.939 sys 0m0.269s 00:38:41.939 07:49:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:41.939 07:49:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:38:41.939 ************************************ 00:38:41.939 END TEST dd_sparse_file_to_file 00:38:41.939 ************************************ 00:38:41.939 07:49:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:38:41.939 07:49:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:41.939 07:49:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:41.939 07:49:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:38:41.939 ************************************ 00:38:41.939 START TEST dd_sparse_file_to_bdev 00:38:41.939 ************************************ 00:38:41.939 07:49:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1121 -- # file_to_bdev 00:38:41.939 07:49:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:38:41.939 07:49:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:38:41.939 07:49:15 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:38:41.939 07:49:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:38:41.939 07:49:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:38:41.939 07:49:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:38:41.939 07:49:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:38:41.939 07:49:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:38:41.940 { 00:38:41.940 "subsystems": [ 00:38:41.940 { 00:38:41.940 "subsystem": "bdev", 00:38:41.940 "config": [ 00:38:41.940 { 00:38:41.940 "params": { 00:38:41.940 "block_size": 4096, 00:38:41.940 "filename": "dd_sparse_aio_disk", 00:38:41.940 "name": "dd_aio" 00:38:41.940 }, 00:38:41.940 "method": "bdev_aio_create" 00:38:41.940 }, 00:38:41.940 { 00:38:41.940 "params": { 00:38:41.940 "lvs_name": "dd_lvstore", 00:38:41.940 "lvol_name": "dd_lvol", 00:38:41.940 "size_in_mib": 36, 00:38:41.940 "thin_provision": true 00:38:41.940 }, 00:38:41.940 "method": "bdev_lvol_create" 00:38:41.940 }, 00:38:41.940 { 00:38:41.940 "method": "bdev_wait_for_examine" 00:38:41.940 } 00:38:41.940 ] 00:38:41.940 } 00:38:41.940 ] 00:38:41.940 } 00:38:41.940 [2024-07-12 07:49:15.698115] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:41.940 [2024-07-12 07:49:15.698314] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175738 ] 00:38:42.199 [2024-07-12 07:49:15.842215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:42.199 [2024-07-12 07:49:15.882783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:42.459  Copying: 12/36 [MB] (average 461 MBps) 00:38:42.459 00:38:42.719 00:38:42.719 real 0m0.710s 00:38:42.719 user 0m0.345s 00:38:42.719 sys 0m0.241s 00:38:42.719 07:49:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:42.719 07:49:16 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:38:42.719 ************************************ 00:38:42.719 END TEST dd_sparse_file_to_bdev 00:38:42.719 ************************************ 00:38:42.719 07:49:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:38:42.719 07:49:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:42.719 07:49:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:42.719 07:49:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:38:42.719 ************************************ 00:38:42.719 START TEST dd_sparse_bdev_to_file 00:38:42.719 ************************************ 00:38:42.719 07:49:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1121 -- # bdev_to_file 00:38:42.719 07:49:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:38:42.719 
07:49:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:38:42.719 07:49:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:38:42.719 07:49:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:38:42.719 07:49:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:38:42.719 07:49:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:38:42.719 07:49:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:38:42.719 07:49:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:38:42.719 { 00:38:42.719 "subsystems": [ 00:38:42.719 { 00:38:42.719 "subsystem": "bdev", 00:38:42.719 "config": [ 00:38:42.719 { 00:38:42.719 "params": { 00:38:42.719 "block_size": 4096, 00:38:42.719 "filename": "dd_sparse_aio_disk", 00:38:42.719 "name": "dd_aio" 00:38:42.719 }, 00:38:42.719 "method": "bdev_aio_create" 00:38:42.719 }, 00:38:42.719 { 00:38:42.719 "method": "bdev_wait_for_examine" 00:38:42.719 } 00:38:42.719 ] 00:38:42.719 } 00:38:42.719 ] 00:38:42.719 } 00:38:42.719 [2024-07-12 07:49:16.493098] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:42.719 [2024-07-12 07:49:16.493405] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175783 ] 00:38:42.979 [2024-07-12 07:49:16.649278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:42.979 [2024-07-12 07:49:16.701908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:43.238  Copying: 12/36 [MB] (average 923 MBps) 00:38:43.238 00:38:43.499 07:49:17 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:38:43.499 07:49:17 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:38:43.499 07:49:17 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:38:43.499 07:49:17 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:38:43.499 07:49:17 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:38:43.499 07:49:17 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:38:43.499 07:49:17 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:38:43.499 07:49:17 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:38:43.499 07:49:17 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:38:43.499 07:49:17 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:38:43.499 00:38:43.499 real 0m0.741s 00:38:43.499 user 0m0.359s 00:38:43.499 sys 0m0.272s 00:38:43.499 07:49:17 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:43.499 ************************************ 
00:38:43.499 07:49:17 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:38:43.499 END TEST dd_sparse_bdev_to_file 00:38:43.499 ************************************ 00:38:43.499 07:49:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:38:43.499 07:49:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:38:43.499 07:49:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:38:43.499 07:49:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:38:43.499 07:49:17 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:38:43.499 00:38:43.499 real 0m2.676s 00:38:43.499 user 0m1.256s 00:38:43.499 sys 0m1.056s 00:38:43.499 07:49:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:43.499 07:49:17 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:38:43.499 ************************************ 00:38:43.499 END TEST spdk_dd_sparse 00:38:43.499 ************************************ 00:38:43.499 07:49:17 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:38:43.499 07:49:17 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:43.499 07:49:17 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:43.499 07:49:17 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:38:43.499 ************************************ 00:38:43.499 START TEST spdk_dd_negative 00:38:43.499 ************************************ 00:38:43.499 07:49:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:38:43.759 * Looking for test storage... 00:38:43.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:38:43.759 07:49:17 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:43.759 07:49:17 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:43.759 07:49:17 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:43.759 07:49:17 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:43.759 07:49:17 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:38:43.760 ************************************ 00:38:43.760 START TEST dd_invalid_arguments 00:38:43.760 ************************************ 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1121 -- # invalid_arguments 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- 
common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:38:43.760 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:38:43.760 00:38:43.760 CPU options: 00:38:43.760 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:38:43.760 (like [0,1,10]) 00:38:43.760 --lcores lcore to CPU mapping list. The list is in the format: 00:38:43.760 <lcores[@CPUs]>[<,lcores[@CPUs]>...] 00:38:43.760 lcores and cpus list are grouped by '(' and ')', e.g. '--lcores "(5-7)@(10-12)"' 00:38:43.760 Within the group, '-' is used for range separator, 00:38:43.760 ',' is used for single number separator. 00:38:43.760 '( )' can be omitted for single element group, 00:38:43.760 '@' can be omitted if cpus and lcores have the same value 00:38:43.760 --disable-cpumask-locks Disable CPU core lock files. 00:38:43.760 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:38:43.760 pollers in the app support interrupt mode) 00:38:43.760 -p, --main-core main (primary) core for DPDK 00:38:43.760 00:38:43.760 Configuration options: 00:38:43.760 -c, --config, --json JSON config file 00:38:43.760 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:38:43.760 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value.
00:38:43.760 --wait-for-rpc wait for RPCs to initialize subsystems 00:38:43.760 --rpcs-allowed comma-separated list of permitted RPCs 00:38:43.760 --json-ignore-init-errors don't exit on invalid config entry 00:38:43.760 00:38:43.760 Memory options: 00:38:43.760 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:38:43.760 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:38:43.760 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:38:43.760 -R, --huge-unlink unlink huge files after initialization 00:38:43.760 -n, --mem-channels number of memory channels used for DPDK 00:38:43.760 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:38:43.760 --msg-mempool-size global message memory pool size in count (default: 262143) 00:38:43.760 --no-huge run without using hugepages 00:38:43.760 -i, --shm-id shared memory ID (optional) 00:38:43.760 -g, --single-file-segments force creating just one hugetlbfs file 00:38:43.760 00:38:43.760 PCI options: 00:38:43.760 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:38:43.760 -B, --pci-blocked pci addr to block (can be used more than once) 00:38:43.760 -u, --no-pci disable PCI access 00:38:43.760 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:38:43.760 00:38:43.760 Log options: 00:38:43.760 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:38:43.760 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:38:43.760 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, 00:38:43.760 bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, 00:38:43.760 blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:38:43.760 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:38:43.760 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:38:43.760 sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, 00:38:43.760 vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, 00:38:43.760 vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:38:43.760 virtio_vfio_user, vmd) 00:38:43.760 --silence-noticelog disable notice level logging to stderr 00:38:43.760 00:38:43.760 Trace options: 00:38:43.760 --num-trace-entries number of trace entries for each core, must be power of 2, 00:38:43.760 setting 0 to disable trace (default 32768) 00:38:43.760 Tracepoints vary in size and can use more than one trace entry. 00:38:43.760 -e, --tpoint-group <group-name>[:<tpoint_mask>] 00:38:43.760 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:38:43.760 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:38:43.760 [2024-07-12 07:49:17.547268] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:38:43.760 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:38:43.760 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:38:43.760 a tracepoint group. First tpoint inside a group can be enabled by 00:38:43.760 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:38:43.760 combined (e.g. thread,bdev:0x1).
All available tpoints can be found 00:38:43.760 in /include/spdk_internal/trace_defs.h 00:38:43.760 00:38:43.760 Other options: 00:38:43.760 -h, --help show this usage 00:38:43.760 -v, --version print SPDK version 00:38:43.760 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:38:43.760 --env-context Opaque context for use of the env implementation 00:38:43.760 00:38:43.760 Application specific: 00:38:43.760 [--------- DD Options ---------] 00:38:43.760 --if Input file. Must specify either --if or --ib. 00:38:43.760 --ib Input bdev. Must specify either --if or --ib. 00:38:43.760 --of Output file. Must specify either --of or --ob. 00:38:43.760 --ob Output bdev. Must specify either --of or --ob. 00:38:43.760 --iflag Input file flags. 00:38:43.760 --oflag Output file flags. 00:38:43.760 --bs I/O unit size (default: 4096) 00:38:43.760 --qd Queue depth (default: 2) 00:38:43.760 --count I/O unit count. The number of I/O units to copy. (default: all) 00:38:43.760 --skip Skip this many I/O units at start of input. (default: 0) 00:38:43.760 --seek Skip this many I/O units at start of output. (default: 0) 00:38:43.760 --aio Force usage of AIO. (by default io_uring is used if available) 00:38:43.760 --sparse Enable hole skipping in input target 00:38:43.760 Available iflag and oflag values: 00:38:43.760 append - append mode 00:38:43.760 direct - use direct I/O for data 00:38:43.760 directory - fail unless a directory 00:38:43.760 dsync - use synchronized I/O for data 00:38:43.760 noatime - do not update access time 00:38:43.760 noctty - do not assign controlling terminal from file 00:38:43.760 nofollow - do not follow symlinks 00:38:43.760 nonblock - use non-blocking I/O 00:38:43.760 sync - use synchronized I/O for data and metadata 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:43.760 07:49:17 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:43.760 00:38:43.760 real 0m0.144s 00:38:43.760 user 0m0.073s 00:38:43.760 sys 0m0.072s 00:38:43.761 07:49:17 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:43.761 07:49:17 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:38:43.761 ************************************ 00:38:43.761 END TEST dd_invalid_arguments 00:38:43.761 ************************************ 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:38:44.021 ************************************ 00:38:44.021 START TEST dd_double_input 00:38:44.021 ************************************ 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1121 -- # double_input 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:38:44.021 [2024-07-12 07:49:17.766944] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
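[Editor's note] Every negative test in this suite follows the pattern traced above: an intentionally invalid spdk_dd invocation is wrapped in the harness's NOT helper, which succeeds only when the wrapped command fails. A simplified sketch of that idiom (not the verbatim autotest_common.sh implementation, which also remaps exit statuses) together with the flag conflict just exercised:

NOT() {
    # Run a command that is expected to fail and invert its exit status.
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return 1   # killed by a signal: a real failure
    (( es != 0 ))                # pass only if the command errored out
}

# Passes: --if and --ib both name an input, so spdk_dd rejects the call.
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=

# A valid copy names exactly one input and one output target:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096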
00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:44.021 00:38:44.021 real 0m0.143s 00:38:44.021 user 0m0.069s 00:38:44.021 sys 0m0.074s 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:44.021 ************************************ 00:38:44.021 END TEST dd_double_input 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:38:44.021 ************************************ 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:44.021 07:49:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:38:44.282 ************************************ 00:38:44.282 START TEST dd_double_output 00:38:44.282 ************************************ 00:38:44.282 07:49:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1121 -- # double_output 00:38:44.282 07:49:17 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:38:44.282 07:49:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:38:44.282 07:49:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:38:44.282 07:49:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.282 07:49:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:44.282 07:49:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.282 07:49:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:44.282 07:49:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.282 07:49:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:44.282 07:49:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.282 07:49:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:44.282 07:49:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:38:44.282 [2024-07-12 07:49:17.991442] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:44.282 00:38:44.282 real 0m0.147s 00:38:44.282 user 0m0.055s 00:38:44.282 sys 0m0.094s 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:44.282 ************************************ 00:38:44.282 END TEST dd_double_output 00:38:44.282 ************************************ 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:38:44.282 ************************************ 00:38:44.282 START TEST dd_no_input 00:38:44.282 ************************************ 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1121 -- # no_input 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:44.282 07:49:18 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:38:44.543 [2024-07-12 07:49:18.213558] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:44.543 00:38:44.543 real 0m0.146s 00:38:44.543 user 0m0.049s 00:38:44.543 sys 0m0.098s 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:38:44.543 ************************************ 00:38:44.543 END TEST dd_no_input 00:38:44.543 ************************************ 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:38:44.543 ************************************ 00:38:44.543 START TEST dd_no_output 00:38:44.543 ************************************ 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1121 -- # no_output 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:44.543 07:49:18 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:44.803 [2024-07-12 07:49:18.442262] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:38:44.803 07:49:18 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:44.803 00:38:44.803 real 0m0.146s 00:38:44.803 user 0m0.055s 00:38:44.803 sys 0m0.092s 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:38:44.803 ************************************ 00:38:44.803 END TEST dd_no_output 00:38:44.803 ************************************ 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:38:44.803 ************************************ 00:38:44.803 START TEST dd_wrong_blocksize 00:38:44.803 ************************************ 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1121 -- # wrong_blocksize 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:44.803 07:49:18 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:38:44.803 [2024-07-12 07:49:18.654330] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:45.063 00:38:45.063 real 0m0.133s 00:38:45.063 user 0m0.054s 00:38:45.063 sys 0m0.079s 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:38:45.063 ************************************ 00:38:45.063 END TEST dd_wrong_blocksize 00:38:45.063 ************************************ 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:38:45.063 ************************************ 00:38:45.063 START TEST dd_smaller_blocksize 00:38:45.063 ************************************ 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1121 -- # smaller_blocksize 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:45.063 
07:49:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:45.063 07:49:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:38:45.063 [2024-07-12 07:49:18.877130] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:45.063 [2024-07-12 07:49:18.877460] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176047 ] 00:38:45.323 [2024-07-12 07:49:19.039987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:45.323 [2024-07-12 07:49:19.100365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:45.583 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:38:45.583 [2024-07-12 07:49:19.322395] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:38:45.583 [2024-07-12 07:49:19.322534] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:45.583 [2024-07-12 07:49:19.460219] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:45.843 00:38:45.843 real 0m0.809s 00:38:45.843 user 0m0.375s 00:38:45.843 sys 0m0.333s 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:38:45.843 ************************************ 00:38:45.843 END TEST dd_smaller_blocksize 00:38:45.843 ************************************ 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:38:45.843 ************************************ 00:38:45.843 START TEST dd_invalid_count 00:38:45.843 ************************************ 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1121 -- # invalid_count 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
--count=-9 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:45.843 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:38:46.103 [2024-07-12 07:49:19.735833] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:46.103 00:38:46.103 real 0m0.116s 00:38:46.103 user 0m0.041s 00:38:46.103 sys 0m0.075s 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:38:46.103 ************************************ 00:38:46.103 END TEST dd_invalid_count 00:38:46.103 ************************************ 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:38:46.103 ************************************ 00:38:46.103 START TEST dd_invalid_oflag 00:38:46.103 ************************************ 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1121 -- # invalid_oflag 00:38:46.103 07:49:19 
spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:46.103 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:38:46.103 [2024-07-12 07:49:19.922774] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:38:46.104 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:38:46.104 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:46.104 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:46.104 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:46.104 00:38:46.104 real 0m0.109s 00:38:46.104 user 0m0.045s 00:38:46.104 sys 0m0.065s 00:38:46.104 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:46.104 ************************************ 00:38:46.104 END TEST dd_invalid_oflag 00:38:46.104 07:49:19 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:38:46.104 ************************************ 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:38:46.363 ************************************ 00:38:46.363 START TEST dd_invalid_iflag 00:38:46.363 ************************************ 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1121 -- # invalid_iflag 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- 
dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:38:46.363 [2024-07-12 07:49:20.087745] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:46.363 00:38:46.363 real 0m0.099s 00:38:46.363 user 0m0.047s 00:38:46.363 sys 0m0.052s 00:38:46.363 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:46.364 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:38:46.364 ************************************ 00:38:46.364 END TEST dd_invalid_iflag 00:38:46.364 ************************************ 00:38:46.364 07:49:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:38:46.364 07:49:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:46.364 07:49:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:46.364 07:49:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:38:46.364 ************************************ 00:38:46.364 START TEST dd_unknown_flag 00:38:46.364 ************************************ 00:38:46.364 07:49:20 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1121 -- # unknown_flag 00:38:46.364 07:49:20 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:38:46.364 07:49:20 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:38:46.364 07:49:20 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:38:46.364 07:49:20 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:46.364 07:49:20 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:46.364 07:49:20 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:46.364 07:49:20 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:46.364 07:49:20 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:46.364 07:49:20 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:46.364 07:49:20 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:46.364 07:49:20 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:46.364 07:49:20 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:38:46.623 [2024-07-12 07:49:20.278970] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:38:46.624 [2024-07-12 07:49:20.279237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176172 ] 00:38:46.624 [2024-07-12 07:49:20.433465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.624 [2024-07-12 07:49:20.476246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:46.884 [2024-07-12 07:49:20.537334] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:38:46.884 [2024-07-12 07:49:20.537423] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:46.884  Copying: 0/0 [B] (average 0 Bps)[2024-07-12 07:49:20.537591] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:38:46.884 [2024-07-12 07:49:20.639417] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:47.146 00:38:47.146 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:47.146 00:38:47.146 real 0m0.617s 00:38:47.146 user 0m0.292s 00:38:47.146 sys 0m0.191s 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:38:47.146 ************************************ 00:38:47.146 END TEST dd_unknown_flag 00:38:47.146 ************************************ 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:38:47.146 ************************************ 00:38:47.146 START TEST dd_invalid_json 00:38:47.146 ************************************ 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1121 -- # invalid_json 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:47.146 07:49:20 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:38:47.146 [2024-07-12 07:49:20.973332] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:47.146 [2024-07-12 07:49:20.974604] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176198 ] 00:38:47.423 [2024-07-12 07:49:21.130800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:47.423 [2024-07-12 07:49:21.183279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:47.423 [2024-07-12 07:49:21.183382] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:38:47.423 [2024-07-12 07:49:21.183419] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:38:47.423 [2024-07-12 07:49:21.183449] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:47.423 [2024-07-12 07:49:21.183538] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:47.691 07:49:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:38:47.692 07:49:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:47.692 07:49:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:38:47.692 07:49:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:38:47.692 07:49:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:38:47.692 07:49:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:47.692 00:38:47.692 real 0m0.426s 00:38:47.692 user 0m0.174s 00:38:47.692 sys 0m0.153s 00:38:47.692 07:49:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:47.692 07:49:21 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:38:47.692 ************************************ 00:38:47.692 END TEST 
dd_invalid_json 00:38:47.692 ************************************ 00:38:47.692 00:38:47.692 real 0m4.049s 00:38:47.692 user 0m1.796s 00:38:47.692 sys 0m1.928s 00:38:47.692 07:49:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:47.692 07:49:21 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:38:47.692 ************************************ 00:38:47.692 END TEST spdk_dd_negative 00:38:47.692 ************************************ 00:38:47.692 00:38:47.692 real 1m7.020s 00:38:47.692 user 0m35.077s 00:38:47.692 sys 0m21.183s 00:38:47.692 07:49:21 spdk_dd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:47.692 07:49:21 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:38:47.692 ************************************ 00:38:47.692 END TEST spdk_dd 00:38:47.692 ************************************ 00:38:47.692 07:49:21 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:38:47.692 07:49:21 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:38:47.692 07:49:21 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:38:47.692 07:49:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:47.692 07:49:21 -- common/autotest_common.sh@10 -- # set +x 00:38:47.692 ************************************ 00:38:47.692 START TEST blockdev_nvme 00:38:47.692 ************************************ 00:38:47.692 07:49:21 blockdev_nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:38:47.970 * Looking for test storage... 00:38:47.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:38:47.970 07:49:21 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 
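[Editor's note] The blockdev_nvme suite that starts here drives spdk_tgt with a bdev configuration produced by scripts/gen_nvme.sh; the JSON it loads a few lines below via load_subsystem_config is equivalent to the following standalone file (the /tmp path and heredoc are illustrative, not part of the harness):

cat > /tmp/nvme_bdev.json <<'EOF'
{
  "subsystem": "bdev",
  "config": [
    {
      "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
    }
  ]
}
EOF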
00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=176295 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:38:47.970 07:49:21 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 176295 00:38:47.970 07:49:21 blockdev_nvme -- common/autotest_common.sh@827 -- # '[' -z 176295 ']' 00:38:47.970 07:49:21 blockdev_nvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:47.970 07:49:21 blockdev_nvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:38:47.970 07:49:21 blockdev_nvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:47.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:47.970 07:49:21 blockdev_nvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:47.970 07:49:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:38:47.970 [2024-07-12 07:49:21.719377] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:47.970 [2024-07-12 07:49:21.719633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176295 ] 00:38:48.242 [2024-07-12 07:49:21.874056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.242 [2024-07-12 07:49:21.917666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:48.811 07:49:22 blockdev_nvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:48.811 07:49:22 blockdev_nvme -- common/autotest_common.sh@860 -- # return 0 00:38:48.811 07:49:22 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:38:48.811 07:49:22 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:38:48.811 07:49:22 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:38:48.811 07:49:22 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:38:48.811 07:49:22 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:38:48.811 07:49:22 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:38:48.811 07:49:22 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.811 07:49:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:38:49.071 07:49:22 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:49.071 07:49:22 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:38:49.071 07:49:22 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:49.072 07:49:22 blockdev_nvme -- bdev/blockdev.sh@740 
-- # cat 00:38:49.072 07:49:22 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:49.072 07:49:22 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:49.072 07:49:22 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:49.072 07:49:22 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:38:49.072 07:49:22 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:49.072 07:49:22 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:49.072 07:49:22 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:38:49.072 07:49:22 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "4022fbf9-ffbd-43d6-b4b5-aee8f2915d3d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "4022fbf9-ffbd-43d6-b4b5-aee8f2915d3d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:38:49.072 07:49:22 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:38:49.072 07:49:22 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:38:49.072 07:49:22 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:38:49.072 07:49:22 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:38:49.072 07:49:22 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 176295 00:38:49.072 07:49:22 
blockdev_nvme -- common/autotest_common.sh@946 -- # '[' -z 176295 ']' 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@950 -- # kill -0 176295 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@951 -- # uname 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 176295 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@964 -- # echo 'killing process with pid 176295' 00:38:49.072 killing process with pid 176295 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@965 -- # kill 176295 00:38:49.072 07:49:22 blockdev_nvme -- common/autotest_common.sh@970 -- # wait 176295 00:38:49.640 07:49:23 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:38:49.640 07:49:23 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:38:49.640 07:49:23 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:38:49.640 07:49:23 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:49.640 07:49:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:38:49.640 ************************************ 00:38:49.640 START TEST bdev_hello_world 00:38:49.640 ************************************ 00:38:49.640 07:49:23 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:38:49.640 [2024-07-12 07:49:23.400935] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:49.640 [2024-07-12 07:49:23.401195] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176361 ] 00:38:49.900 [2024-07-12 07:49:23.554194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:49.900 [2024-07-12 07:49:23.600720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:50.159 [2024-07-12 07:49:23.788668] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:38:50.159 [2024-07-12 07:49:23.788739] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:38:50.159 [2024-07-12 07:49:23.788779] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:38:50.159 [2024-07-12 07:49:23.790940] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:38:50.159 [2024-07-12 07:49:23.791586] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:38:50.159 [2024-07-12 07:49:23.791628] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:38:50.159 [2024-07-12 07:49:23.791883] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:38:50.159 00:38:50.159 [2024-07-12 07:49:23.791930] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:38:50.418 00:38:50.418 real 0m0.713s 00:38:50.418 user 0m0.434s 00:38:50.418 sys 0m0.181s 00:38:50.418 07:49:24 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:50.418 07:49:24 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:38:50.418 ************************************ 00:38:50.418 END TEST bdev_hello_world 00:38:50.418 ************************************ 00:38:50.418 07:49:24 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:38:50.418 07:49:24 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:38:50.418 07:49:24 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:50.418 07:49:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:38:50.418 ************************************ 00:38:50.418 START TEST bdev_bounds 00:38:50.418 ************************************ 00:38:50.418 07:49:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:38:50.418 07:49:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=176383 00:38:50.418 07:49:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:38:50.418 07:49:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:38:50.418 Process bdevio pid: 176383 00:38:50.418 07:49:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 176383' 00:38:50.418 07:49:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 176383 00:38:50.418 07:49:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 176383 ']' 00:38:50.418 07:49:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:50.418 07:49:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:38:50.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:50.418 07:49:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:50.418 07:49:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:50.418 07:49:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:38:50.418 [2024-07-12 07:49:24.200639] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
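The hello_bdev run traced above boils down to three steps: generate an attach-controller config with gen_nvme.sh, hand it to the app via --json, and name the bdev to open with -b. A condensed stand-alone equivalent (the harness drives the same steps through rpc_cmd/load_subsystem_config; writing gen_nvme.sh output to test/bdev/bdev.json is an assumption here, since the trace only shows the file being consumed at that path):

  cd /home/vagrant/spdk_repo/spdk
  ./scripts/gen_nvme.sh > test/bdev/bdev.json        # emits the bdev_nvme_attach_controller config seen above
  ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1

On success the example writes a string through the bdev and reads it back, producing the "Read string from bdev : Hello World!" notice in the log.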
00:38:50.418 [2024-07-12 07:49:24.200941] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176383 ] 00:38:50.677 [2024-07-12 07:49:24.377447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:50.677 [2024-07-12 07:49:24.426802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:50.677 [2024-07-12 07:49:24.426993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:50.677 [2024-07-12 07:49:24.426992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:51.616 07:49:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:51.616 07:49:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:38:51.616 07:49:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:38:51.616 I/O targets: 00:38:51.616 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:38:51.616 00:38:51.616 00:38:51.616 CUnit - A unit testing framework for C - Version 2.1-3 00:38:51.616 http://cunit.sourceforge.net/ 00:38:51.616 00:38:51.616 00:38:51.616 Suite: bdevio tests on: Nvme0n1 00:38:51.616 Test: blockdev write read block ...passed 00:38:51.616 Test: blockdev write zeroes read block ...passed 00:38:51.616 Test: blockdev write zeroes read no split ...passed 00:38:51.616 Test: blockdev write zeroes read split ...passed 00:38:51.616 Test: blockdev write zeroes read split partial ...passed 00:38:51.616 Test: blockdev reset ...[2024-07-12 07:49:25.264756] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:38:51.616 [2024-07-12 07:49:25.267289] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:38:51.616 passed 00:38:51.616 Test: blockdev write read 8 blocks ...passed 00:38:51.616 Test: blockdev write read size > 128k ...passed 00:38:51.616 Test: blockdev write read invalid size ...passed 00:38:51.616 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:38:51.616 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:38:51.616 Test: blockdev write read max offset ...passed 00:38:51.616 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:38:51.616 Test: blockdev writev readv 8 blocks ...passed 00:38:51.616 Test: blockdev writev readv 30 x 1block ...passed 00:38:51.616 Test: blockdev writev readv block ...passed 00:38:51.616 Test: blockdev writev readv size > 128k ...passed 00:38:51.616 Test: blockdev writev readv size > 128k in two iovs ...passed 00:38:51.616 Test: blockdev comparev and writev ...[2024-07-12 07:49:25.273942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x9fa0d000 len:0x1000 00:38:51.616 [2024-07-12 07:49:25.274078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:38:51.616 passed 00:38:51.616 Test: blockdev nvme passthru rw ...passed 00:38:51.616 Test: blockdev nvme passthru vendor specific ...[2024-07-12 07:49:25.275000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:38:51.616 [2024-07-12 07:49:25.275068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:38:51.616 passed 00:38:51.616 Test: blockdev nvme admin passthru ...passed 00:38:51.616 Test: blockdev copy ...passed 00:38:51.616 00:38:51.616 Run Summary: Type Total Ran Passed Failed Inactive 00:38:51.616 suites 1 1 n/a 0 0 00:38:51.616 tests 23 23 23 0 0 00:38:51.616 asserts 152 152 152 0 n/a 00:38:51.616 00:38:51.616 Elapsed time = 0.059 seconds 00:38:51.616 0 00:38:51.616 07:49:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 176383 00:38:51.616 07:49:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 176383 ']' 00:38:51.616 07:49:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 176383 00:38:51.616 07:49:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:38:51.616 07:49:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:51.616 07:49:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 176383 00:38:51.616 07:49:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:38:51.616 killing process with pid 176383 00:38:51.616 07:49:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:38:51.616 07:49:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 176383' 00:38:51.616 07:49:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 176383 00:38:51.616 07:49:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 176383 00:38:51.876 07:49:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:38:51.876 00:38:51.877 real 0m1.425s 00:38:51.877 user 0m3.545s 00:38:51.877 sys 0m0.355s 00:38:51.877 07:49:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:51.877 
************************************ 00:38:51.877 END TEST bdev_bounds 00:38:51.877 07:49:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:38:51.877 ************************************ 00:38:51.877 07:49:25 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:38:51.877 07:49:25 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:38:51.877 07:49:25 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:51.877 07:49:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:38:51.877 ************************************ 00:38:51.877 START TEST bdev_nbd 00:38:51.877 ************************************ 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1') 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=1 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0') 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1') 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=176446 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 176446 /var/tmp/spdk-nbd.sock 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 176446 ']' 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:38:51.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:51.877 07:49:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:38:51.877 [2024-07-12 07:49:25.684109] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:51.877 [2024-07-12 07:49:25.684296] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:52.137 [2024-07-12 07:49:25.828963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:52.137 [2024-07-12 07:49:25.870192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:38:53.076 07:49:26 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:53.076 1+0 records in 00:38:53.076 1+0 records out 00:38:53.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000872388 s, 4.7 MB/s 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:38:53.076 07:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:38:53.336 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:38:53.336 { 00:38:53.336 "nbd_device": "/dev/nbd0", 00:38:53.336 "bdev_name": "Nvme0n1" 00:38:53.336 } 00:38:53.336 ]' 00:38:53.336 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:38:53.336 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:38:53.336 { 00:38:53.336 "nbd_device": "/dev/nbd0", 00:38:53.336 "bdev_name": "Nvme0n1" 00:38:53.336 } 00:38:53.336 ]' 00:38:53.336 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:38:53.336 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:38:53.336 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:53.336 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:38:53.336 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:53.336 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:38:53.336 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:53.336 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:38:53.596 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:53.596 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:53.596 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:53.596 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:53.596 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:53.596 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:53.596 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:38:53.596 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:38:53.596 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:38:53.596 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:53.596 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1') 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:53.856 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:38:54.116 /dev/nbd0 00:38:54.116 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:54.116 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:54.116 07:49:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:38:54.116 07:49:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:38:54.116 07:49:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 
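The NBD setup being traced here is the generic way to expose any SPDK bdev as a kernel block device: nbd_start_disk with no explicit /dev node lets the target pick one and print it, and the waitfornbd helper then polls /proc/partitions until the kernel has registered the device. A minimal equivalent (the 0.1 s poll interval is an assumption; the waitfornbd loop in the trace caps out after 20 attempts):

  dev=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1)
  until grep -q -w "$(basename "$dev")" /proc/partitions; do sleep 0.1; done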
00:38:54.116 07:49:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:38:54.116 07:49:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:38:54.116 07:49:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:38:54.116 07:49:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:38:54.116 07:49:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:38:54.116 07:49:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:54.116 1+0 records in 00:38:54.116 1+0 records out 00:38:54.116 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627276 s, 6.5 MB/s 00:38:54.116 07:49:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:54.116 07:49:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:38:54.116 07:49:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:54.116 07:49:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:38:54.116 07:49:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:38:54.116 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:54.116 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:54.116 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:38:54.116 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:54.116 07:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:38:54.375 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:38:54.375 { 00:38:54.375 "nbd_device": "/dev/nbd0", 00:38:54.375 "bdev_name": "Nvme0n1" 00:38:54.375 } 00:38:54.375 ]' 00:38:54.375 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:38:54.375 { 00:38:54.375 "nbd_device": "/dev/nbd0", 00:38:54.375 "bdev_name": "Nvme0n1" 00:38:54.375 } 00:38:54.375 ]' 00:38:54.375 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:38:54.375 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:38:54.375 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:38:54.375 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:38:54.375 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:38:54.375 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:38:54.375 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:38:54.375 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:38:54.375 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:38:54.375 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:38:54.375 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:38:54.375 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:38:54.375 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:38:54.375 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:38:54.375 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:38:54.375 256+0 records in 00:38:54.375 256+0 records out 00:38:54.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121894 s, 86.0 MB/s 00:38:54.375 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:38:54.375 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:38:54.634 256+0 records in 00:38:54.634 256+0 records out 00:38:54.634 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0698098 s, 15.0 MB/s 00:38:54.634 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:38:54.634 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:38:54.634 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:38:54.634 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:38:54.634 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:38:54.634 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:38:54.634 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:38:54.634 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:38:54.635 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:38:54.635 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:38:54.635 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:38:54.635 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:54.635 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:38:54.635 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:54.635 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:38:54.635 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:54.635 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:38:54.893 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:54.893 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:54.893 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:54.893 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:54.893 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:54.893 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:54.893 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:38:54.893 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:38:54.893 07:49:28 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:38:54.893 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:54.893 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:38:55.153 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:38:55.153 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:38:55.153 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:38:55.153 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:38:55.153 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:38:55.153 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:38:55.153 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:38:55.153 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:38:55.153 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:38:55.153 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:38:55.153 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:38:55.153 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:38:55.153 07:49:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:38:55.153 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:55.153 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:38:55.153 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:38:55.153 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:38:55.153 07:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:38:55.412 malloc_lvol_verify 00:38:55.412 07:49:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:38:55.671 8676184c-b574-46d7-9317-f4f725be3f41 00:38:55.671 07:49:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:38:55.930 48ac14d9-75a3-435d-9154-dc8c607aed11 00:38:55.930 07:49:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:38:56.189 /dev/nbd0 00:38:56.189 07:49:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:38:56.189 mke2fs 1.46.5 (30-Dec-2021) 00:38:56.189 00:38:56.189 Filesystem too small for a journal 00:38:56.189 Discarding device blocks: 0/1024 done 00:38:56.189 Creating filesystem with 1024 4k blocks and 1024 inodes 00:38:56.189 00:38:56.189 Allocating group tables: 0/1 done 00:38:56.189 Writing inode tables: 0/1 done 00:38:56.189 Writing superblocks and filesystem accounting information: 0/1 done 00:38:56.189 00:38:56.189 07:49:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:38:56.189 07:49:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
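The nbd_with_lvol_verify block above stacks a logical volume on a malloc bdev, exports it over NBD, and proves the device is usable by putting a filesystem on it. The same sequence, collected from the trace (sizes as used there; the mkfs output of 1024 blocks of 4 KiB confirms the 4 yields a 4 MiB volume, hence the harmless "Filesystem too small for a journal" notice):

  rpc='./scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
  $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB backing bdev, 512 B blocks
  $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
  $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol in lvstore 'lvs'
  $rpc nbd_start_disk lvs/lvol /dev/nbd0
  mkfs.ext4 /dev/nbd0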
00:38:56.189 07:49:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:56.189 07:49:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:38:56.189 07:49:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:56.189 07:49:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:38:56.189 07:49:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:56.189 07:49:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:38:56.189 07:49:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:56.189 07:49:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:56.189 07:49:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:56.189 07:49:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:56.189 07:49:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:56.189 07:49:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:56.189 07:49:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:38:56.189 07:49:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:38:56.189 07:49:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:38:56.189 07:49:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:38:56.189 07:49:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 176446 00:38:56.189 07:49:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 176446 ']' 00:38:56.189 07:49:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 176446 00:38:56.189 07:49:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:38:56.189 07:49:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:56.189 07:49:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 176446 00:38:56.189 07:49:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:38:56.189 07:49:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:38:56.189 07:49:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 176446' 00:38:56.189 killing process with pid 176446 00:38:56.189 07:49:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@965 -- # kill 176446 00:38:56.189 07:49:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # wait 176446 00:38:56.447 07:49:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:38:56.447 00:38:56.447 real 0m4.706s 00:38:56.447 user 0m6.922s 00:38:56.447 sys 0m1.372s 00:38:56.448 07:49:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:56.448 07:49:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:38:56.448 ************************************ 00:38:56.448 END TEST bdev_nbd 00:38:56.448 ************************************ 00:38:56.707 07:49:30 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:38:56.707 07:49:30 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:38:56.707 skipping fio tests on NVMe due to multi-ns failures. 
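Earlier in this bdev_nbd run the data path itself was verified with a plain dd/cmp round trip rather than fio: write 1 MiB of random data through the NBD device with O_DIRECT, then byte-compare it against the source file. The pattern, as traced above:

  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256            # 1 MiB of random data
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct  # push it through the NBD device
  cmp -b -n 1M nbdrandtest /dev/nbd0                             # byte-for-byte read-back check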
00:38:56.707 07:49:30 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:38:56.707 07:49:30 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:38:56.707 07:49:30 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:38:56.707 07:49:30 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:38:56.707 07:49:30 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:56.707 07:49:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:38:56.707 ************************************ 00:38:56.707 START TEST bdev_verify 00:38:56.707 ************************************ 00:38:56.707 07:49:30 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:38:56.707 [2024-07-12 07:49:30.459085] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:56.707 [2024-07-12 07:49:30.459284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176622 ] 00:38:56.966 [2024-07-12 07:49:30.605039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:56.966 [2024-07-12 07:49:30.650486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:56.966 [2024-07-12 07:49:30.650492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:56.966 Running I/O for 5 seconds... 
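Since fio is skipped on NVMe here, I/O verification falls to bdevperf. The verify pass just launched matches the invocation in the trace: -q 128 sets the queue depth, -o 4096 the I/O size in bytes, -w verify makes every read check the data previously written, -t 5 runs for five seconds, and -m 0x3 pins two cores, matching the two reactors started above:

  ./build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3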
00:39:02.233 00:39:02.233 Latency(us) 00:39:02.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:02.233 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:02.233 Verification LBA range: start 0x0 length 0xa0000 00:39:02.233 Nvme0n1 : 5.01 10511.11 41.06 0.00 0.00 12110.04 612.45 18599.74 00:39:02.233 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:39:02.233 Verification LBA range: start 0xa0000 length 0xa0000 00:39:02.233 Nvme0n1 : 5.01 6869.08 26.83 0.00 0.00 18529.49 928.43 28336.52 00:39:02.233 =================================================================================================================== 00:39:02.233 Total : 17380.19 67.89 0.00 0.00 14647.64 612.45 28336.52 00:39:02.492 00:39:02.492 real 0m5.831s 00:39:02.492 user 0m10.937s 00:39:02.492 sys 0m0.226s 00:39:02.492 07:49:36 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:02.492 07:49:36 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:39:02.492 ************************************ 00:39:02.492 END TEST bdev_verify 00:39:02.492 ************************************ 00:39:02.492 07:49:36 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:39:02.492 07:49:36 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:39:02.492 07:49:36 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:02.492 07:49:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:02.492 ************************************ 00:39:02.492 START TEST bdev_verify_big_io 00:39:02.492 ************************************ 00:39:02.492 07:49:36 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:39:02.751 [2024-07-12 07:49:36.375662] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:39:02.751 [2024-07-12 07:49:36.376642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176711 ] 00:39:02.751 [2024-07-12 07:49:36.536807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:02.751 [2024-07-12 07:49:36.590183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:02.751 [2024-07-12 07:49:36.590192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:03.010 Running I/O for 5 seconds... 
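The MiB/s column in these bdevperf tables is just IOPS times I/O size, so the results can be sanity-checked by hand; for the core-0 job in the 4 KiB verify table above, the per-core rows also sum to the Total row (10511.11 + 6869.08 = 17380.19):

  awk 'BEGIN { printf "%.2f\n", 10511.11 * 4096 / (1024 * 1024) }'   # -> 41.06, matching MiB/s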
00:39:08.280 00:39:08.280 Latency(us) 00:39:08.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:08.280 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:39:08.280 Verification LBA range: start 0x0 length 0xa000 00:39:08.280 Nvme0n1 : 5.13 585.21 36.58 0.00 0.00 211993.12 1669.61 243669.09 00:39:08.280 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:39:08.280 Verification LBA range: start 0xa000 length 0xa000 00:39:08.280 Nvme0n1 : 5.13 535.85 33.49 0.00 0.00 231587.46 1864.66 425422.26 00:39:08.280 =================================================================================================================== 00:39:08.280 Total : 1121.05 70.07 0.00 0.00 221357.51 1669.61 425422.26 00:39:08.539 00:39:08.539 real 0m6.110s 00:39:08.539 user 0m11.462s 00:39:08.539 sys 0m0.226s 00:39:08.539 07:49:42 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:08.539 07:49:42 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:39:08.539 ************************************ 00:39:08.539 END TEST bdev_verify_big_io 00:39:08.539 ************************************ 00:39:08.798 07:49:42 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:08.798 07:49:42 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:39:08.798 07:49:42 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:08.798 07:49:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:08.798 ************************************ 00:39:08.798 START TEST bdev_write_zeroes 00:39:08.798 ************************************ 00:39:08.798 07:49:42 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:08.798 [2024-07-12 07:49:42.527198] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:39:08.798 [2024-07-12 07:49:42.527497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176808 ] 00:39:08.798 [2024-07-12 07:49:42.666602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:09.056 [2024-07-12 07:49:42.709901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:09.056 Running I/O for 1 seconds... 
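The bdev_write_zeroes test starting here only makes sense because the bdev_get_bdevs dump earlier in this run reported "write_zeroes": true under supported_io_types. Which bdevs support a given I/O type can be checked with the same rpc.py/jq combination the harness uses (the default socket /var/tmp/spdk.sock is assumed):

  ./scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select(.supported_io_types.write_zeroes) | .name'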
00:39:10.439 00:39:10.439 Latency(us) 00:39:10.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:10.439 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:39:10.440 Nvme0n1 : 1.00 71189.57 278.08 0.00 0.00 1794.01 573.44 12233.39 00:39:10.440 =================================================================================================================== 00:39:10.440 Total : 71189.57 278.08 0.00 0.00 1794.01 573.44 12233.39 00:39:10.440 00:39:10.440 real 0m1.670s 00:39:10.440 user 0m1.390s 00:39:10.440 sys 0m0.180s 00:39:10.440 07:49:44 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:10.440 07:49:44 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:39:10.440 ************************************ 00:39:10.440 END TEST bdev_write_zeroes 00:39:10.440 ************************************ 00:39:10.440 07:49:44 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:10.440 07:49:44 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:39:10.440 07:49:44 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:10.440 07:49:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:10.440 ************************************ 00:39:10.440 START TEST bdev_json_nonenclosed 00:39:10.440 ************************************ 00:39:10.440 07:49:44 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:10.440 [2024-07-12 07:49:44.268507] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:39:10.440 [2024-07-12 07:49:44.268652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176852 ] 00:39:10.697 [2024-07-12 07:49:44.409785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:10.697 [2024-07-12 07:49:44.456403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:10.697 [2024-07-12 07:49:44.456721] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:39:10.697 [2024-07-12 07:49:44.456847] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:39:10.697 [2024-07-12 07:49:44.456952] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:10.955 00:39:10.955 real 0m0.363s 00:39:10.955 user 0m0.131s 00:39:10.955 sys 0m0.131s 00:39:10.955 07:49:44 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:10.955 07:49:44 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:39:10.955 ************************************ 00:39:10.955 END TEST bdev_json_nonenclosed 00:39:10.955 ************************************ 00:39:10.955 07:49:44 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:10.955 07:49:44 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:39:10.955 07:49:44 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:10.955 07:49:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:10.955 ************************************ 00:39:10.955 START TEST bdev_json_nonarray 00:39:10.955 ************************************ 00:39:10.955 07:49:44 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:10.955 [2024-07-12 07:49:44.709734] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:39:10.955 [2024-07-12 07:49:44.709915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176874 ] 00:39:11.213 [2024-07-12 07:49:44.847092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:11.213 [2024-07-12 07:49:44.891144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:11.213 [2024-07-12 07:49:44.891477] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
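The two negative tests feed bdevperf deliberately malformed configs and expect spdk_app_stop to report a non-zero rc. The fixture files themselves are never echoed into the log, but the two error messages pin down their shapes relative to the valid layout; a hypothetical reconstruction:

  # valid layout, as loaded successfully throughout this run:
  #   { "subsystems": [ { "subsystem": "bdev", "config": [ ... ] } ] }
  # nonenclosed.json: content not wrapped in {}  -> "not enclosed in {}"
  # nonarray.json: "subsystems" not an array     -> "'subsystems' should be an array", e.g.:
  cat > /tmp/nonarray.json <<'EOF'
  { "subsystems": { "subsystem": "bdev", "config": [] } }
  EOF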
00:39:11.213 [2024-07-12 07:49:44.891605] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:39:11.213 [2024-07-12 07:49:44.891658] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:11.213 00:39:11.213 real 0m0.364s 00:39:11.213 user 0m0.164s 00:39:11.213 sys 0m0.100s 00:39:11.213 07:49:45 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:11.213 07:49:45 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:39:11.213 ************************************ 00:39:11.213 END TEST bdev_json_nonarray 00:39:11.213 ************************************ 00:39:11.213 07:49:45 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:39:11.213 07:49:45 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:39:11.213 07:49:45 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:39:11.213 07:49:45 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:39:11.213 07:49:45 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:39:11.213 07:49:45 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:39:11.213 07:49:45 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:39:11.213 07:49:45 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:39:11.213 07:49:45 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:39:11.214 07:49:45 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:39:11.214 07:49:45 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:39:11.214 00:39:11.214 real 0m23.600s 00:39:11.214 user 0m37.103s 00:39:11.214 sys 0m3.678s 00:39:11.214 07:49:45 blockdev_nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:11.214 ************************************ 00:39:11.214 END TEST blockdev_nvme 00:39:11.214 ************************************ 00:39:11.214 07:49:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:11.473 07:49:45 -- spdk/autotest.sh@213 -- # uname -s 00:39:11.473 07:49:45 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:39:11.473 07:49:45 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:39:11.473 07:49:45 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:39:11.473 07:49:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:11.473 07:49:45 -- common/autotest_common.sh@10 -- # set +x 00:39:11.473 ************************************ 00:39:11.473 START TEST blockdev_nvme_gpt 00:39:11.473 ************************************ 00:39:11.473 07:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:39:11.473 * Looking for test storage... 
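Every test above is dispatched through run_test, which, judging by the banners and the real/user/sys lines in this log, names the test, times the command, and prints matching START/END markers; the gpt pass starting here reuses the very same blockdev.sh with "gpt" as its test_type argument. A simplified approximation of the wrapper, not the real helper from autotest_common.sh:

  run_test() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"                 # the bash time keyword produces the real/user/sys lines
      echo "END TEST $name"
  }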
00:39:11.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=176960 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:39:11.473 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 176960 00:39:11.473 07:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@827 -- # '[' -z 176960 ']' 00:39:11.473 07:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:11.473 07:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:11.473 07:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:11.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
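[editor's note] The waitforlisten step above follows the usual SPDK test pattern: launch spdk_tgt in the background, block until its RPC UNIX socket exists, then drive it with rpc.py. A simplified sketch, assuming the paths from this run; the real waitforlisten in autotest_common.sh also retries and checks that the pid is still alive, which the crude loop below does not:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
trap 'kill $spdk_tgt_pid; exit 1' SIGINT SIGTERM EXIT
# crude stand-in for waitforlisten: poll for the RPC socket
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
# from here on, rpc.py talks to the target over that socket
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs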
00:39:11.474 07:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:11.474 07:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:11.474 07:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:39:11.474 [2024-07-12 07:49:45.347181] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:39:11.474 [2024-07-12 07:49:45.347389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176960 ] 00:39:11.733 [2024-07-12 07:49:45.488775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:11.733 [2024-07-12 07:49:45.530351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:12.671 07:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:12.671 07:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # return 0 00:39:12.671 07:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:39:12.671 07:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:39:12.671 07:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:39:12.931 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:39:12.931 Waiting for block devices as requested 00:39:12.931 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:39:12.931 07:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:39:12.931 07:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:39:12.931 07:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:39:12.931 07:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@1666 -- # local nvme bdf 00:39:12.931 07:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:39:12.931 07:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:39:12.931 07:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:39:12.931 07:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:12.931 07:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:39:12.931 07:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme0/nvme0n1') 00:39:12.931 07:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:39:12.931 07:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:39:12.931 07:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:39:12.931 07:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:39:12.931 07:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme0n1 00:39:12.931 07:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme0n1 -ms print 00:39:13.191 07:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:39:13.191 BYT; 00:39:13.191 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:39:13.191 07:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:39:13.191 BYT; 
00:39:13.191 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:39:13.191 07:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme0n1 00:39:13.191 07:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:39:13.191 07:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme0n1 ]] 00:39:13.191 07:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:39:13.191 07:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:39:13.191 07:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:39:13.450 07:49:47 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:39:13.450 07:49:47 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:39:13.450 07:49:47 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:39:13.450 07:49:47 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:39:13.450 07:49:47 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:39:13.450 07:49:47 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:39:13.450 07:49:47 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:39:13.450 07:49:47 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:39:13.450 07:49:47 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:39:13.450 07:49:47 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:39:13.450 07:49:47 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:39:13.450 07:49:47 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:39:13.450 07:49:47 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:39:13.450 07:49:47 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:39:13.450 07:49:47 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:39:13.450 07:49:47 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:39:13.450 07:49:47 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:39:13.450 07:49:47 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:39:13.451 07:49:47 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:39:13.451 07:49:47 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:39:13.451 07:49:47 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:39:13.451 07:49:47 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:39:13.451 07:49:47 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:39:14.390 The operation has completed successfully. 
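[editor's note] Condensed recap of the GPT setup the xtrace above performs (the second sgdisk call follows in the trace just below). The type GUIDs are the SPDK partition GUIDs the script greps out of module/bdev/gpt/gpt.h, which is what lets the gpt vbdev module later claim both partitions as Nvme0n1p1/Nvme0n1p2:
# label the disk and create two half-disk partitions
parted -s /dev/nvme0n1 mklabel gpt \
  mkpart SPDK_TEST_first 0% 50% \
  mkpart SPDK_TEST_second 50% 100%
# stamp partition 1 with SPDK_GPT_PART_TYPE_GUID (-t) and a unique GUID (-u)
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
       -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
# partition 2 gets the legacy SPDK_GPT_PART_TYPE_GUID_OLD
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
       -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1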
00:39:14.390 07:49:48 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:39:15.770 The operation has completed successfully. 00:39:15.770 07:49:49 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:39:16.030 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:39:16.030 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:39:17.936 07:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:39:17.936 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.936 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:17.936 [] 00:39:17.936 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.937 07:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:39:17.937 07:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:39:17.937 07:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:39:17.937 07:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:39:17.937 07:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.937 07:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.937 07:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:39:17.937 07:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.937 07:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.937 07:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.937 07:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:39:17.937 07:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:39:17.937 
07:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.937 07:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:39:17.937 07:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:39:17.937 07:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:39:17.937 07:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:39:17.937 07:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:39:17.937 07:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:39:17.937 07:49:51 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 176960 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@946 -- # '[' -z 176960 ']' 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # kill -0 176960 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # uname 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 176960 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo 
']' 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # echo 'killing process with pid 176960' 00:39:17.937 killing process with pid 176960 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@965 -- # kill 176960 00:39:17.937 07:49:51 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # wait 176960 00:39:18.196 07:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:18.196 07:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:39:18.196 07:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:39:18.196 07:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:18.196 07:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:18.455 ************************************ 00:39:18.455 START TEST bdev_hello_world 00:39:18.455 ************************************ 00:39:18.455 07:49:52 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:39:18.455 [2024-07-12 07:49:52.123831] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:39:18.455 [2024-07-12 07:49:52.124134] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177385 ] 00:39:18.455 [2024-07-12 07:49:52.262645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:18.455 [2024-07-12 07:49:52.309873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:18.714 [2024-07-12 07:49:52.497959] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:39:18.714 [2024-07-12 07:49:52.498210] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:39:18.714 [2024-07-12 07:49:52.498286] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:39:18.714 [2024-07-12 07:49:52.500559] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:39:18.714 [2024-07-12 07:49:52.501109] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:39:18.714 [2024-07-12 07:49:52.501242] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:39:18.714 [2024-07-12 07:49:52.501623] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:39:18.714 00:39:18.714 [2024-07-12 07:49:52.501749] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:39:18.973 00:39:18.973 real 0m0.671s 00:39:18.973 user 0m0.387s 00:39:18.973 sys 0m0.184s 00:39:18.973 07:49:52 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:18.973 07:49:52 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:39:18.973 ************************************ 00:39:18.973 END TEST bdev_hello_world 00:39:18.973 ************************************ 00:39:18.973 07:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:39:18.973 07:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:39:18.973 07:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:18.973 07:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:18.973 ************************************ 00:39:18.973 START TEST bdev_bounds 00:39:18.973 ************************************ 00:39:18.973 07:49:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:39:18.973 07:49:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=177408 00:39:18.973 07:49:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:39:18.973 Process bdevio pid: 177408 00:39:18.973 07:49:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 177408' 00:39:18.973 07:49:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 177408 00:39:18.973 07:49:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 177408 ']' 00:39:18.973 07:49:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:18.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:18.973 07:49:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:18.973 07:49:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:18.973 07:49:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:18.973 07:49:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:39:18.973 07:49:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:39:19.231 [2024-07-12 07:49:52.895406] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:39:19.231 [2024-07-12 07:49:52.895688] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177408 ] 00:39:19.231 [2024-07-12 07:49:53.061129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:19.488 [2024-07-12 07:49:53.116732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:19.488 [2024-07-12 07:49:53.116894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:39:19.488 [2024-07-12 07:49:53.117109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:20.056 07:49:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:20.056 07:49:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:39:20.056 07:49:53 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:39:20.056 I/O targets: 00:39:20.056 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:39:20.056 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:39:20.056 00:39:20.056 00:39:20.056 CUnit - A unit testing framework for C - Version 2.1-3 00:39:20.056 http://cunit.sourceforge.net/ 00:39:20.056 00:39:20.056 00:39:20.056 Suite: bdevio tests on: Nvme0n1p2 00:39:20.056 Test: blockdev write read block ...passed 00:39:20.056 Test: blockdev write zeroes read block ...passed 00:39:20.056 Test: blockdev write zeroes read no split ...passed 00:39:20.056 Test: blockdev write zeroes read split ...passed 00:39:20.056 Test: blockdev write zeroes read split partial ...passed 00:39:20.056 Test: blockdev reset ...[2024-07-12 07:49:53.866143] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:39:20.056 [2024-07-12 07:49:53.868390] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:39:20.056 passed 00:39:20.056 Test: blockdev write read 8 blocks ...passed 00:39:20.056 Test: blockdev write read size > 128k ...passed 00:39:20.056 Test: blockdev write read invalid size ...passed 00:39:20.056 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:20.056 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:20.056 Test: blockdev write read max offset ...passed 00:39:20.056 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:20.056 Test: blockdev writev readv 8 blocks ...passed 00:39:20.056 Test: blockdev writev readv 30 x 1block ...passed 00:39:20.056 Test: blockdev writev readv block ...passed 00:39:20.056 Test: blockdev writev readv size > 128k ...passed 00:39:20.056 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:20.056 Test: blockdev comparev and writev ...[2024-07-12 07:49:53.876001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0xb140b000 len:0x1000 00:39:20.056 [2024-07-12 07:49:53.876093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:39:20.056 passed 00:39:20.056 Test: blockdev nvme passthru rw ...passed 00:39:20.056 Test: blockdev nvme passthru vendor specific ...passed 00:39:20.056 Test: blockdev nvme admin passthru ...passed 00:39:20.056 Test: blockdev copy ...passed 00:39:20.056 Suite: bdevio tests on: Nvme0n1p1 00:39:20.056 Test: blockdev write read block ...passed 00:39:20.056 Test: blockdev write zeroes read block ...passed 00:39:20.056 Test: blockdev write zeroes read no split ...passed 00:39:20.056 Test: blockdev write zeroes read split ...passed 00:39:20.056 Test: blockdev write zeroes read split partial ...passed 00:39:20.056 Test: blockdev reset ...[2024-07-12 07:49:53.890796] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:39:20.056 [2024-07-12 07:49:53.892740] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:39:20.056 passed 00:39:20.056 Test: blockdev write read 8 blocks ...passed 00:39:20.056 Test: blockdev write read size > 128k ...passed 00:39:20.056 Test: blockdev write read invalid size ...passed 00:39:20.056 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:20.056 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:20.056 Test: blockdev write read max offset ...passed 00:39:20.056 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:20.056 Test: blockdev writev readv 8 blocks ...passed 00:39:20.056 Test: blockdev writev readv 30 x 1block ...passed 00:39:20.056 Test: blockdev writev readv block ...passed 00:39:20.056 Test: blockdev writev readv size > 128k ...passed 00:39:20.056 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:20.056 Test: blockdev comparev and writev ...[2024-07-12 07:49:53.899456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0xb140d000 len:0x1000 00:39:20.056 [2024-07-12 07:49:53.899521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:39:20.056 passed 00:39:20.056 Test: blockdev nvme passthru rw ...passed 00:39:20.056 Test: blockdev nvme passthru vendor specific ...passed 00:39:20.056 Test: blockdev nvme admin passthru ...passed 00:39:20.056 Test: blockdev copy ...passed 00:39:20.056 00:39:20.056 Run Summary: Type Total Ran Passed Failed Inactive 00:39:20.056 suites 2 2 n/a 0 0 00:39:20.056 tests 46 46 46 0 0 00:39:20.056 asserts 284 284 284 0 n/a 00:39:20.056 00:39:20.056 Elapsed time = 0.115 seconds 00:39:20.056 0 00:39:20.056 07:49:53 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 177408 00:39:20.056 07:49:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 177408 ']' 00:39:20.056 07:49:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 177408 00:39:20.056 07:49:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:39:20.056 07:49:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:20.056 07:49:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 177408 00:39:20.315 07:49:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:39:20.315 07:49:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:39:20.315 killing process with pid 177408 00:39:20.315 07:49:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 177408' 00:39:20.315 07:49:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@965 -- # kill 177408 00:39:20.315 07:49:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # wait 177408 00:39:20.315 07:49:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:39:20.315 00:39:20.315 real 0m1.347s 00:39:20.315 user 0m3.226s 00:39:20.315 sys 0m0.347s 00:39:20.315 ************************************ 00:39:20.315 END TEST bdev_bounds 00:39:20.315 ************************************ 00:39:20.315 07:49:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:20.315 07:49:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:39:20.581 07:49:54 blockdev_nvme_gpt -- 
bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:39:20.581 07:49:54 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:39:20.581 07:49:54 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:20.581 07:49:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:20.581 ************************************ 00:39:20.581 START TEST bdev_nbd 00:39:20.581 ************************************ 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=2 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=2 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=177465 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 177465 /var/tmp/spdk-nbd.sock 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 177465 ']' 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:39:20.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:20.581 07:49:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:39:20.581 [2024-07-12 07:49:54.316220] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:39:20.581 [2024-07-12 07:49:54.316478] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:20.850 [2024-07-12 07:49:54.470560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:20.850 [2024-07-12 07:49:54.514422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:21.448 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:21.448 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:39:21.448 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:39:21.448 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:21.448 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:39:21.448 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:39:21.448 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:39:21.448 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:21.449 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:39:21.449 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:39:21.449 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:39:21.449 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:39:21.449 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:39:21.449 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:39:21.449 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:39:21.708 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:39:21.708 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:39:21.708 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:39:21.708 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:39:21.708 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:39:21.708 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:39:21.708 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:39:21.708 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:39:21.708 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:39:21.708 07:49:55 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:39:21.708 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:39:21.708 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:21.708 1+0 records in 00:39:21.708 1+0 records out 00:39:21.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490447 s, 8.4 MB/s 00:39:21.708 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:21.708 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:39:21.708 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:21.708 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:39:21.708 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:39:21.708 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:21.708 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:39:21.708 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:39:21.967 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:39:21.967 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:39:21.967 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:39:21.967 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:39:21.967 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:39:21.967 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:39:21.967 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:39:21.967 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:39:21.967 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:39:21.967 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:39:21.967 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:39:21.967 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:21.967 1+0 records in 00:39:21.967 1+0 records out 00:39:21.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000918883 s, 4.5 MB/s 00:39:21.967 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:21.967 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:39:21.967 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:21.967 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:39:21.967 07:49:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:39:21.967 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:21.967 07:49:55 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:39:21.967 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:22.226 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:39:22.226 { 00:39:22.226 "nbd_device": "/dev/nbd0", 00:39:22.226 "bdev_name": "Nvme0n1p1" 00:39:22.226 }, 00:39:22.226 { 00:39:22.226 "nbd_device": "/dev/nbd1", 00:39:22.226 "bdev_name": "Nvme0n1p2" 00:39:22.226 } 00:39:22.226 ]' 00:39:22.226 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:39:22.226 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:39:22.226 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:39:22.226 { 00:39:22.226 "nbd_device": "/dev/nbd0", 00:39:22.226 "bdev_name": "Nvme0n1p1" 00:39:22.226 }, 00:39:22.226 { 00:39:22.226 "nbd_device": "/dev/nbd1", 00:39:22.226 "bdev_name": "Nvme0n1p2" 00:39:22.226 } 00:39:22.226 ]' 00:39:22.226 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:39:22.226 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:22.226 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:22.226 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:22.226 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:22.226 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:22.226 07:49:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:22.485 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:22.485 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:22.485 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:22.485 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:22.486 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:22.486 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:22.486 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:22.486 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:22.486 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:22.486 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:39:22.744 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:22.744 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:22.744 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:22.744 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:22.744 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:22.744 07:49:56 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:22.744 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:22.744 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:22.744 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:22.744 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:22.745 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:23.003 07:49:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme0n1p1 /dev/nbd0 00:39:23.262 /dev/nbd0 00:39:23.262 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:23.262 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:23.262 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:39:23.262 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:39:23.262 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:39:23.262 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:39:23.262 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:39:23.262 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:39:23.262 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:39:23.262 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:39:23.262 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:23.262 1+0 records in 00:39:23.262 1+0 records out 00:39:23.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00079186 s, 5.2 MB/s 00:39:23.262 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:23.262 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:39:23.262 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:23.262 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:39:23.262 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:39:23.262 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:23.262 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:23.262 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:39:23.520 /dev/nbd1 00:39:23.520 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:23.520 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:23.520 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:39:23.520 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:39:23.520 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:39:23.520 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:39:23.520 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:39:23.520 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:39:23.520 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:39:23.521 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:39:23.521 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:23.521 
1+0 records in 00:39:23.521 1+0 records out 00:39:23.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000740946 s, 5.5 MB/s 00:39:23.521 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:23.521 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:39:23.521 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:23.521 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:39:23.521 07:49:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:39:23.521 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:23.521 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:23.521 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:23.521 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:23.521 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:23.823 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:39:23.823 { 00:39:23.823 "nbd_device": "/dev/nbd0", 00:39:23.823 "bdev_name": "Nvme0n1p1" 00:39:23.823 }, 00:39:23.823 { 00:39:23.823 "nbd_device": "/dev/nbd1", 00:39:23.823 "bdev_name": "Nvme0n1p2" 00:39:23.823 } 00:39:23.823 ]' 00:39:23.823 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:39:23.823 { 00:39:23.823 "nbd_device": "/dev/nbd0", 00:39:23.823 "bdev_name": "Nvme0n1p1" 00:39:23.823 }, 00:39:23.823 { 00:39:23.823 "nbd_device": "/dev/nbd1", 00:39:23.823 "bdev_name": "Nvme0n1p2" 00:39:23.823 } 00:39:23.823 ]' 00:39:23.823 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:23.823 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:39:23.823 /dev/nbd1' 00:39:23.823 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:39:23.823 /dev/nbd1' 00:39:23.823 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:23.823 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:39:23.823 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:39:23.823 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:39:23.823 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:39:23.823 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:39:23.823 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:23.823 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:23.823 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:39:23.823 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:23.823 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:39:23.823 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:39:23.823 256+0 records in 00:39:23.823 256+0 records out 00:39:23.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118198 s, 88.7 MB/s 00:39:23.823 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:23.823 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:39:24.081 256+0 records in 00:39:24.081 256+0 records out 00:39:24.081 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0846855 s, 12.4 MB/s 00:39:24.081 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:24.081 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:39:24.081 256+0 records in 00:39:24.081 256+0 records out 00:39:24.081 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0858795 s, 12.2 MB/s 00:39:24.081 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:39:24.081 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:24.081 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:24.081 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:39:24.081 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:24.081 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:39:24.081 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:39:24.081 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:24.081 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:39:24.081 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:24.081 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:39:24.081 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:24.081 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:39:24.081 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:24.081 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:24.081 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:24.081 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:24.081 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:24.081 07:49:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:24.340 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:24.340 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:24.340 07:49:58 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:24.340 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:24.340 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:24.340 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:24.340 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:24.340 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:24.340 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:24.340 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:39:24.598 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:24.598 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:24.598 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:24.598 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:24.598 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:24.599 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:24.599 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:24.599 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:24.599 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:24.599 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:24.599 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:24.857 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:39:24.857 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:39:24.857 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:24.857 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:39:24.857 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:39:24.857 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:24.857 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:39:24.857 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:39:24.857 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:39:24.857 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:39:24.857 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:39:24.857 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:39:24.857 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:39:24.857 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:24.857 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:24.857 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local 
nbd_list 00:39:24.857 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:39:24.857 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:39:25.116 malloc_lvol_verify 00:39:25.116 07:49:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:39:25.374 2b9d4b1f-824e-41ac-8306-cc7b05a66e6a 00:39:25.374 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:39:25.633 8ff8f48d-38ea-494c-9197-fbef42f8e4d3 00:39:25.633 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:39:25.633 /dev/nbd0 00:39:25.633 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:39:25.633 mke2fs 1.46.5 (30-Dec-2021) 00:39:25.633 00:39:25.633 Filesystem too small for a journal 00:39:25.633 Discarding device blocks: 0/1024 done 00:39:25.633 Creating filesystem with 1024 4k blocks and 1024 inodes 00:39:25.633 00:39:25.633 Allocating group tables: 0/1 done 00:39:25.633 Writing inode tables: 0/1 done 00:39:25.633 Writing superblocks and filesystem accounting information: 0/1 done 00:39:25.633 00:39:25.633 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:39:25.633 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:39:25.633 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:25.633 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:25.633 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:25.633 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:25.633 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:25.633 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:25.892 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:25.892 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:25.892 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:25.892 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:25.892 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:25.892 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:25.892 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:25.892 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:25.892 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:39:25.892 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:39:25.892 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 177465 00:39:25.892 07:49:59 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@946 -- # '[' -z 177465 ']' 00:39:25.892 07:49:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 177465 00:39:25.892 07:49:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:39:25.892 07:49:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:25.892 07:49:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 177465 00:39:25.892 07:49:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:39:25.892 07:49:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:39:25.892 07:49:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 177465' 00:39:25.892 killing process with pid 177465 00:39:25.892 07:49:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@965 -- # kill 177465 00:39:25.892 07:49:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # wait 177465 00:39:26.151 07:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:39:26.152 00:39:26.152 real 0m5.724s 00:39:26.152 user 0m8.203s 00:39:26.152 sys 0m1.913s 00:39:26.152 07:49:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:26.152 07:49:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:39:26.152 ************************************ 00:39:26.152 END TEST bdev_nbd 00:39:26.152 ************************************ 00:39:26.152 07:50:00 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:39:26.152 skipping fio tests on NVMe due to multi-ns failures. 00:39:26.152 07:50:00 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:39:26.152 07:50:00 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:39:26.152 07:50:00 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:39:26.152 07:50:00 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:26.152 07:50:00 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:39:26.152 07:50:00 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:39:26.152 07:50:00 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:26.152 07:50:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:26.152 ************************************ 00:39:26.152 START TEST bdev_verify 00:39:26.152 ************************************ 00:39:26.152 07:50:00 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:39:26.411 [2024-07-12 07:50:00.097383] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
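The bdev_nbd data check that just completed is a plain round-trip: fill a temporary file with random bytes, copy it block-for-block onto each exported NBD device with O_DIRECT, then byte-compare the device contents against the file. Stripped of the xtrace decoration, the commands from the trace reduce to this sketch (repository paths shortened):

  # write phase: 1 MiB of random data, pushed to both devices
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  dd if=nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
  # verify phase: compare the first 1M of each device back against the source
  cmp -b -n 1M nbdrandtest /dev/nbd0
  cmp -b -n 1M nbdrandtest /dev/nbd1
  rm nbdrandtest

The oflag=direct is what makes the check meaningful: it bypasses the page cache on the write side, so the data actually reaches the SPDK-backed device before the comparison runs.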
00:39:26.411 [2024-07-12 07:50:00.097650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177704 ] 00:39:26.411 [2024-07-12 07:50:00.255008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:26.670 [2024-07-12 07:50:00.298661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:26.670 [2024-07-12 07:50:00.298662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:26.670 Running I/O for 5 seconds... 00:39:31.937 00:39:31.938 Latency(us) 00:39:31.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:31.938 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:31.938 Verification LBA range: start 0x0 length 0x4ff80 00:39:31.938 Nvme0n1p1 : 5.02 4103.54 16.03 0.00 0.00 31086.10 5398.92 37199.48 00:39:31.938 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:39:31.938 Verification LBA range: start 0x4ff80 length 0x4ff80 00:39:31.938 Nvme0n1p1 : 5.02 3209.68 12.54 0.00 0.00 39697.90 6803.26 45438.29 00:39:31.938 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:31.938 Verification LBA range: start 0x0 length 0x4ff7f 00:39:31.938 Nvme0n1p2 : 5.03 4111.45 16.06 0.00 0.00 30955.22 1396.54 38198.13 00:39:31.938 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:39:31.938 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:39:31.938 Nvme0n1p2 : 5.03 3229.19 12.61 0.00 0.00 39348.32 1435.55 44938.97 00:39:31.938 =================================================================================================================== 00:39:31.938 Total : 14653.85 57.24 0.00 0.00 34758.13 1396.54 45438.29 00:39:32.196 00:39:32.196 real 0m5.940s 00:39:32.196 user 0m11.137s 00:39:32.196 sys 0m0.224s 00:39:32.196 07:50:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:32.196 07:50:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:39:32.196 ************************************ 00:39:32.196 END TEST bdev_verify 00:39:32.196 ************************************ 00:39:32.197 07:50:06 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:39:32.197 07:50:06 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:39:32.197 07:50:06 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:32.197 07:50:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:32.197 ************************************ 00:39:32.197 START TEST bdev_verify_big_io 00:39:32.197 ************************************ 00:39:32.197 07:50:06 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:39:32.456 [2024-07-12 07:50:06.098216] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
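Each of the bdevperf-based sub-tests here uses the same invocation shape; the verify run above was produced by the command recorded at the top of the test. The flag meanings below are the standard bdevperf options, added as a reading aid rather than taken from the trace itself:

  # -q 128: queue depth per job
  # -o 4096: I/O size in bytes
  # -w verify: write each block, read it back, check the contents
  # -t 5: run for five seconds
  # -C: let every core submit I/O to every bdev
  # -m 0x3: core mask, cores 0 and 1
  build/examples/bdevperf --json bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3

That -C plus -m 0x3 combination is why the table shows four jobs: each of the two GPT partitions is driven once from core mask 0x1 and once from core mask 0x2.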
00:39:32.456 [2024-07-12 07:50:06.098449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177795 ] 00:39:32.456 [2024-07-12 07:50:06.255892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:32.456 [2024-07-12 07:50:06.307135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:32.456 [2024-07-12 07:50:06.307131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:32.715 Running I/O for 5 seconds... 00:39:37.987 00:39:37.988 Latency(us) 00:39:37.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:37.988 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:39:37.988 Verification LBA range: start 0x0 length 0x4ff8 00:39:37.988 Nvme0n1p1 : 5.28 290.66 18.17 0.00 0.00 425719.41 10423.34 499321.90 00:39:37.988 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:39:37.988 Verification LBA range: start 0x4ff8 length 0x4ff8 00:39:37.988 Nvme0n1p1 : 5.35 287.25 17.95 0.00 0.00 433297.15 65910.49 495327.33 00:39:37.988 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:39:37.988 Verification LBA range: start 0x0 length 0x4ff7 00:39:37.988 Nvme0n1p2 : 5.35 299.69 18.73 0.00 0.00 394128.33 1076.66 385476.51 00:39:37.988 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:39:37.988 Verification LBA range: start 0x4ff7 length 0x4ff7 00:39:37.988 Nvme0n1p2 : 5.35 294.99 18.44 0.00 0.00 401989.81 2075.31 357514.48 00:39:37.988 =================================================================================================================== 00:39:37.988 Total : 1172.60 73.29 0.00 0.00 413492.01 1076.66 499321.90 00:39:38.557 00:39:38.557 real 0m6.356s 00:39:38.557 user 0m11.980s 00:39:38.557 sys 0m0.217s 00:39:38.557 07:50:12 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:38.557 07:50:12 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:39:38.557 ************************************ 00:39:38.557 END TEST bdev_verify_big_io 00:39:38.557 ************************************ 00:39:38.816 07:50:12 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:38.816 07:50:12 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:39:38.816 07:50:12 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:38.816 07:50:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:38.816 ************************************ 00:39:38.816 START TEST bdev_write_zeroes 00:39:38.816 ************************************ 00:39:38.816 07:50:12 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:38.816 [2024-07-12 07:50:12.518981] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
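A quick sanity check when reading these latency tables: the MiB/s column is just IOPS times the I/O size. For the first row of the 64 KiB big-I/O run above:

  290.66 IOPS x 65536 bytes/IO = 19,048,694 bytes/s, i.e. 18.17 MiB/s after dividing by 1024*1024

which matches the reported column; the same arithmetic applies to the 4096-byte runs.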
00:39:38.816 [2024-07-12 07:50:12.519164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177886 ] 00:39:38.816 [2024-07-12 07:50:12.658709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.074 [2024-07-12 07:50:12.700714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:39.074 Running I/O for 1 seconds... 00:39:40.035 00:39:40.035 Latency(us) 00:39:40.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:40.035 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:39:40.035 Nvme0n1p1 : 1.00 30709.28 119.96 0.00 0.00 4160.10 2293.76 13918.60 00:39:40.035 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:39:40.035 Nvme0n1p2 : 1.00 30543.48 119.31 0.00 0.00 4177.79 2293.76 17850.76 00:39:40.035 =================================================================================================================== 00:39:40.035 Total : 61252.76 239.27 0.00 0.00 4168.92 2293.76 17850.76 00:39:40.294 00:39:40.294 real 0m1.692s 00:39:40.294 user 0m1.407s 00:39:40.294 sys 0m0.185s 00:39:40.294 07:50:14 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:40.294 07:50:14 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:39:40.294 ************************************ 00:39:40.294 END TEST bdev_write_zeroes 00:39:40.294 ************************************ 00:39:40.553 07:50:14 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:40.553 07:50:14 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:39:40.553 07:50:14 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:40.553 07:50:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:40.553 ************************************ 00:39:40.553 START TEST bdev_json_nonenclosed 00:39:40.553 ************************************ 00:39:40.553 07:50:14 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:40.553 [2024-07-12 07:50:14.266402] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:39:40.553 [2024-07-12 07:50:14.266742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177937 ] 00:39:40.553 [2024-07-12 07:50:14.406410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:40.813 [2024-07-12 07:50:14.451139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:40.813 [2024-07-12 07:50:14.451494] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
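bdev_json_nonenclosed is a deliberate failure case: bdevperf is pointed at nonenclosed.json, whose top level is not wrapped in an object, and the expected outcome is exactly the json_config complaint above followed by a non-zero spdk_app_stop. The fixture itself is not echoed into the trace, but the shape of the failure is, illustratively, the difference between:

  "subsystems": []          # rejected: configuration not enclosed in {}
  { "subsystems": [] }      # accepted: a single JSON object at the top level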
00:39:40.813 [2024-07-12 07:50:14.451649] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:39:40.813 [2024-07-12 07:50:14.451701] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:40.813 00:39:40.813 real 0m0.361s 00:39:40.813 user 0m0.153s 00:39:40.813 sys 0m0.108s 00:39:40.813 07:50:14 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:40.813 07:50:14 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:39:40.813 ************************************ 00:39:40.813 END TEST bdev_json_nonenclosed 00:39:40.813 ************************************ 00:39:40.813 07:50:14 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:40.813 07:50:14 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:39:40.813 07:50:14 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:40.813 07:50:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:40.813 ************************************ 00:39:40.813 START TEST bdev_json_nonarray 00:39:40.813 ************************************ 00:39:40.813 07:50:14 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:41.073 [2024-07-12 07:50:14.740250] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:39:41.073 [2024-07-12 07:50:14.740544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177960 ] 00:39:41.073 [2024-07-12 07:50:14.895130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:41.073 [2024-07-12 07:50:14.948644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:41.073 [2024-07-12 07:50:14.949034] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
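bdev_json_nonarray is the companion failure case: the top level of nonarray.json is a proper object, but its "subsystems" key does not hold an array, which json_config rejects with the error above before taking the same non-zero shutdown path. Again illustratively, since the fixture contents are not shown in the trace:

  { "subsystems": {} }      # rejected: 'subsystems' should be an array
  { "subsystems": [] }      # accepted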
00:39:41.073 [2024-07-12 07:50:14.949164] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:39:41.073 [2024-07-12 07:50:14.949226] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:41.332 00:39:41.332 real 0m0.427s 00:39:41.332 user 0m0.152s 00:39:41.332 sys 0m0.173s 00:39:41.332 07:50:15 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:41.332 07:50:15 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:39:41.332 ************************************ 00:39:41.332 END TEST bdev_json_nonarray 00:39:41.332 ************************************ 00:39:41.332 07:50:15 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:39:41.332 07:50:15 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:39:41.332 07:50:15 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:39:41.332 07:50:15 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:41.332 07:50:15 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:41.332 07:50:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:41.332 ************************************ 00:39:41.332 START TEST bdev_gpt_uuid 00:39:41.332 ************************************ 00:39:41.332 07:50:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1121 -- # bdev_gpt_uuid 00:39:41.332 07:50:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:39:41.332 07:50:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:39:41.332 07:50:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=177991 00:39:41.332 07:50:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:39:41.332 07:50:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:39:41.332 07:50:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 177991 00:39:41.332 07:50:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@827 -- # '[' -z 177991 ']' 00:39:41.332 07:50:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:41.332 07:50:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:41.332 07:50:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:41.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:41.332 07:50:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:41.332 07:50:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:39:41.591 [2024-07-12 07:50:15.257778] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
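The gpt_uuid test starting here runs against a standalone spdk_tgt rather than bdevperf: it loads bdev.json over the RPC socket, waits for bdev examine to finish so the GPT partitions are claimed, then looks each partition up by its unique partition GUID and checks that the GUID round-trips through both the alias list and the driver-specific GPT data. The lookup-and-check step, as performed further down with rpc_cmd and jq, reduces to:

  # fetch the bdev by its GPT unique partition GUID (value taken from the trace)
  scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 | jq -r 'length'    # expect exactly 1
  scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 | jq -r '.[0].aliases[0]'
  scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 | jq -r '.[0].driver_specific.gpt.unique_partition_guid'
  # the last two are both expected to print the GUID itself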
00:39:41.591 [2024-07-12 07:50:15.258831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177991 ] 00:39:41.591 [2024-07-12 07:50:15.421171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:41.850 [2024-07-12 07:50:15.475524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:42.419 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:42.419 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # return 0 00:39:42.419 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:39:42.419 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.419 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:39:42.678 Some configs were skipped because the RPC state that can call them passed over. 00:39:42.678 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.678 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:39:42.678 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.678 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:39:42.678 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.678 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:39:42.678 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.678 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:39:42.678 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.678 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:39:42.678 { 00:39:42.678 "name": "Nvme0n1p1", 00:39:42.678 "aliases": [ 00:39:42.678 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:39:42.678 ], 00:39:42.678 "product_name": "GPT Disk", 00:39:42.678 "block_size": 4096, 00:39:42.678 "num_blocks": 655104, 00:39:42.678 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:39:42.678 "assigned_rate_limits": { 00:39:42.678 "rw_ios_per_sec": 0, 00:39:42.678 "rw_mbytes_per_sec": 0, 00:39:42.678 "r_mbytes_per_sec": 0, 00:39:42.678 "w_mbytes_per_sec": 0 00:39:42.678 }, 00:39:42.678 "claimed": false, 00:39:42.678 "zoned": false, 00:39:42.678 "supported_io_types": { 00:39:42.678 "read": true, 00:39:42.678 "write": true, 00:39:42.678 "unmap": true, 00:39:42.678 "write_zeroes": true, 00:39:42.678 "flush": true, 00:39:42.678 "reset": true, 00:39:42.678 "compare": true, 00:39:42.678 "compare_and_write": false, 00:39:42.678 "abort": true, 00:39:42.678 "nvme_admin": false, 00:39:42.678 "nvme_io": false 00:39:42.678 }, 00:39:42.678 "driver_specific": { 00:39:42.678 "gpt": { 00:39:42.678 "base_bdev": "Nvme0n1", 00:39:42.678 "offset_blocks": 256, 00:39:42.678 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:39:42.678 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:39:42.678 "partition_name": "SPDK_TEST_first" 00:39:42.678 } 00:39:42.678 } 
00:39:42.678 } 00:39:42.678 ]' 00:39:42.678 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:39:42.678 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:39:42.678 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:39:42.678 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:39:42.678 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:39:42.678 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:39:42.678 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:39:42.678 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.678 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:39:42.679 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.679 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:39:42.679 { 00:39:42.679 "name": "Nvme0n1p2", 00:39:42.679 "aliases": [ 00:39:42.679 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:39:42.679 ], 00:39:42.679 "product_name": "GPT Disk", 00:39:42.679 "block_size": 4096, 00:39:42.679 "num_blocks": 655103, 00:39:42.679 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:39:42.679 "assigned_rate_limits": { 00:39:42.679 "rw_ios_per_sec": 0, 00:39:42.679 "rw_mbytes_per_sec": 0, 00:39:42.679 "r_mbytes_per_sec": 0, 00:39:42.679 "w_mbytes_per_sec": 0 00:39:42.679 }, 00:39:42.679 "claimed": false, 00:39:42.679 "zoned": false, 00:39:42.679 "supported_io_types": { 00:39:42.679 "read": true, 00:39:42.679 "write": true, 00:39:42.679 "unmap": true, 00:39:42.679 "write_zeroes": true, 00:39:42.679 "flush": true, 00:39:42.679 "reset": true, 00:39:42.679 "compare": true, 00:39:42.679 "compare_and_write": false, 00:39:42.679 "abort": true, 00:39:42.679 "nvme_admin": false, 00:39:42.679 "nvme_io": false 00:39:42.679 }, 00:39:42.679 "driver_specific": { 00:39:42.679 "gpt": { 00:39:42.679 "base_bdev": "Nvme0n1", 00:39:42.679 "offset_blocks": 655360, 00:39:42.679 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:39:42.679 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:39:42.679 "partition_name": "SPDK_TEST_second" 00:39:42.679 } 00:39:42.679 } 00:39:42.679 } 00:39:42.679 ]' 00:39:42.679 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:39:42.679 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:39:42.938 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:39:42.938 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:39:42.938 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:39:42.938 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == 
\a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:39:42.938 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 177991 00:39:42.938 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@946 -- # '[' -z 177991 ']' 00:39:42.938 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # kill -0 177991 00:39:42.938 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # uname 00:39:42.938 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:42.938 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 177991 00:39:42.938 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:39:42.938 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:39:42.938 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 177991' 00:39:42.938 killing process with pid 177991 00:39:42.938 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@965 -- # kill 177991 00:39:42.938 07:50:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # wait 177991 00:39:43.197 00:39:43.197 real 0m1.917s 00:39:43.197 user 0m2.196s 00:39:43.197 sys 0m0.436s 00:39:43.197 07:50:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:43.197 07:50:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:39:43.197 ************************************ 00:39:43.197 END TEST bdev_gpt_uuid 00:39:43.197 ************************************ 00:39:43.457 07:50:17 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:39:43.457 07:50:17 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:39:43.457 07:50:17 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:39:43.457 07:50:17 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:39:43.457 07:50:17 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:39:43.457 07:50:17 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:39:43.457 07:50:17 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:39:43.457 07:50:17 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:39:43.457 07:50:17 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:39:43.716 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:39:43.716 Waiting for block devices as requested 00:39:43.976 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:39:43.976 07:50:17 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:39:43.976 07:50:17 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:39:43.976 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:39:43.976 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:39:43.976 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:39:43.976 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:39:43.976 07:50:17 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:39:43.976 00:39:43.976 real 
0m32.646s 00:39:43.976 user 0m46.864s 00:39:43.976 sys 0m7.348s 00:39:43.976 07:50:17 blockdev_nvme_gpt -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:43.976 07:50:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:43.976 ************************************ 00:39:43.976 END TEST blockdev_nvme_gpt 00:39:43.976 ************************************ 00:39:43.976 07:50:17 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:39:43.976 07:50:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:43.976 07:50:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:43.976 07:50:17 -- common/autotest_common.sh@10 -- # set +x 00:39:44.236 ************************************ 00:39:44.236 START TEST nvme 00:39:44.236 ************************************ 00:39:44.236 07:50:17 nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:39:44.236 * Looking for test storage... 00:39:44.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:39:44.236 07:50:17 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:39:44.805 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:39:44.805 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:39:46.184 07:50:19 nvme -- nvme/nvme.sh@79 -- # uname 00:39:46.184 07:50:19 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:39:46.184 07:50:19 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:39:46.184 07:50:19 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:39:46.184 07:50:19 nvme -- common/autotest_common.sh@1078 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:39:46.184 07:50:19 nvme -- common/autotest_common.sh@1064 -- # _randomize_va_space=2 00:39:46.184 07:50:19 nvme -- common/autotest_common.sh@1065 -- # echo 0 00:39:46.184 07:50:19 nvme -- common/autotest_common.sh@1067 -- # stubpid=178386 00:39:46.184 07:50:19 nvme -- common/autotest_common.sh@1066 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:39:46.184 Waiting for stub to ready for secondary processes... 00:39:46.184 07:50:19 nvme -- common/autotest_common.sh@1068 -- # echo Waiting for stub to ready for secondary processes... 00:39:46.184 07:50:19 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:39:46.184 07:50:19 nvme -- common/autotest_common.sh@1071 -- # [[ -e /proc/178386 ]] 00:39:46.184 07:50:19 nvme -- common/autotest_common.sh@1072 -- # sleep 1s 00:39:46.184 [2024-07-12 07:50:19.742673] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
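The nvme suite that begins here avoids paying DPDK initialization for every small test binary: a long-lived stub process (test/app/stub/stub -s 4096 -i 0 -m 0xE, i.e. 4096 MB of memory, shared-memory id 0, and cores 1-3, reading the flags as the usual SPDK app options) is started as the DPDK primary, holds the hugepages and the probed controller, and each test binary then attaches as a secondary process. The "Waiting for stub to ready for secondary processes" loop is essentially:

  # poll until the stub signals readiness, as long as it is still alive
  while [ ! -e /var/run/spdk_stub0 ]; do
      [ -e "/proc/$stubpid" ] || exit 1   # stub died before becoming ready
      sleep 1
  done

with $stubpid being the pid printed in the trace (178386 in this run).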
00:39:46.184 [2024-07-12 07:50:19.742923] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:39:47.121 07:50:20 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:39:47.121 07:50:20 nvme -- common/autotest_common.sh@1071 -- # [[ -e /proc/178386 ]] 00:39:47.121 07:50:20 nvme -- common/autotest_common.sh@1072 -- # sleep 1s 00:39:48.056 [2024-07-12 07:50:21.575677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:48.056 [2024-07-12 07:50:21.630654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:39:48.056 [2024-07-12 07:50:21.630885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:39:48.056 [2024-07-12 07:50:21.630887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:48.056 [2024-07-12 07:50:21.642218] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:39:48.056 [2024-07-12 07:50:21.642353] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:39:48.056 [2024-07-12 07:50:21.656267] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:39:48.057 [2024-07-12 07:50:21.656659] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:39:48.057 07:50:21 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:39:48.057 done. 00:39:48.057 07:50:21 nvme -- common/autotest_common.sh@1074 -- # echo done. 00:39:48.057 07:50:21 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:39:48.057 07:50:21 nvme -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:39:48.057 07:50:21 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:48.057 07:50:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:39:48.057 ************************************ 00:39:48.057 START TEST nvme_reset 00:39:48.057 ************************************ 00:39:48.057 07:50:21 nvme.nvme_reset -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:39:48.315 Initializing NVMe Controllers 00:39:48.315 Skipping QEMU NVMe SSD at 0000:00:10.0 00:39:48.315 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:39:48.315 00:39:48.315 real 0m0.301s 00:39:48.315 user 0m0.076s 00:39:48.315 sys 0m0.149s 00:39:48.315 07:50:22 nvme.nvme_reset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:48.315 07:50:22 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:39:48.315 ************************************ 00:39:48.315 END TEST nvme_reset 00:39:48.315 ************************************ 00:39:48.315 07:50:22 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:39:48.315 07:50:22 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:48.315 07:50:22 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:48.315 07:50:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:39:48.315 ************************************ 00:39:48.315 START TEST nvme_identify 00:39:48.315 ************************************ 00:39:48.315 07:50:22 nvme.nvme_identify -- common/autotest_common.sh@1121 -- # nvme_identify 00:39:48.315 07:50:22 
nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:39:48.315 07:50:22 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:39:48.315 07:50:22 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:39:48.315 07:50:22 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:39:48.315 07:50:22 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # bdfs=() 00:39:48.315 07:50:22 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # local bdfs 00:39:48.315 07:50:22 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:39:48.315 07:50:22 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:39:48.315 07:50:22 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:39:48.315 07:50:22 nvme.nvme_identify -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:39:48.315 07:50:22 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:39:48.315 07:50:22 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:39:48.574 [2024-07-12 07:50:22.367457] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 178424 terminated unexpected 00:39:48.574 ===================================================== 00:39:48.574 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:39:48.575 ===================================================== 00:39:48.575 Controller Capabilities/Features 00:39:48.575 ================================ 00:39:48.575 Vendor ID: 1b36 00:39:48.575 Subsystem Vendor ID: 1af4 00:39:48.575 Serial Number: 12340 00:39:48.575 Model Number: QEMU NVMe Ctrl 00:39:48.575 Firmware Version: 8.0.0 00:39:48.575 Recommended Arb Burst: 6 00:39:48.575 IEEE OUI Identifier: 00 54 52 00:39:48.575 Multi-path I/O 00:39:48.575 May have multiple subsystem ports: No 00:39:48.575 May have multiple controllers: No 00:39:48.575 Associated with SR-IOV VF: No 00:39:48.575 Max Data Transfer Size: 524288 00:39:48.575 Max Number of Namespaces: 256 00:39:48.575 Max Number of I/O Queues: 64 00:39:48.575 NVMe Specification Version (VS): 1.4 00:39:48.575 NVMe Specification Version (Identify): 1.4 00:39:48.575 Maximum Queue Entries: 2048 00:39:48.575 Contiguous Queues Required: Yes 00:39:48.575 Arbitration Mechanisms Supported 00:39:48.575 Weighted Round Robin: Not Supported 00:39:48.575 Vendor Specific: Not Supported 00:39:48.575 Reset Timeout: 7500 ms 00:39:48.575 Doorbell Stride: 4 bytes 00:39:48.575 NVM Subsystem Reset: Not Supported 00:39:48.575 Command Sets Supported 00:39:48.575 NVM Command Set: Supported 00:39:48.575 Boot Partition: Not Supported 00:39:48.575 Memory Page Size Minimum: 4096 bytes 00:39:48.575 Memory Page Size Maximum: 65536 bytes 00:39:48.575 Persistent Memory Region: Not Supported 00:39:48.575 Optional Asynchronous Events Supported 00:39:48.575 Namespace Attribute Notices: Supported 00:39:48.575 Firmware Activation Notices: Not Supported 00:39:48.575 ANA Change Notices: Not Supported 00:39:48.575 PLE Aggregate Log Change Notices: Not Supported 00:39:48.575 LBA Status Info Alert Notices: Not Supported 00:39:48.575 EGE Aggregate Log Change Notices: Not Supported 00:39:48.575 Normal NVM Subsystem Shutdown event: Not Supported 00:39:48.575 Zone Descriptor Change Notices: Not Supported 00:39:48.575 Discovery Log Change Notices: Not Supported 00:39:48.575 Controller Attributes 00:39:48.575 128-bit Host 
Identifier: Not Supported 00:39:48.575 Non-Operational Permissive Mode: Not Supported 00:39:48.575 NVM Sets: Not Supported 00:39:48.575 Read Recovery Levels: Not Supported 00:39:48.575 Endurance Groups: Not Supported 00:39:48.575 Predictable Latency Mode: Not Supported 00:39:48.575 Traffic Based Keep ALive: Not Supported 00:39:48.575 Namespace Granularity: Not Supported 00:39:48.575 SQ Associations: Not Supported 00:39:48.575 UUID List: Not Supported 00:39:48.575 Multi-Domain Subsystem: Not Supported 00:39:48.575 Fixed Capacity Management: Not Supported 00:39:48.575 Variable Capacity Management: Not Supported 00:39:48.575 Delete Endurance Group: Not Supported 00:39:48.575 Delete NVM Set: Not Supported 00:39:48.575 Extended LBA Formats Supported: Supported 00:39:48.575 Flexible Data Placement Supported: Not Supported 00:39:48.575 00:39:48.575 Controller Memory Buffer Support 00:39:48.575 ================================ 00:39:48.575 Supported: No 00:39:48.575 00:39:48.575 Persistent Memory Region Support 00:39:48.575 ================================ 00:39:48.575 Supported: No 00:39:48.575 00:39:48.575 Admin Command Set Attributes 00:39:48.575 ============================ 00:39:48.575 Security Send/Receive: Not Supported 00:39:48.575 Format NVM: Supported 00:39:48.575 Firmware Activate/Download: Not Supported 00:39:48.575 Namespace Management: Supported 00:39:48.575 Device Self-Test: Not Supported 00:39:48.575 Directives: Supported 00:39:48.575 NVMe-MI: Not Supported 00:39:48.575 Virtualization Management: Not Supported 00:39:48.575 Doorbell Buffer Config: Supported 00:39:48.575 Get LBA Status Capability: Not Supported 00:39:48.575 Command & Feature Lockdown Capability: Not Supported 00:39:48.575 Abort Command Limit: 4 00:39:48.575 Async Event Request Limit: 4 00:39:48.575 Number of Firmware Slots: N/A 00:39:48.575 Firmware Slot 1 Read-Only: N/A 00:39:48.575 Firmware Activation Without Reset: N/A 00:39:48.575 Multiple Update Detection Support: N/A 00:39:48.575 Firmware Update Granularity: No Information Provided 00:39:48.575 Per-Namespace SMART Log: Yes 00:39:48.575 Asymmetric Namespace Access Log Page: Not Supported 00:39:48.575 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:39:48.575 Command Effects Log Page: Supported 00:39:48.575 Get Log Page Extended Data: Supported 00:39:48.575 Telemetry Log Pages: Not Supported 00:39:48.575 Persistent Event Log Pages: Not Supported 00:39:48.575 Supported Log Pages Log Page: May Support 00:39:48.575 Commands Supported & Effects Log Page: Not Supported 00:39:48.575 Feature Identifiers & Effects Log Page:May Support 00:39:48.575 NVMe-MI Commands & Effects Log Page: May Support 00:39:48.575 Data Area 4 for Telemetry Log: Not Supported 00:39:48.575 Error Log Page Entries Supported: 1 00:39:48.575 Keep Alive: Not Supported 00:39:48.575 00:39:48.575 NVM Command Set Attributes 00:39:48.575 ========================== 00:39:48.575 Submission Queue Entry Size 00:39:48.575 Max: 64 00:39:48.575 Min: 64 00:39:48.575 Completion Queue Entry Size 00:39:48.575 Max: 16 00:39:48.575 Min: 16 00:39:48.575 Number of Namespaces: 256 00:39:48.575 Compare Command: Supported 00:39:48.575 Write Uncorrectable Command: Not Supported 00:39:48.575 Dataset Management Command: Supported 00:39:48.575 Write Zeroes Command: Supported 00:39:48.575 Set Features Save Field: Supported 00:39:48.575 Reservations: Not Supported 00:39:48.575 Timestamp: Supported 00:39:48.575 Copy: Supported 00:39:48.575 Volatile Write Cache: Present 00:39:48.575 Atomic Write Unit (Normal): 1 00:39:48.575 Atomic 
Write Unit (PFail): 1 00:39:48.575 Atomic Compare & Write Unit: 1 00:39:48.575 Fused Compare & Write: Not Supported 00:39:48.575 Scatter-Gather List 00:39:48.575 SGL Command Set: Supported 00:39:48.575 SGL Keyed: Not Supported 00:39:48.575 SGL Bit Bucket Descriptor: Not Supported 00:39:48.575 SGL Metadata Pointer: Not Supported 00:39:48.575 Oversized SGL: Not Supported 00:39:48.575 SGL Metadata Address: Not Supported 00:39:48.575 SGL Offset: Not Supported 00:39:48.575 Transport SGL Data Block: Not Supported 00:39:48.575 Replay Protected Memory Block: Not Supported 00:39:48.575 00:39:48.575 Firmware Slot Information 00:39:48.575 ========================= 00:39:48.575 Active slot: 1 00:39:48.575 Slot 1 Firmware Revision: 1.0 00:39:48.575 00:39:48.575 00:39:48.575 Commands Supported and Effects 00:39:48.575 ============================== 00:39:48.575 Admin Commands 00:39:48.575 -------------- 00:39:48.575 Delete I/O Submission Queue (00h): Supported 00:39:48.575 Create I/O Submission Queue (01h): Supported 00:39:48.575 Get Log Page (02h): Supported 00:39:48.575 Delete I/O Completion Queue (04h): Supported 00:39:48.575 Create I/O Completion Queue (05h): Supported 00:39:48.575 Identify (06h): Supported 00:39:48.575 Abort (08h): Supported 00:39:48.575 Set Features (09h): Supported 00:39:48.575 Get Features (0Ah): Supported 00:39:48.575 Asynchronous Event Request (0Ch): Supported 00:39:48.575 Namespace Attachment (15h): Supported NS-Inventory-Change 00:39:48.575 Directive Send (19h): Supported 00:39:48.575 Directive Receive (1Ah): Supported 00:39:48.575 Virtualization Management (1Ch): Supported 00:39:48.575 Doorbell Buffer Config (7Ch): Supported 00:39:48.575 Format NVM (80h): Supported LBA-Change 00:39:48.575 I/O Commands 00:39:48.575 ------------ 00:39:48.575 Flush (00h): Supported LBA-Change 00:39:48.575 Write (01h): Supported LBA-Change 00:39:48.575 Read (02h): Supported 00:39:48.575 Compare (05h): Supported 00:39:48.575 Write Zeroes (08h): Supported LBA-Change 00:39:48.575 Dataset Management (09h): Supported LBA-Change 00:39:48.575 Unknown (0Ch): Supported 00:39:48.575 Unknown (12h): Supported 00:39:48.575 Copy (19h): Supported LBA-Change 00:39:48.575 Unknown (1Dh): Supported LBA-Change 00:39:48.575 00:39:48.575 Error Log 00:39:48.575 ========= 00:39:48.575 00:39:48.575 Arbitration 00:39:48.575 =========== 00:39:48.575 Arbitration Burst: no limit 00:39:48.575 00:39:48.575 Power Management 00:39:48.575 ================ 00:39:48.575 Number of Power States: 1 00:39:48.575 Current Power State: Power State #0 00:39:48.575 Power State #0: 00:39:48.575 Max Power: 25.00 W 00:39:48.575 Non-Operational State: Operational 00:39:48.575 Entry Latency: 16 microseconds 00:39:48.575 Exit Latency: 4 microseconds 00:39:48.575 Relative Read Throughput: 0 00:39:48.575 Relative Read Latency: 0 00:39:48.575 Relative Write Throughput: 0 00:39:48.575 Relative Write Latency: 0 00:39:48.575 Idle Power: Not Reported 00:39:48.575 Active Power: Not Reported 00:39:48.575 Non-Operational Permissive Mode: Not Supported 00:39:48.575 00:39:48.575 Health Information 00:39:48.575 ================== 00:39:48.575 Critical Warnings: 00:39:48.576 Available Spare Space: OK 00:39:48.576 Temperature: OK 00:39:48.576 Device Reliability: OK 00:39:48.576 Read Only: No 00:39:48.576 Volatile Memory Backup: OK 00:39:48.576 Current Temperature: 323 Kelvin (50 Celsius) 00:39:48.576 Temperature Threshold: 343 Kelvin (70 Celsius) 00:39:48.576 Available Spare: 0% 00:39:48.576 Available Spare Threshold: 0% 00:39:48.576 Life Percentage Used: 0% 
00:39:48.576 Data Units Read: 3325 00:39:48.576 Data Units Written: 2997 00:39:48.576 Host Read Commands: 177012 00:39:48.576 Host Write Commands: 190173 00:39:48.576 Controller Busy Time: 0 minutes 00:39:48.576 Power Cycles: 0 00:39:48.576 Power On Hours: 0 hours 00:39:48.576 Unsafe Shutdowns: 0 00:39:48.576 Unrecoverable Media Errors: 0 00:39:48.576 Lifetime Error Log Entries: 0 00:39:48.576 Warning Temperature Time: 0 minutes 00:39:48.576 Critical Temperature Time: 0 minutes 00:39:48.576 00:39:48.576 Number of Queues 00:39:48.576 ================ 00:39:48.576 Number of I/O Submission Queues: 64 00:39:48.576 Number of I/O Completion Queues: 64 00:39:48.576 00:39:48.576 ZNS Specific Controller Data 00:39:48.576 ============================ 00:39:48.576 Zone Append Size Limit: 0 00:39:48.576 00:39:48.576 00:39:48.576 Active Namespaces 00:39:48.576 ================= 00:39:48.576 Namespace ID:1 00:39:48.576 Error Recovery Timeout: Unlimited 00:39:48.576 Command Set Identifier: NVM (00h) 00:39:48.576 Deallocate: Supported 00:39:48.576 Deallocated/Unwritten Error: Supported 00:39:48.576 Deallocated Read Value: All 0x00 00:39:48.576 Deallocate in Write Zeroes: Not Supported 00:39:48.576 Deallocated Guard Field: 0xFFFF 00:39:48.576 Flush: Supported 00:39:48.576 Reservation: Not Supported 00:39:48.576 Namespace Sharing Capabilities: Private 00:39:48.576 Size (in LBAs): 1310720 (5GiB) 00:39:48.576 Capacity (in LBAs): 1310720 (5GiB) 00:39:48.576 Utilization (in LBAs): 1310720 (5GiB) 00:39:48.576 Thin Provisioning: Not Supported 00:39:48.576 Per-NS Atomic Units: No 00:39:48.576 Maximum Single Source Range Length: 128 00:39:48.576 Maximum Copy Length: 128 00:39:48.576 Maximum Source Range Count: 128 00:39:48.576 NGUID/EUI64 Never Reused: No 00:39:48.576 Namespace Write Protected: No 00:39:48.576 Number of LBA Formats: 8 00:39:48.576 Current LBA Format: LBA Format #04 00:39:48.576 LBA Format #00: Data Size: 512 Metadata Size: 0 00:39:48.576 LBA Format #01: Data Size: 512 Metadata Size: 8 00:39:48.576 LBA Format #02: Data Size: 512 Metadata Size: 16 00:39:48.576 LBA Format #03: Data Size: 512 Metadata Size: 64 00:39:48.576 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:39:48.576 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:39:48.576 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:39:48.576 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:39:48.576 00:39:48.576 07:50:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:39:48.576 07:50:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:39:48.835 ===================================================== 00:39:48.835 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:39:48.835 ===================================================== 00:39:48.835 Controller Capabilities/Features 00:39:48.835 ================================ 00:39:48.835 Vendor ID: 1b36 00:39:48.835 Subsystem Vendor ID: 1af4 00:39:48.835 Serial Number: 12340 00:39:48.835 Model Number: QEMU NVMe Ctrl 00:39:48.835 Firmware Version: 8.0.0 00:39:48.835 Recommended Arb Burst: 6 00:39:48.835 IEEE OUI Identifier: 00 54 52 00:39:48.835 Multi-path I/O 00:39:48.835 May have multiple subsystem ports: No 00:39:48.835 May have multiple controllers: No 00:39:48.835 Associated with SR-IOV VF: No 00:39:48.835 Max Data Transfer Size: 524288 00:39:48.835 Max Number of Namespaces: 256 00:39:48.835 Max Number of I/O Queues: 64 00:39:48.835 NVMe Specification Version (VS): 1.4 
00:39:48.835 NVMe Specification Version (Identify): 1.4 00:39:48.835 Maximum Queue Entries: 2048 00:39:48.835 Contiguous Queues Required: Yes 00:39:48.835 Arbitration Mechanisms Supported 00:39:48.835 Weighted Round Robin: Not Supported 00:39:48.835 Vendor Specific: Not Supported 00:39:48.835 Reset Timeout: 7500 ms 00:39:48.835 Doorbell Stride: 4 bytes 00:39:48.835 NVM Subsystem Reset: Not Supported 00:39:48.835 Command Sets Supported 00:39:48.835 NVM Command Set: Supported 00:39:48.835 Boot Partition: Not Supported 00:39:48.835 Memory Page Size Minimum: 4096 bytes 00:39:48.835 Memory Page Size Maximum: 65536 bytes 00:39:48.835 Persistent Memory Region: Not Supported 00:39:48.835 Optional Asynchronous Events Supported 00:39:48.835 Namespace Attribute Notices: Supported 00:39:48.835 Firmware Activation Notices: Not Supported 00:39:48.835 ANA Change Notices: Not Supported 00:39:48.835 PLE Aggregate Log Change Notices: Not Supported 00:39:48.835 LBA Status Info Alert Notices: Not Supported 00:39:48.835 EGE Aggregate Log Change Notices: Not Supported 00:39:48.835 Normal NVM Subsystem Shutdown event: Not Supported 00:39:48.835 Zone Descriptor Change Notices: Not Supported 00:39:48.835 Discovery Log Change Notices: Not Supported 00:39:48.835 Controller Attributes 00:39:48.835 128-bit Host Identifier: Not Supported 00:39:48.835 Non-Operational Permissive Mode: Not Supported 00:39:48.835 NVM Sets: Not Supported 00:39:48.835 Read Recovery Levels: Not Supported 00:39:48.835 Endurance Groups: Not Supported 00:39:48.835 Predictable Latency Mode: Not Supported 00:39:48.835 Traffic Based Keep Alive: Not Supported 00:39:48.835 Namespace Granularity: Not Supported 00:39:48.835 SQ Associations: Not Supported 00:39:48.835 UUID List: Not Supported 00:39:48.835 Multi-Domain Subsystem: Not Supported 00:39:48.835 Fixed Capacity Management: Not Supported 00:39:48.835 Variable Capacity Management: Not Supported 00:39:48.835 Delete Endurance Group: Not Supported 00:39:48.835 Delete NVM Set: Not Supported 00:39:48.835 Extended LBA Formats Supported: Supported 00:39:48.835 Flexible Data Placement Supported: Not Supported 00:39:48.835 00:39:48.835 Controller Memory Buffer Support 00:39:48.835 ================================ 00:39:48.835 Supported: No 00:39:48.835 00:39:48.835 Persistent Memory Region Support 00:39:48.835 ================================ 00:39:48.835 Supported: No 00:39:48.835 00:39:48.835 Admin Command Set Attributes 00:39:48.835 ============================ 00:39:48.835 Security Send/Receive: Not Supported 00:39:48.835 Format NVM: Supported 00:39:48.835 Firmware Activate/Download: Not Supported 00:39:48.835 Namespace Management: Supported 00:39:48.835 Device Self-Test: Not Supported 00:39:48.835 Directives: Supported 00:39:48.835 NVMe-MI: Not Supported 00:39:48.835 Virtualization Management: Not Supported 00:39:48.835 Doorbell Buffer Config: Supported 00:39:48.835 Get LBA Status Capability: Not Supported 00:39:48.835 Command & Feature Lockdown Capability: Not Supported 00:39:48.835 Abort Command Limit: 4 00:39:48.835 Async Event Request Limit: 4 00:39:48.835 Number of Firmware Slots: N/A 00:39:48.835 Firmware Slot 1 Read-Only: N/A 00:39:48.835 Firmware Activation Without Reset: N/A 00:39:48.835 Multiple Update Detection Support: N/A 00:39:48.835 Firmware Update Granularity: No Information Provided 00:39:48.835 Per-Namespace SMART Log: Yes 00:39:48.835 Asymmetric Namespace Access Log Page: Not Supported 00:39:48.835 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:39:48.835 Command Effects Log Page: 
Supported 00:39:48.835 Get Log Page Extended Data: Supported 00:39:48.835 Telemetry Log Pages: Not Supported 00:39:48.835 Persistent Event Log Pages: Not Supported 00:39:48.835 Supported Log Pages Log Page: May Support 00:39:48.835 Commands Supported & Effects Log Page: Not Supported 00:39:48.835 Feature Identifiers & Effects Log Page: May Support 00:39:48.835 NVMe-MI Commands & Effects Log Page: May Support 00:39:48.835 Data Area 4 for Telemetry Log: Not Supported 00:39:48.835 Error Log Page Entries Supported: 1 00:39:48.835 Keep Alive: Not Supported 00:39:48.835 00:39:48.835 NVM Command Set Attributes 00:39:48.835 ========================== 00:39:48.835 Submission Queue Entry Size 00:39:48.835 Max: 64 00:39:48.836 Min: 64 00:39:48.836 Completion Queue Entry Size 00:39:48.836 Max: 16 00:39:48.836 Min: 16 00:39:48.836 Number of Namespaces: 256 00:39:48.836 Compare Command: Supported 00:39:48.836 Write Uncorrectable Command: Not Supported 00:39:48.836 Dataset Management Command: Supported 00:39:48.836 Write Zeroes Command: Supported 00:39:48.836 Set Features Save Field: Supported 00:39:48.836 Reservations: Not Supported 00:39:48.836 Timestamp: Supported 00:39:48.836 Copy: Supported 00:39:48.836 Volatile Write Cache: Present 00:39:48.836 Atomic Write Unit (Normal): 1 00:39:48.836 Atomic Write Unit (PFail): 1 00:39:48.836 Atomic Compare & Write Unit: 1 00:39:48.836 Fused Compare & Write: Not Supported 00:39:48.836 Scatter-Gather List 00:39:48.836 SGL Command Set: Supported 00:39:48.836 SGL Keyed: Not Supported 00:39:48.836 SGL Bit Bucket Descriptor: Not Supported 00:39:48.836 SGL Metadata Pointer: Not Supported 00:39:48.836 Oversized SGL: Not Supported 00:39:48.836 SGL Metadata Address: Not Supported 00:39:48.836 SGL Offset: Not Supported 00:39:48.836 Transport SGL Data Block: Not Supported 00:39:48.836 Replay Protected Memory Block: Not Supported 00:39:48.836 00:39:48.836 Firmware Slot Information 00:39:48.836 ========================= 00:39:48.836 Active slot: 1 00:39:48.836 Slot 1 Firmware Revision: 1.0 00:39:48.836 00:39:48.836 00:39:48.836 Commands Supported and Effects 00:39:48.836 ============================== 00:39:48.836 Admin Commands 00:39:48.836 -------------- 00:39:48.836 Delete I/O Submission Queue (00h): Supported 00:39:48.836 Create I/O Submission Queue (01h): Supported 00:39:48.836 Get Log Page (02h): Supported 00:39:48.836 Delete I/O Completion Queue (04h): Supported 00:39:48.836 Create I/O Completion Queue (05h): Supported 00:39:48.836 Identify (06h): Supported 00:39:48.836 Abort (08h): Supported 00:39:48.836 Set Features (09h): Supported 00:39:48.836 Get Features (0Ah): Supported 00:39:48.836 Asynchronous Event Request (0Ch): Supported 00:39:48.836 Namespace Attachment (15h): Supported NS-Inventory-Change 00:39:48.836 Directive Send (19h): Supported 00:39:48.836 Directive Receive (1Ah): Supported 00:39:48.836 Virtualization Management (1Ch): Supported 00:39:48.836 Doorbell Buffer Config (7Ch): Supported 00:39:48.836 Format NVM (80h): Supported LBA-Change 00:39:48.836 I/O Commands 00:39:48.836 ------------ 00:39:48.836 Flush (00h): Supported LBA-Change 00:39:48.836 Write (01h): Supported LBA-Change 00:39:48.836 Read (02h): Supported 00:39:48.836 Compare (05h): Supported 00:39:48.836 Write Zeroes (08h): Supported LBA-Change 00:39:48.836 Dataset Management (09h): Supported LBA-Change 00:39:48.836 Unknown (0Ch): Supported 00:39:48.836 Unknown (12h): Supported 00:39:48.836 Copy (19h): Supported LBA-Change 00:39:48.836 Unknown (1Dh): Supported LBA-Change 00:39:48.836 
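The identify dump in this run is plain tool output; for anyone re-running it outside the harness, a minimal sketch against the same emulated controller looks like the following (the identify invocation is taken verbatim from this log; the setup step and hugepage size are assumptions that vary per environment):

    # Bind NVMe devices to a userspace driver and reserve hugepages first;
    # scripts/setup.sh ships with SPDK, and HUGEMEM=2048 (MB) is an assumption.
    sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    # Dump controller, namespace, and log-page details for the PCIe
    # controller at BDF 0000:00:10.0, as this run does for each controller.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:PCIe traddr:0000:00:10.0' -i 0

As a sanity check on the namespace figures further down: 1310720 LBAs at the active 4096-byte LBA format #04 is 1310720 * 4096 = 5368709120 bytes, exactly the reported 5GiB.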
00:39:48.836 Error Log 00:39:48.836 ========= 00:39:48.836 00:39:48.836 Arbitration 00:39:48.836 =========== 00:39:48.836 Arbitration Burst: no limit 00:39:48.836 00:39:48.836 Power Management 00:39:48.836 ================ 00:39:48.836 Number of Power States: 1 00:39:48.836 Current Power State: Power State #0 00:39:48.836 Power State #0: 00:39:48.836 Max Power: 25.00 W 00:39:48.836 Non-Operational State: Operational 00:39:48.836 Entry Latency: 16 microseconds 00:39:48.836 Exit Latency: 4 microseconds 00:39:48.836 Relative Read Throughput: 0 00:39:48.836 Relative Read Latency: 0 00:39:48.836 Relative Write Throughput: 0 00:39:48.836 Relative Write Latency: 0 00:39:49.095 Idle Power: Not Reported 00:39:49.095 Active Power: Not Reported 00:39:49.095 Non-Operational Permissive Mode: Not Supported 00:39:49.095 00:39:49.095 Health Information 00:39:49.095 ================== 00:39:49.095 Critical Warnings: 00:39:49.095 Available Spare Space: OK 00:39:49.095 Temperature: OK 00:39:49.095 Device Reliability: OK 00:39:49.095 Read Only: No 00:39:49.095 Volatile Memory Backup: OK 00:39:49.095 Current Temperature: 323 Kelvin (50 Celsius) 00:39:49.095 Temperature Threshold: 343 Kelvin (70 Celsius) 00:39:49.095 Available Spare: 0% 00:39:49.095 Available Spare Threshold: 0% 00:39:49.095 Life Percentage Used: 0% 00:39:49.095 Data Units Read: 3325 00:39:49.095 Data Units Written: 2997 00:39:49.095 Host Read Commands: 177012 00:39:49.095 Host Write Commands: 190173 00:39:49.095 Controller Busy Time: 0 minutes 00:39:49.095 Power Cycles: 0 00:39:49.095 Power On Hours: 0 hours 00:39:49.095 Unsafe Shutdowns: 0 00:39:49.095 Unrecoverable Media Errors: 0 00:39:49.095 Lifetime Error Log Entries: 0 00:39:49.095 Warning Temperature Time: 0 minutes 00:39:49.095 Critical Temperature Time: 0 minutes 00:39:49.095 00:39:49.095 Number of Queues 00:39:49.095 ================ 00:39:49.095 Number of I/O Submission Queues: 64 00:39:49.095 Number of I/O Completion Queues: 64 00:39:49.095 00:39:49.095 ZNS Specific Controller Data 00:39:49.095 ============================ 00:39:49.095 Zone Append Size Limit: 0 00:39:49.095 00:39:49.095 00:39:49.095 Active Namespaces 00:39:49.095 ================= 00:39:49.095 Namespace ID:1 00:39:49.095 Error Recovery Timeout: Unlimited 00:39:49.095 Command Set Identifier: NVM (00h) 00:39:49.095 Deallocate: Supported 00:39:49.095 Deallocated/Unwritten Error: Supported 00:39:49.095 Deallocated Read Value: All 0x00 00:39:49.095 Deallocate in Write Zeroes: Not Supported 00:39:49.095 Deallocated Guard Field: 0xFFFF 00:39:49.095 Flush: Supported 00:39:49.095 Reservation: Not Supported 00:39:49.095 Namespace Sharing Capabilities: Private 00:39:49.095 Size (in LBAs): 1310720 (5GiB) 00:39:49.095 Capacity (in LBAs): 1310720 (5GiB) 00:39:49.095 Utilization (in LBAs): 1310720 (5GiB) 00:39:49.095 Thin Provisioning: Not Supported 00:39:49.095 Per-NS Atomic Units: No 00:39:49.095 Maximum Single Source Range Length: 128 00:39:49.095 Maximum Copy Length: 128 00:39:49.095 Maximum Source Range Count: 128 00:39:49.095 NGUID/EUI64 Never Reused: No 00:39:49.095 Namespace Write Protected: No 00:39:49.095 Number of LBA Formats: 8 00:39:49.095 Current LBA Format: LBA Format #04 00:39:49.095 LBA Format #00: Data Size: 512 Metadata Size: 0 00:39:49.095 LBA Format #01: Data Size: 512 Metadata Size: 8 00:39:49.095 LBA Format #02: Data Size: 512 Metadata Size: 16 00:39:49.095 LBA Format #03: Data Size: 512 Metadata Size: 64 00:39:49.095 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:39:49.095 LBA Format #05: Data Size: 
4096 Metadata Size: 8 00:39:49.095 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:39:49.095 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:39:49.095 00:39:49.095 00:39:49.095 real 0m0.650s 00:39:49.095 user 0m0.235s 00:39:49.095 sys 0m0.321s 00:39:49.095 07:50:22 nvme.nvme_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:49.095 ************************************ 00:39:49.095 07:50:22 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:39:49.095 END TEST nvme_identify 00:39:49.095 ************************************ 00:39:49.095 07:50:22 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:39:49.095 07:50:22 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:49.095 07:50:22 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:49.095 07:50:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:39:49.095 ************************************ 00:39:49.095 START TEST nvme_perf 00:39:49.095 ************************************ 00:39:49.095 07:50:22 nvme.nvme_perf -- common/autotest_common.sh@1121 -- # nvme_perf 00:39:49.095 07:50:22 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:39:50.471 Initializing NVMe Controllers 00:39:50.471 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:39:50.471 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:39:50.471 Initialization complete. Launching workers. 00:39:50.471 ======================================================== 00:39:50.471 Latency(us) 00:39:50.471 Device Information : IOPS MiB/s Average min max 00:39:50.471 PCIE (0000:00:10.0) NSID 1 from core 0: 92519.30 1084.21 1382.56 667.19 6103.16 00:39:50.471 ======================================================== 00:39:50.471 Total : 92519.30 1084.21 1382.56 667.19 6103.16 00:39:50.471 00:39:50.471 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:39:50.471 ================================================================================= 00:39:50.471 1.00000% : 838.705us 00:39:50.471 10.00000% : 955.733us 00:39:50.471 25.00000% : 1100.069us 00:39:50.471 50.00000% : 1341.928us 00:39:50.471 75.00000% : 1575.985us 00:39:50.471 90.00000% : 1739.825us 00:39:50.471 95.00000% : 1950.476us 00:39:50.471 98.00000% : 2668.251us 00:39:50.471 99.00000% : 2933.516us 00:39:50.471 99.50000% : 3370.423us 00:39:50.471 99.90000% : 4930.804us 00:39:50.471 99.99000% : 5898.240us 00:39:50.471 99.99900% : 6116.693us 00:39:50.471 99.99990% : 6116.693us 00:39:50.471 99.99999% : 6116.693us 00:39:50.471 00:39:50.471 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:39:50.471 ============================================================================== 00:39:50.471 Range in us Cumulative IO count 00:39:50.471 667.063 - 670.964: 0.0022% ( 2) 00:39:50.471 678.766 - 682.667: 0.0065% ( 4) 00:39:50.471 682.667 - 686.568: 0.0076% ( 1) 00:39:50.471 686.568 - 690.469: 0.0086% ( 1) 00:39:50.471 702.171 - 706.072: 0.0097% ( 1) 00:39:50.471 706.072 - 709.973: 0.0108% ( 1) 00:39:50.471 717.775 - 721.676: 0.0119% ( 1) 00:39:50.471 721.676 - 725.577: 0.0130% ( 1) 00:39:50.471 725.577 - 729.478: 0.0162% ( 3) 00:39:50.471 733.379 - 737.280: 0.0216% ( 5) 00:39:50.471 737.280 - 741.181: 0.0249% ( 3) 00:39:50.471 741.181 - 745.082: 0.0259% ( 1) 00:39:50.471 745.082 - 748.983: 0.0313% ( 5) 00:39:50.471 748.983 - 752.884: 0.0357% ( 4) 00:39:50.471 752.884 - 756.785: 0.0421% ( 6) 00:39:50.471 756.785 - 760.686: 0.0465% ( 4) 00:39:50.471 
760.686 - 764.587: 0.0616% ( 14) 00:39:50.471 764.587 - 768.488: 0.0702% ( 8) 00:39:50.471 768.488 - 772.389: 0.0864% ( 15) 00:39:50.471 772.389 - 776.290: 0.1016% ( 14) 00:39:50.471 776.290 - 780.190: 0.1329% ( 29) 00:39:50.471 780.190 - 784.091: 0.1567% ( 22) 00:39:50.471 784.091 - 787.992: 0.1815% ( 23) 00:39:50.471 787.992 - 791.893: 0.2204% ( 36) 00:39:50.471 791.893 - 795.794: 0.2539% ( 31) 00:39:50.471 795.794 - 799.695: 0.3004% ( 43) 00:39:50.471 799.695 - 803.596: 0.3361% ( 33) 00:39:50.471 803.596 - 807.497: 0.3912% ( 51) 00:39:50.471 807.497 - 811.398: 0.4387% ( 44) 00:39:50.471 811.398 - 815.299: 0.5154% ( 71) 00:39:50.471 815.299 - 819.200: 0.5749% ( 55) 00:39:50.471 819.200 - 823.101: 0.6537% ( 73) 00:39:50.471 823.101 - 827.002: 0.7229% ( 64) 00:39:50.471 827.002 - 830.903: 0.8223% ( 92) 00:39:50.471 830.903 - 834.804: 0.9217% ( 92) 00:39:50.471 834.804 - 838.705: 1.0222% ( 93) 00:39:50.471 838.705 - 842.606: 1.1324% ( 102) 00:39:50.472 842.606 - 846.507: 1.2416% ( 101) 00:39:50.472 846.507 - 850.408: 1.3637% ( 113) 00:39:50.472 850.408 - 854.309: 1.4869% ( 114) 00:39:50.472 854.309 - 858.210: 1.6046% ( 109) 00:39:50.472 858.210 - 862.110: 1.7386% ( 124) 00:39:50.472 862.110 - 866.011: 1.8910% ( 141) 00:39:50.472 866.011 - 869.912: 2.0282% ( 127) 00:39:50.472 869.912 - 873.813: 2.1827% ( 143) 00:39:50.472 873.813 - 877.714: 2.3740% ( 177) 00:39:50.472 877.714 - 881.615: 2.6042% ( 213) 00:39:50.472 881.615 - 885.516: 2.8311% ( 210) 00:39:50.472 885.516 - 889.417: 3.0947% ( 244) 00:39:50.472 889.417 - 893.318: 3.3562% ( 242) 00:39:50.472 893.318 - 897.219: 3.6512% ( 273) 00:39:50.472 897.219 - 901.120: 3.9916% ( 315) 00:39:50.472 901.120 - 905.021: 4.3568% ( 338) 00:39:50.472 905.021 - 908.922: 4.7221% ( 338) 00:39:50.472 908.922 - 912.823: 5.1748% ( 419) 00:39:50.472 912.823 - 916.724: 5.6017% ( 395) 00:39:50.472 916.724 - 920.625: 6.0112% ( 379) 00:39:50.472 920.625 - 924.526: 6.4942% ( 447) 00:39:50.472 924.526 - 928.427: 6.9513% ( 423) 00:39:50.472 928.427 - 932.328: 7.4267% ( 440) 00:39:50.472 932.328 - 936.229: 7.8979% ( 436) 00:39:50.472 936.229 - 940.130: 8.3539% ( 422) 00:39:50.472 940.130 - 944.030: 8.8304% ( 441) 00:39:50.472 944.030 - 947.931: 9.2464% ( 385) 00:39:50.472 947.931 - 951.832: 9.7251% ( 443) 00:39:50.472 951.832 - 955.733: 10.1962% ( 436) 00:39:50.472 955.733 - 959.634: 10.6306% ( 402) 00:39:50.472 959.634 - 963.535: 11.0823% ( 418) 00:39:50.472 963.535 - 967.436: 11.5059% ( 392) 00:39:50.472 967.436 - 971.337: 11.9554% ( 416) 00:39:50.472 971.337 - 975.238: 12.3660% ( 380) 00:39:50.472 975.238 - 979.139: 12.7583% ( 363) 00:39:50.472 979.139 - 983.040: 13.1829% ( 393) 00:39:50.472 983.040 - 986.941: 13.5676% ( 356) 00:39:50.472 986.941 - 990.842: 13.9663% ( 369) 00:39:50.472 990.842 - 994.743: 14.3575% ( 362) 00:39:50.472 994.743 - 998.644: 14.7551% ( 368) 00:39:50.472 998.644 - 1006.446: 15.5602% ( 745) 00:39:50.472 1006.446 - 1014.248: 16.3263% ( 709) 00:39:50.472 1014.248 - 1022.050: 17.1259% ( 740) 00:39:50.472 1022.050 - 1029.851: 17.8629% ( 682) 00:39:50.472 1029.851 - 1037.653: 18.6592% ( 737) 00:39:50.472 1037.653 - 1045.455: 19.4470% ( 729) 00:39:50.472 1045.455 - 1053.257: 20.2098% ( 706) 00:39:50.472 1053.257 - 1061.059: 21.0159% ( 746) 00:39:50.472 1061.059 - 1068.861: 21.7713% ( 699) 00:39:50.472 1068.861 - 1076.663: 22.5849% ( 753) 00:39:50.472 1076.663 - 1084.465: 23.3608% ( 718) 00:39:50.472 1084.465 - 1092.267: 24.1896% ( 767) 00:39:50.472 1092.267 - 1100.069: 25.0108% ( 760) 00:39:50.472 1100.069 - 1107.870: 25.8169% ( 746) 
00:39:50.472 1107.870 - 1115.672: 26.6414% ( 763) 00:39:50.472 1115.672 - 1123.474: 27.4626% ( 760) 00:39:50.472 1123.474 - 1131.276: 28.2763% ( 753) 00:39:50.472 1131.276 - 1139.078: 29.1375% ( 797) 00:39:50.472 1139.078 - 1146.880: 29.9393% ( 742) 00:39:50.472 1146.880 - 1154.682: 30.7897% ( 787) 00:39:50.472 1154.682 - 1162.484: 31.6131% ( 762) 00:39:50.472 1162.484 - 1170.286: 32.4300% ( 756) 00:39:50.472 1170.286 - 1178.088: 33.2696% ( 777) 00:39:50.472 1178.088 - 1185.890: 34.1005% ( 769) 00:39:50.472 1185.890 - 1193.691: 34.9142% ( 753) 00:39:50.472 1193.691 - 1201.493: 35.7538% ( 777) 00:39:50.472 1201.493 - 1209.295: 36.5729% ( 758) 00:39:50.472 1209.295 - 1217.097: 37.3941% ( 760) 00:39:50.472 1217.097 - 1224.899: 38.2315% ( 775) 00:39:50.472 1224.899 - 1232.701: 39.0355% ( 744) 00:39:50.472 1232.701 - 1240.503: 39.8610% ( 764) 00:39:50.472 1240.503 - 1248.305: 40.6866% ( 764) 00:39:50.472 1248.305 - 1256.107: 41.4927% ( 746) 00:39:50.472 1256.107 - 1263.909: 42.3312% ( 776) 00:39:50.472 1263.909 - 1271.710: 43.1427% ( 751) 00:39:50.472 1271.710 - 1279.512: 43.9629% ( 759) 00:39:50.472 1279.512 - 1287.314: 44.7863% ( 762) 00:39:50.472 1287.314 - 1295.116: 45.6043% ( 757) 00:39:50.472 1295.116 - 1302.918: 46.4179% ( 753) 00:39:50.472 1302.918 - 1310.720: 47.2424% ( 763) 00:39:50.472 1310.720 - 1318.522: 48.0647% ( 761) 00:39:50.472 1318.522 - 1326.324: 48.8676% ( 743) 00:39:50.472 1326.324 - 1334.126: 49.6877% ( 759) 00:39:50.472 1334.126 - 1341.928: 50.5187% ( 769) 00:39:50.472 1341.928 - 1349.730: 51.3237% ( 745) 00:39:50.472 1349.730 - 1357.531: 52.1644% ( 778) 00:39:50.472 1357.531 - 1365.333: 52.9867% ( 761) 00:39:50.472 1365.333 - 1373.135: 53.8090% ( 761) 00:39:50.472 1373.135 - 1380.937: 54.6508% ( 779) 00:39:50.472 1380.937 - 1388.739: 55.4817% ( 769) 00:39:50.472 1388.739 - 1396.541: 56.3029% ( 760) 00:39:50.472 1396.541 - 1404.343: 57.1469% ( 781) 00:39:50.472 1404.343 - 1412.145: 57.9541% ( 747) 00:39:50.472 1412.145 - 1419.947: 58.8066% ( 789) 00:39:50.472 1419.947 - 1427.749: 59.6214% ( 754) 00:39:50.472 1427.749 - 1435.550: 60.4426% ( 760) 00:39:50.472 1435.550 - 1443.352: 61.2800% ( 775) 00:39:50.472 1443.352 - 1451.154: 62.0959% ( 755) 00:39:50.472 1451.154 - 1458.956: 62.9301% ( 772) 00:39:50.472 1458.956 - 1466.758: 63.7556% ( 764) 00:39:50.472 1466.758 - 1474.560: 64.5801% ( 763) 00:39:50.472 1474.560 - 1482.362: 65.4165% ( 774) 00:39:50.472 1482.362 - 1490.164: 66.2496% ( 771) 00:39:50.472 1490.164 - 1497.966: 67.0989% ( 786) 00:39:50.472 1497.966 - 1505.768: 67.9288% ( 768) 00:39:50.472 1505.768 - 1513.570: 68.7468% ( 757) 00:39:50.472 1513.570 - 1521.371: 69.5723% ( 764) 00:39:50.472 1521.371 - 1529.173: 70.4087% ( 774) 00:39:50.472 1529.173 - 1536.975: 71.2407% ( 770) 00:39:50.472 1536.975 - 1544.777: 72.0922% ( 788) 00:39:50.472 1544.777 - 1552.579: 72.9167% ( 763) 00:39:50.472 1552.579 - 1560.381: 73.7411% ( 763) 00:39:50.472 1560.381 - 1568.183: 74.5861% ( 782) 00:39:50.472 1568.183 - 1575.985: 75.4020% ( 755) 00:39:50.472 1575.985 - 1583.787: 76.2740% ( 807) 00:39:50.472 1583.787 - 1591.589: 77.0628% ( 730) 00:39:50.472 1591.589 - 1599.390: 77.9121% ( 786) 00:39:50.472 1599.390 - 1607.192: 78.7506% ( 776) 00:39:50.472 1607.192 - 1614.994: 79.5881% ( 775) 00:39:50.472 1614.994 - 1622.796: 80.4331% ( 782) 00:39:50.472 1622.796 - 1630.598: 81.2414% ( 748) 00:39:50.472 1630.598 - 1638.400: 82.0853% ( 781) 00:39:50.472 1638.400 - 1646.202: 82.9141% ( 767) 00:39:50.472 1646.202 - 1654.004: 83.7256% ( 751) 00:39:50.472 1654.004 - 1661.806: 84.5544% ( 767) 
00:39:50.472 1661.806 - 1669.608: 85.3151% ( 704) 00:39:50.472 1669.608 - 1677.410: 86.0953% ( 722) 00:39:50.472 1677.410 - 1685.211: 86.7879% ( 641) 00:39:50.472 1685.211 - 1693.013: 87.4643% ( 626) 00:39:50.472 1693.013 - 1700.815: 88.0587% ( 550) 00:39:50.472 1700.815 - 1708.617: 88.5503% ( 455) 00:39:50.472 1708.617 - 1716.419: 89.0204% ( 435) 00:39:50.472 1716.419 - 1724.221: 89.4126% ( 363) 00:39:50.472 1724.221 - 1732.023: 89.7800% ( 340) 00:39:50.472 1732.023 - 1739.825: 90.1236% ( 318) 00:39:50.472 1739.825 - 1747.627: 90.4175% ( 272) 00:39:50.472 1747.627 - 1755.429: 90.7417% ( 300) 00:39:50.472 1755.429 - 1763.230: 90.9967% ( 236) 00:39:50.472 1763.230 - 1771.032: 91.2928% ( 274) 00:39:50.472 1771.032 - 1778.834: 91.5575% ( 245) 00:39:50.472 1778.834 - 1786.636: 91.8309% ( 253) 00:39:50.472 1786.636 - 1794.438: 92.0881% ( 238) 00:39:50.472 1794.438 - 1802.240: 92.3409% ( 234) 00:39:50.472 1802.240 - 1810.042: 92.5808% ( 222) 00:39:50.472 1810.042 - 1817.844: 92.8207% ( 222) 00:39:50.472 1817.844 - 1825.646: 93.0336% ( 197) 00:39:50.472 1825.646 - 1833.448: 93.2573% ( 207) 00:39:50.472 1833.448 - 1841.250: 93.4399% ( 169) 00:39:50.472 1841.250 - 1849.051: 93.6322% ( 178) 00:39:50.472 1849.051 - 1856.853: 93.7986% ( 154) 00:39:50.472 1856.853 - 1864.655: 93.9521% ( 142) 00:39:50.472 1864.655 - 1872.457: 94.1044% ( 141) 00:39:50.472 1872.457 - 1880.259: 94.2244% ( 111) 00:39:50.472 1880.259 - 1888.061: 94.3530% ( 119) 00:39:50.472 1888.061 - 1895.863: 94.4675% ( 106) 00:39:50.472 1895.863 - 1903.665: 94.5647% ( 90) 00:39:50.472 1903.665 - 1911.467: 94.6555% ( 84) 00:39:50.472 1911.467 - 1919.269: 94.7430% ( 81) 00:39:50.472 1919.269 - 1927.070: 94.8273% ( 78) 00:39:50.472 1927.070 - 1934.872: 94.9051% ( 72) 00:39:50.472 1934.872 - 1942.674: 94.9689% ( 59) 00:39:50.472 1942.674 - 1950.476: 95.0413% ( 67) 00:39:50.472 1950.476 - 1958.278: 95.1083% ( 62) 00:39:50.472 1958.278 - 1966.080: 95.1731% ( 60) 00:39:50.472 1966.080 - 1973.882: 95.2390% ( 61) 00:39:50.472 1973.882 - 1981.684: 95.2920% ( 49) 00:39:50.472 1981.684 - 1989.486: 95.3482% ( 52) 00:39:50.472 1989.486 - 1997.288: 95.3979% ( 46) 00:39:50.472 1997.288 - 2012.891: 95.4962% ( 91) 00:39:50.472 2012.891 - 2028.495: 95.5848% ( 82) 00:39:50.472 2028.495 - 2044.099: 95.6691% ( 78) 00:39:50.472 2044.099 - 2059.703: 95.7361% ( 62) 00:39:50.472 2059.703 - 2075.307: 95.8009% ( 60) 00:39:50.472 2075.307 - 2090.910: 95.8625% ( 57) 00:39:50.472 2090.910 - 2106.514: 95.9176% ( 51) 00:39:50.472 2106.514 - 2122.118: 95.9835% ( 61) 00:39:50.472 2122.118 - 2137.722: 96.0365% ( 49) 00:39:50.472 2137.722 - 2153.326: 96.0959% ( 55) 00:39:50.472 2153.326 - 2168.930: 96.1445% ( 45) 00:39:50.472 2168.930 - 2184.533: 96.1986% ( 50) 00:39:50.472 2184.533 - 2200.137: 96.2537% ( 51) 00:39:50.472 2200.137 - 2215.741: 96.3185% ( 60) 00:39:50.472 2215.741 - 2231.345: 96.3693% ( 47) 00:39:50.472 2231.345 - 2246.949: 96.4287% ( 55) 00:39:50.472 2246.949 - 2262.552: 96.4860% ( 53) 00:39:50.472 2262.552 - 2278.156: 96.5411% ( 51) 00:39:50.472 2278.156 - 2293.760: 96.6016% ( 56) 00:39:50.472 2293.760 - 2309.364: 96.6654% ( 59) 00:39:50.472 2309.364 - 2324.968: 96.7259% ( 56) 00:39:50.472 2324.968 - 2340.571: 96.7864% ( 56) 00:39:50.472 2340.571 - 2356.175: 96.8447% ( 54) 00:39:50.472 2356.175 - 2371.779: 96.9063% ( 57) 00:39:50.472 2371.779 - 2387.383: 96.9701% ( 59) 00:39:50.472 2387.383 - 2402.987: 97.0306% ( 56) 00:39:50.472 2402.987 - 2418.590: 97.0933% ( 58) 00:39:50.472 2418.590 - 2434.194: 97.1635% ( 65) 00:39:50.472 2434.194 - 2449.798: 97.2229% ( 
55) 00:39:50.472 2449.798 - 2465.402: 97.2781% ( 51) 00:39:50.473 2465.402 - 2481.006: 97.3375% ( 55) 00:39:50.473 2481.006 - 2496.610: 97.4002% ( 58) 00:39:50.473 2496.610 - 2512.213: 97.4574% ( 53) 00:39:50.473 2512.213 - 2527.817: 97.5201% ( 58) 00:39:50.473 2527.817 - 2543.421: 97.5817% ( 57) 00:39:50.473 2543.421 - 2559.025: 97.6433% ( 57) 00:39:50.473 2559.025 - 2574.629: 97.6995% ( 52) 00:39:50.473 2574.629 - 2590.232: 97.7600% ( 56) 00:39:50.473 2590.232 - 2605.836: 97.8194% ( 55) 00:39:50.473 2605.836 - 2621.440: 97.8778% ( 54) 00:39:50.473 2621.440 - 2637.044: 97.9361% ( 54) 00:39:50.473 2637.044 - 2652.648: 97.9945% ( 54) 00:39:50.473 2652.648 - 2668.251: 98.0517% ( 53) 00:39:50.473 2668.251 - 2683.855: 98.1133% ( 57) 00:39:50.473 2683.855 - 2699.459: 98.1738% ( 56) 00:39:50.473 2699.459 - 2715.063: 98.2311% ( 53) 00:39:50.473 2715.063 - 2730.667: 98.2873% ( 52) 00:39:50.473 2730.667 - 2746.270: 98.3467% ( 55) 00:39:50.473 2746.270 - 2761.874: 98.4116% ( 60) 00:39:50.473 2761.874 - 2777.478: 98.4699% ( 54) 00:39:50.473 2777.478 - 2793.082: 98.5293% ( 55) 00:39:50.473 2793.082 - 2808.686: 98.5877% ( 54) 00:39:50.473 2808.686 - 2824.290: 98.6428% ( 51) 00:39:50.473 2824.290 - 2839.893: 98.7012% ( 54) 00:39:50.473 2839.893 - 2855.497: 98.7595% ( 54) 00:39:50.473 2855.497 - 2871.101: 98.8168% ( 53) 00:39:50.473 2871.101 - 2886.705: 98.8686% ( 48) 00:39:50.473 2886.705 - 2902.309: 98.9173% ( 45) 00:39:50.473 2902.309 - 2917.912: 98.9659% ( 45) 00:39:50.473 2917.912 - 2933.516: 99.0113% ( 42) 00:39:50.473 2933.516 - 2949.120: 99.0534% ( 39) 00:39:50.473 2949.120 - 2964.724: 99.0902% ( 34) 00:39:50.473 2964.724 - 2980.328: 99.1258% ( 33) 00:39:50.473 2980.328 - 2995.931: 99.1658% ( 37) 00:39:50.473 2995.931 - 3011.535: 99.1982% ( 30) 00:39:50.473 3011.535 - 3027.139: 99.2252% ( 25) 00:39:50.473 3027.139 - 3042.743: 99.2501% ( 23) 00:39:50.473 3042.743 - 3058.347: 99.2739% ( 22) 00:39:50.473 3058.347 - 3073.950: 99.2966% ( 21) 00:39:50.473 3073.950 - 3089.554: 99.3149% ( 17) 00:39:50.473 3089.554 - 3105.158: 99.3322% ( 16) 00:39:50.473 3105.158 - 3120.762: 99.3506% ( 17) 00:39:50.473 3120.762 - 3136.366: 99.3668% ( 15) 00:39:50.473 3136.366 - 3151.970: 99.3808% ( 13) 00:39:50.473 3151.970 - 3167.573: 99.3981% ( 16) 00:39:50.473 3167.573 - 3183.177: 99.4100% ( 11) 00:39:50.473 3183.177 - 3198.781: 99.4219% ( 11) 00:39:50.473 3198.781 - 3214.385: 99.4316% ( 9) 00:39:50.473 3214.385 - 3229.989: 99.4403% ( 8) 00:39:50.473 3229.989 - 3245.592: 99.4478% ( 7) 00:39:50.473 3245.592 - 3261.196: 99.4554% ( 7) 00:39:50.473 3261.196 - 3276.800: 99.4619% ( 6) 00:39:50.473 3276.800 - 3292.404: 99.4684% ( 6) 00:39:50.473 3292.404 - 3308.008: 99.4748% ( 6) 00:39:50.473 3308.008 - 3323.611: 99.4813% ( 6) 00:39:50.473 3323.611 - 3339.215: 99.4889% ( 7) 00:39:50.473 3339.215 - 3354.819: 99.4954% ( 6) 00:39:50.473 3354.819 - 3370.423: 99.5008% ( 5) 00:39:50.473 3370.423 - 3386.027: 99.5040% ( 3) 00:39:50.473 3386.027 - 3401.630: 99.5083% ( 4) 00:39:50.473 3401.630 - 3417.234: 99.5127% ( 4) 00:39:50.473 3417.234 - 3432.838: 99.5181% ( 5) 00:39:50.473 3432.838 - 3448.442: 99.5224% ( 4) 00:39:50.473 3448.442 - 3464.046: 99.5256% ( 3) 00:39:50.473 3464.046 - 3479.650: 99.5310% ( 5) 00:39:50.473 3479.650 - 3495.253: 99.5354% ( 4) 00:39:50.473 3495.253 - 3510.857: 99.5397% ( 4) 00:39:50.473 3510.857 - 3526.461: 99.5440% ( 4) 00:39:50.473 3526.461 - 3542.065: 99.5494% ( 5) 00:39:50.473 3542.065 - 3557.669: 99.5548% ( 5) 00:39:50.473 3557.669 - 3573.272: 99.5602% ( 5) 00:39:50.473 3573.272 - 3588.876: 99.5656% ( 
5) 00:39:50.473 3588.876 - 3604.480: 99.5699% ( 4) 00:39:50.473 3604.480 - 3620.084: 99.5743% ( 4) 00:39:50.473 3620.084 - 3635.688: 99.5818% ( 7) 00:39:50.473 3635.688 - 3651.291: 99.5872% ( 5) 00:39:50.473 3651.291 - 3666.895: 99.5926% ( 5) 00:39:50.473 3666.895 - 3682.499: 99.5980% ( 5) 00:39:50.473 3682.499 - 3698.103: 99.6034% ( 5) 00:39:50.473 3698.103 - 3713.707: 99.6067% ( 3) 00:39:50.473 3713.707 - 3729.310: 99.6132% ( 6) 00:39:50.473 3729.310 - 3744.914: 99.6186% ( 5) 00:39:50.473 3744.914 - 3760.518: 99.6240% ( 5) 00:39:50.473 3760.518 - 3776.122: 99.6304% ( 6) 00:39:50.473 3776.122 - 3791.726: 99.6358% ( 5) 00:39:50.473 3791.726 - 3807.330: 99.6413% ( 5) 00:39:50.473 3807.330 - 3822.933: 99.6467% ( 5) 00:39:50.473 3822.933 - 3838.537: 99.6521% ( 5) 00:39:50.473 3838.537 - 3854.141: 99.6575% ( 5) 00:39:50.473 3854.141 - 3869.745: 99.6629% ( 5) 00:39:50.473 3869.745 - 3885.349: 99.6693% ( 6) 00:39:50.473 3885.349 - 3900.952: 99.6747% ( 5) 00:39:50.473 3900.952 - 3916.556: 99.6791% ( 4) 00:39:50.473 3916.556 - 3932.160: 99.6845% ( 5) 00:39:50.473 3932.160 - 3947.764: 99.6899% ( 5) 00:39:50.473 3947.764 - 3963.368: 99.6942% ( 4) 00:39:50.473 3963.368 - 3978.971: 99.6996% ( 5) 00:39:50.473 3978.971 - 3994.575: 99.7050% ( 5) 00:39:50.473 3994.575 - 4025.783: 99.7169% ( 11) 00:39:50.473 4025.783 - 4056.990: 99.7288% ( 11) 00:39:50.473 4056.990 - 4088.198: 99.7385% ( 9) 00:39:50.473 4088.198 - 4119.406: 99.7493% ( 10) 00:39:50.473 4119.406 - 4150.613: 99.7601% ( 10) 00:39:50.473 4150.613 - 4181.821: 99.7731% ( 12) 00:39:50.473 4181.821 - 4213.029: 99.7817% ( 8) 00:39:50.473 4213.029 - 4244.236: 99.7893% ( 7) 00:39:50.473 4244.236 - 4275.444: 99.7979% ( 8) 00:39:50.473 4275.444 - 4306.651: 99.8033% ( 5) 00:39:50.473 4306.651 - 4337.859: 99.8098% ( 6) 00:39:50.473 4337.859 - 4369.067: 99.8174% ( 7) 00:39:50.473 4369.067 - 4400.274: 99.8217% ( 4) 00:39:50.473 4400.274 - 4431.482: 99.8260% ( 4) 00:39:50.473 4431.482 - 4462.690: 99.8293% ( 3) 00:39:50.473 4462.690 - 4493.897: 99.8347% ( 5) 00:39:50.473 4493.897 - 4525.105: 99.8401% ( 5) 00:39:50.473 4525.105 - 4556.312: 99.8466% ( 6) 00:39:50.473 4556.312 - 4587.520: 99.8520% ( 5) 00:39:50.473 4587.520 - 4618.728: 99.8574% ( 5) 00:39:50.473 4618.728 - 4649.935: 99.8649% ( 7) 00:39:50.473 4649.935 - 4681.143: 99.8714% ( 6) 00:39:50.473 4681.143 - 4712.350: 99.8747% ( 3) 00:39:50.473 4712.350 - 4743.558: 99.8801% ( 5) 00:39:50.473 4743.558 - 4774.766: 99.8844% ( 4) 00:39:50.473 4774.766 - 4805.973: 99.8876% ( 3) 00:39:50.473 4805.973 - 4837.181: 99.8909% ( 3) 00:39:50.473 4837.181 - 4868.389: 99.8952% ( 4) 00:39:50.473 4868.389 - 4899.596: 99.8995% ( 4) 00:39:50.473 4899.596 - 4930.804: 99.9027% ( 3) 00:39:50.473 4930.804 - 4962.011: 99.9082% ( 5) 00:39:50.473 4962.011 - 4993.219: 99.9125% ( 4) 00:39:50.473 4993.219 - 5024.427: 99.9157% ( 3) 00:39:50.473 5024.427 - 5055.634: 99.9190% ( 3) 00:39:50.473 5055.634 - 5086.842: 99.9211% ( 2) 00:39:50.473 5086.842 - 5118.050: 99.9222% ( 1) 00:39:50.473 5118.050 - 5149.257: 99.9244% ( 2) 00:39:50.473 5149.257 - 5180.465: 99.9276% ( 3) 00:39:50.473 5180.465 - 5211.672: 99.9319% ( 4) 00:39:50.473 5211.672 - 5242.880: 99.9362% ( 4) 00:39:50.473 5242.880 - 5274.088: 99.9406% ( 4) 00:39:50.473 5274.088 - 5305.295: 99.9449% ( 4) 00:39:50.473 5305.295 - 5336.503: 99.9492% ( 4) 00:39:50.473 5336.503 - 5367.710: 99.9525% ( 3) 00:39:50.473 5367.710 - 5398.918: 99.9568% ( 4) 00:39:50.473 5398.918 - 5430.126: 99.9622% ( 5) 00:39:50.473 5430.126 - 5461.333: 99.9654% ( 3) 00:39:50.473 5461.333 - 5492.541: 99.9687% 
( 3) 00:39:50.473 5492.541 - 5523.749: 99.9708% ( 2) 00:39:50.473 5523.749 - 5554.956: 99.9719% ( 1) 00:39:50.473 5554.956 - 5586.164: 99.9741% ( 2) 00:39:50.473 5586.164 - 5617.371: 99.9762% ( 2) 00:39:50.473 5617.371 - 5648.579: 99.9773% ( 1) 00:39:50.473 5648.579 - 5679.787: 99.9795% ( 2) 00:39:50.473 5679.787 - 5710.994: 99.9816% ( 2) 00:39:50.473 5710.994 - 5742.202: 99.9827% ( 1) 00:39:50.473 5742.202 - 5773.410: 99.9849% ( 2) 00:39:50.473 5773.410 - 5804.617: 99.9870% ( 2) 00:39:50.473 5804.617 - 5835.825: 99.9881% ( 1) 00:39:50.473 5835.825 - 5867.032: 99.9892% ( 1) 00:39:50.473 5867.032 - 5898.240: 99.9914% ( 2) 00:39:50.473 5898.240 - 5929.448: 99.9935% ( 2) 00:39:50.473 5929.448 - 5960.655: 99.9946% ( 1) 00:39:50.473 5960.655 - 5991.863: 99.9968% ( 2) 00:39:50.473 5991.863 - 6023.070: 99.9978% ( 1) 00:39:50.473 6023.070 - 6054.278: 99.9989% ( 1) 00:39:50.473 6085.486 - 6116.693: 100.0000% ( 1) 00:39:50.473 00:39:50.473 07:50:24 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:39:51.849 Initializing NVMe Controllers 00:39:51.849 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:39:51.849 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:39:51.849 Initialization complete. Launching workers. 00:39:51.849 ======================================================== 00:39:51.849 Latency(us) 00:39:51.849 Device Information : IOPS MiB/s Average min max 00:39:51.849 PCIE (0000:00:10.0) NSID 1 from core 0: 84289.60 987.77 1518.02 498.72 11039.12 00:39:51.849 ======================================================== 00:39:51.849 Total : 84289.60 987.77 1518.02 498.72 11039.12 00:39:51.849 00:39:51.849 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:39:51.849 ================================================================================= 00:39:51.849 1.00000% : 951.832us 00:39:51.849 10.00000% : 1224.899us 00:39:51.849 25.00000% : 1357.531us 00:39:51.849 50.00000% : 1482.362us 00:39:51.849 75.00000% : 1630.598us 00:39:51.849 90.00000% : 1825.646us 00:39:51.849 95.00000% : 1973.882us 00:39:51.849 98.00000% : 2168.930us 00:39:51.850 99.00000% : 2371.779us 00:39:51.850 99.50000% : 2605.836us 00:39:51.850 99.90000% : 9237.455us 00:39:51.850 99.99000% : 10735.421us 00:39:51.850 99.99900% : 11047.497us 00:39:51.850 99.99990% : 11047.497us 00:39:51.850 99.99999% : 11047.497us 00:39:51.850 00:39:51.850 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:39:51.850 ============================================================================== 00:39:51.850 Range in us Cumulative IO count 00:39:51.850 497.371 - 499.322: 0.0012% ( 1) 00:39:51.850 542.232 - 546.133: 0.0024% ( 1) 00:39:51.850 553.935 - 557.836: 0.0036% ( 1) 00:39:51.850 585.143 - 589.044: 0.0047% ( 1) 00:39:51.850 589.044 - 592.945: 0.0059% ( 1) 00:39:51.850 600.747 - 604.648: 0.0071% ( 1) 00:39:51.850 604.648 - 608.549: 0.0107% ( 3) 00:39:51.850 608.549 - 612.450: 0.0119% ( 1) 00:39:51.850 620.251 - 624.152: 0.0166% ( 4) 00:39:51.850 624.152 - 628.053: 0.0178% ( 1) 00:39:51.850 628.053 - 631.954: 0.0190% ( 1) 00:39:51.850 631.954 - 635.855: 0.0202% ( 1) 00:39:51.850 635.855 - 639.756: 0.0213% ( 1) 00:39:51.850 639.756 - 643.657: 0.0225% ( 1) 00:39:51.850 643.657 - 647.558: 0.0237% ( 1) 00:39:51.850 651.459 - 655.360: 0.0249% ( 1) 00:39:51.850 655.360 - 659.261: 0.0261% ( 1) 00:39:51.850 659.261 - 663.162: 0.0284% ( 2) 00:39:51.850 667.063 - 670.964: 0.0308% ( 2) 00:39:51.850 670.964 - 674.865: 0.0320% ( 1) 
00:39:51.850 678.766 - 682.667: 0.0356% ( 3) 00:39:51.850 682.667 - 686.568: 0.0391% ( 3) 00:39:51.850 686.568 - 690.469: 0.0403% ( 1) 00:39:51.850 690.469 - 694.370: 0.0427% ( 2) 00:39:51.850 694.370 - 698.270: 0.0522% ( 8) 00:39:51.850 698.270 - 702.171: 0.0557% ( 3) 00:39:51.850 702.171 - 706.072: 0.0605% ( 4) 00:39:51.850 706.072 - 709.973: 0.0616% ( 1) 00:39:51.850 709.973 - 713.874: 0.0640% ( 2) 00:39:51.850 713.874 - 717.775: 0.0652% ( 1) 00:39:51.850 717.775 - 721.676: 0.0688% ( 3) 00:39:51.850 721.676 - 725.577: 0.0723% ( 3) 00:39:51.850 725.577 - 729.478: 0.0759% ( 3) 00:39:51.850 729.478 - 733.379: 0.0853% ( 8) 00:39:51.850 733.379 - 737.280: 0.0901% ( 4) 00:39:51.850 737.280 - 741.181: 0.0936% ( 3) 00:39:51.850 741.181 - 745.082: 0.0960% ( 2) 00:39:51.850 752.884 - 756.785: 0.0972% ( 1) 00:39:51.850 756.785 - 760.686: 0.1008% ( 3) 00:39:51.850 760.686 - 764.587: 0.1043% ( 3) 00:39:51.850 764.587 - 768.488: 0.1067% ( 2) 00:39:51.850 768.488 - 772.389: 0.1150% ( 7) 00:39:51.850 772.389 - 776.290: 0.1162% ( 1) 00:39:51.850 776.290 - 780.190: 0.1233% ( 6) 00:39:51.850 780.190 - 784.091: 0.1280% ( 4) 00:39:51.850 784.091 - 787.992: 0.1316% ( 3) 00:39:51.850 787.992 - 791.893: 0.1351% ( 3) 00:39:51.850 791.893 - 795.794: 0.1399% ( 4) 00:39:51.850 795.794 - 799.695: 0.1446% ( 4) 00:39:51.850 799.695 - 803.596: 0.1494% ( 4) 00:39:51.850 803.596 - 807.497: 0.1553% ( 5) 00:39:51.850 807.497 - 811.398: 0.1612% ( 5) 00:39:51.850 811.398 - 815.299: 0.1660% ( 4) 00:39:51.850 815.299 - 819.200: 0.1719% ( 5) 00:39:51.850 819.200 - 823.101: 0.1837% ( 10) 00:39:51.850 823.101 - 827.002: 0.1908% ( 6) 00:39:51.850 827.002 - 830.903: 0.1968% ( 5) 00:39:51.850 830.903 - 834.804: 0.2051% ( 7) 00:39:51.850 834.804 - 838.705: 0.2169% ( 10) 00:39:51.850 838.705 - 842.606: 0.2276% ( 9) 00:39:51.850 842.606 - 846.507: 0.2347% ( 6) 00:39:51.850 846.507 - 850.408: 0.2454% ( 9) 00:39:51.850 850.408 - 854.309: 0.2643% ( 16) 00:39:51.850 854.309 - 858.210: 0.2904% ( 22) 00:39:51.850 858.210 - 862.110: 0.3046% ( 12) 00:39:51.850 862.110 - 866.011: 0.3461% ( 35) 00:39:51.850 866.011 - 869.912: 0.3651% ( 16) 00:39:51.850 869.912 - 873.813: 0.3793% ( 12) 00:39:51.850 873.813 - 877.714: 0.4149% ( 30) 00:39:51.850 877.714 - 881.615: 0.4386% ( 20) 00:39:51.850 881.615 - 885.516: 0.4564% ( 15) 00:39:51.850 885.516 - 889.417: 0.4718% ( 13) 00:39:51.850 889.417 - 893.318: 0.5038% ( 27) 00:39:51.850 893.318 - 897.219: 0.5216% ( 15) 00:39:51.850 897.219 - 901.120: 0.5370% ( 13) 00:39:51.850 901.120 - 905.021: 0.5571% ( 17) 00:39:51.850 905.021 - 908.922: 0.5702% ( 11) 00:39:51.850 908.922 - 912.823: 0.5986% ( 24) 00:39:51.850 912.823 - 916.724: 0.6306% ( 27) 00:39:51.850 916.724 - 920.625: 0.6531% ( 19) 00:39:51.850 920.625 - 924.526: 0.6970% ( 37) 00:39:51.850 924.526 - 928.427: 0.7326% ( 30) 00:39:51.850 928.427 - 932.328: 0.7681% ( 30) 00:39:51.850 932.328 - 936.229: 0.8404% ( 61) 00:39:51.850 936.229 - 940.130: 0.8736% ( 28) 00:39:51.850 940.130 - 944.030: 0.9495% ( 64) 00:39:51.850 944.030 - 947.931: 0.9827% ( 28) 00:39:51.850 947.931 - 951.832: 1.0230% ( 34) 00:39:51.850 951.832 - 955.733: 1.0822% ( 50) 00:39:51.850 955.733 - 959.634: 1.1166% ( 29) 00:39:51.850 959.634 - 963.535: 1.1628% ( 39) 00:39:51.850 963.535 - 967.436: 1.2020% ( 33) 00:39:51.850 967.436 - 971.337: 1.2932% ( 77) 00:39:51.850 971.337 - 975.238: 1.3359% ( 36) 00:39:51.850 975.238 - 979.139: 1.3845% ( 41) 00:39:51.850 979.139 - 983.040: 1.4307% ( 39) 00:39:51.850 983.040 - 986.941: 1.4924% ( 52) 00:39:51.850 986.941 - 990.842: 1.5303% ( 32) 
00:39:51.850 990.842 - 994.743: 1.6370% ( 90) 00:39:51.850 994.743 - 998.644: 1.6820% ( 38) 00:39:51.850 998.644 - 1006.446: 1.7911% ( 92) 00:39:51.850 1006.446 - 1014.248: 1.8942% ( 87) 00:39:51.850 1014.248 - 1022.050: 2.0045% ( 93) 00:39:51.850 1022.050 - 1029.851: 2.1443% ( 118) 00:39:51.850 1029.851 - 1037.653: 2.2878% ( 121) 00:39:51.850 1037.653 - 1045.455: 2.4075% ( 101) 00:39:51.850 1045.455 - 1053.257: 2.5426% ( 114) 00:39:51.850 1053.257 - 1061.059: 2.6991% ( 132) 00:39:51.850 1061.059 - 1068.861: 2.8544% ( 131) 00:39:51.850 1068.861 - 1076.663: 2.9895% ( 114) 00:39:51.850 1076.663 - 1084.465: 3.1815% ( 162) 00:39:51.850 1084.465 - 1092.267: 3.3759% ( 164) 00:39:51.850 1092.267 - 1100.069: 3.6047% ( 193) 00:39:51.850 1100.069 - 1107.870: 3.8193% ( 181) 00:39:51.850 1107.870 - 1115.672: 4.0670% ( 209) 00:39:51.850 1115.672 - 1123.474: 4.3136% ( 208) 00:39:51.850 1123.474 - 1131.276: 4.5933% ( 236) 00:39:51.850 1131.276 - 1139.078: 4.8932% ( 253) 00:39:51.850 1139.078 - 1146.880: 5.1967% ( 256) 00:39:51.850 1146.880 - 1154.682: 5.5309% ( 282) 00:39:51.850 1154.682 - 1162.484: 5.9482% ( 352) 00:39:51.850 1162.484 - 1170.286: 6.3832% ( 367) 00:39:51.850 1170.286 - 1178.088: 6.7815% ( 336) 00:39:51.850 1178.088 - 1185.890: 7.2746% ( 416) 00:39:51.850 1185.890 - 1193.691: 7.7630% ( 412) 00:39:51.850 1193.691 - 1201.493: 8.2620% ( 421) 00:39:51.850 1201.493 - 1209.295: 8.8156% ( 467) 00:39:51.850 1209.295 - 1217.097: 9.4332% ( 521) 00:39:51.850 1217.097 - 1224.899: 10.0330% ( 506) 00:39:51.850 1224.899 - 1232.701: 10.7454% ( 601) 00:39:51.850 1232.701 - 1240.503: 11.3345% ( 497) 00:39:51.850 1240.503 - 1248.305: 11.9568% ( 525) 00:39:51.850 1248.305 - 1256.107: 12.6467% ( 582) 00:39:51.850 1256.107 - 1263.909: 13.5322% ( 747) 00:39:51.850 1263.909 - 1271.710: 14.3750% ( 711) 00:39:51.850 1271.710 - 1279.512: 15.2320% ( 723) 00:39:51.850 1279.512 - 1287.314: 16.1246% ( 753) 00:39:51.850 1287.314 - 1295.116: 17.0729% ( 800) 00:39:51.850 1295.116 - 1302.918: 18.0697% ( 841) 00:39:51.850 1302.918 - 1310.720: 19.0026% ( 787) 00:39:51.850 1310.720 - 1318.522: 20.0754% ( 905) 00:39:51.850 1318.522 - 1326.324: 21.0296% ( 805) 00:39:51.850 1326.324 - 1334.126: 22.1616% ( 955) 00:39:51.850 1334.126 - 1341.928: 23.1882% ( 866) 00:39:51.850 1341.928 - 1349.730: 24.3119% ( 948) 00:39:51.850 1349.730 - 1357.531: 25.5447% ( 1040) 00:39:51.850 1357.531 - 1365.333: 26.7834% ( 1045) 00:39:51.850 1365.333 - 1373.135: 28.1501% ( 1153) 00:39:51.850 1373.135 - 1380.937: 29.7172% ( 1322) 00:39:51.850 1380.937 - 1388.739: 31.1669% ( 1223) 00:39:51.850 1388.739 - 1396.541: 32.7517% ( 1337) 00:39:51.850 1396.541 - 1404.343: 34.4112% ( 1400) 00:39:51.850 1404.343 - 1412.145: 36.0364% ( 1371) 00:39:51.850 1412.145 - 1419.947: 37.6876% ( 1393) 00:39:51.850 1419.947 - 1427.749: 39.2238% ( 1296) 00:39:51.850 1427.749 - 1435.550: 40.8347% ( 1359) 00:39:51.850 1435.550 - 1443.352: 42.3722% ( 1297) 00:39:51.850 1443.352 - 1451.154: 43.9191% ( 1305) 00:39:51.850 1451.154 - 1458.956: 45.5489% ( 1375) 00:39:51.850 1458.956 - 1466.758: 47.2464% ( 1432) 00:39:51.850 1466.758 - 1474.560: 49.0114% ( 1489) 00:39:51.850 1474.560 - 1482.362: 50.8475% ( 1549) 00:39:51.850 1482.362 - 1490.164: 52.6374% ( 1510) 00:39:51.850 1490.164 - 1497.966: 54.3171% ( 1417) 00:39:51.850 1497.966 - 1505.768: 55.8415% ( 1286) 00:39:51.850 1505.768 - 1513.570: 57.2936% ( 1225) 00:39:51.850 1513.570 - 1521.371: 59.1178% ( 1539) 00:39:51.850 1521.371 - 1529.173: 60.6695% ( 1309) 00:39:51.850 1529.173 - 1536.975: 62.0753% ( 1186) 00:39:51.850 
1536.975 - 1544.777: 63.4243% ( 1138) 00:39:51.850 1544.777 - 1552.579: 64.7400% ( 1110) 00:39:51.850 1552.579 - 1560.381: 66.0582% ( 1112) 00:39:51.850 1560.381 - 1568.183: 67.3467% ( 1087) 00:39:51.850 1568.183 - 1575.985: 68.3673% ( 861) 00:39:51.850 1575.985 - 1583.787: 69.4080% ( 878) 00:39:51.850 1583.787 - 1591.589: 70.3634% ( 806) 00:39:51.850 1591.589 - 1599.390: 71.4492% ( 916) 00:39:51.850 1599.390 - 1607.192: 72.4580% ( 851) 00:39:51.850 1607.192 - 1614.994: 73.3980% ( 793) 00:39:51.850 1614.994 - 1622.796: 74.2870% ( 750) 00:39:51.850 1622.796 - 1630.598: 75.1298% ( 711) 00:39:51.850 1630.598 - 1638.400: 75.8493% ( 607) 00:39:51.850 1638.400 - 1646.202: 76.6257% ( 655) 00:39:51.850 1646.202 - 1654.004: 77.4116% ( 663) 00:39:51.850 1654.004 - 1661.806: 78.1762% ( 645) 00:39:51.850 1661.806 - 1669.608: 78.8637% ( 580) 00:39:51.851 1669.608 - 1677.410: 79.6164% ( 635) 00:39:51.851 1677.410 - 1685.211: 80.4035% ( 664) 00:39:51.851 1685.211 - 1693.013: 81.2273% ( 695) 00:39:51.851 1693.013 - 1700.815: 82.0097% ( 660) 00:39:51.851 1700.815 - 1708.617: 82.8015% ( 668) 00:39:51.851 1708.617 - 1716.419: 83.4760% ( 569) 00:39:51.851 1716.419 - 1724.221: 84.1125% ( 537) 00:39:51.851 1724.221 - 1732.023: 84.7894% ( 571) 00:39:51.851 1732.023 - 1739.825: 85.3263% ( 453) 00:39:51.851 1739.825 - 1747.627: 85.8621% ( 452) 00:39:51.851 1747.627 - 1755.429: 86.4678% ( 511) 00:39:51.851 1755.429 - 1763.230: 86.9017% ( 366) 00:39:51.851 1763.230 - 1771.032: 87.4375% ( 452) 00:39:51.851 1771.032 - 1778.834: 87.8666% ( 362) 00:39:51.851 1778.834 - 1786.636: 88.2743% ( 344) 00:39:51.851 1786.636 - 1794.438: 88.6904% ( 351) 00:39:51.851 1794.438 - 1802.240: 89.1764% ( 410) 00:39:51.851 1802.240 - 1810.042: 89.5771% ( 338) 00:39:51.851 1810.042 - 1817.844: 89.9943% ( 352) 00:39:51.851 1817.844 - 1825.646: 90.3665% ( 314) 00:39:51.851 1825.646 - 1833.448: 90.7541% ( 327) 00:39:51.851 1833.448 - 1841.250: 91.1429% ( 328) 00:39:51.851 1841.250 - 1849.051: 91.5104% ( 310) 00:39:51.851 1849.051 - 1856.853: 91.8506% ( 287) 00:39:51.851 1856.853 - 1864.655: 92.0675% ( 183) 00:39:51.851 1864.655 - 1872.457: 92.3698% ( 255) 00:39:51.851 1872.457 - 1880.259: 92.7159% ( 292) 00:39:51.851 1880.259 - 1888.061: 92.9542% ( 201) 00:39:51.851 1888.061 - 1895.863: 93.2268% ( 230) 00:39:51.851 1895.863 - 1903.665: 93.4639% ( 200) 00:39:51.851 1903.665 - 1911.467: 93.6642% ( 169) 00:39:51.851 1911.467 - 1919.269: 93.8278% ( 138) 00:39:51.851 1919.269 - 1927.070: 94.0305% ( 171) 00:39:51.851 1927.070 - 1934.872: 94.1917% ( 136) 00:39:51.851 1934.872 - 1942.674: 94.3719% ( 152) 00:39:51.851 1942.674 - 1950.476: 94.5627% ( 161) 00:39:51.851 1950.476 - 1958.278: 94.8247% ( 221) 00:39:51.851 1958.278 - 1966.080: 94.9942% ( 143) 00:39:51.851 1966.080 - 1973.882: 95.1756% ( 153) 00:39:51.851 1973.882 - 1981.684: 95.3806% ( 173) 00:39:51.851 1981.684 - 1989.486: 95.5904% ( 177) 00:39:51.851 1989.486 - 1997.288: 95.8014% ( 178) 00:39:51.851 1997.288 - 2012.891: 96.1381% ( 284) 00:39:51.851 2012.891 - 2028.495: 96.4427% ( 257) 00:39:51.851 2028.495 - 2044.099: 96.6750% ( 196) 00:39:51.851 2044.099 - 2059.703: 96.9299% ( 215) 00:39:51.851 2059.703 - 2075.307: 97.1575% ( 192) 00:39:51.851 2075.307 - 2090.910: 97.3068% ( 126) 00:39:51.851 2090.910 - 2106.514: 97.5190% ( 179) 00:39:51.851 2106.514 - 2122.118: 97.6767% ( 133) 00:39:51.851 2122.118 - 2137.722: 97.8035% ( 107) 00:39:51.851 2137.722 - 2153.326: 97.9078% ( 88) 00:39:51.851 2153.326 - 2168.930: 98.0370% ( 109) 00:39:51.851 2168.930 - 2184.533: 98.1212% ( 71) 00:39:51.851 
2184.533 - 2200.137: 98.2077% ( 73) 00:39:51.851 2200.137 - 2215.741: 98.3037% ( 81) 00:39:51.851 2215.741 - 2231.345: 98.3915% ( 74) 00:39:51.851 2231.345 - 2246.949: 98.4697% ( 66) 00:39:51.851 2246.949 - 2262.552: 98.5491% ( 67) 00:39:51.851 2262.552 - 2278.156: 98.6416% ( 78) 00:39:51.851 2278.156 - 2293.760: 98.7151% ( 62) 00:39:51.851 2293.760 - 2309.364: 98.7814% ( 56) 00:39:51.851 2309.364 - 2324.968: 98.8324% ( 43) 00:39:51.851 2324.968 - 2340.571: 98.8929% ( 51) 00:39:51.851 2340.571 - 2356.175: 98.9604% ( 57) 00:39:51.851 2356.175 - 2371.779: 99.0268% ( 56) 00:39:51.851 2371.779 - 2387.383: 99.0932% ( 56) 00:39:51.851 2387.383 - 2402.987: 99.1655% ( 61) 00:39:51.851 2402.987 - 2418.590: 99.2022% ( 31) 00:39:51.851 2418.590 - 2434.194: 99.2366% ( 29) 00:39:51.851 2434.194 - 2449.798: 99.2888% ( 44) 00:39:51.851 2449.798 - 2465.402: 99.3149% ( 22) 00:39:51.851 2465.402 - 2481.006: 99.3350% ( 17) 00:39:51.851 2481.006 - 2496.610: 99.3658% ( 26) 00:39:51.851 2496.610 - 2512.213: 99.3860% ( 17) 00:39:51.851 2512.213 - 2527.817: 99.4132% ( 23) 00:39:51.851 2527.817 - 2543.421: 99.4310% ( 15) 00:39:51.851 2543.421 - 2559.025: 99.4441% ( 11) 00:39:51.851 2559.025 - 2574.629: 99.4583% ( 12) 00:39:51.851 2574.629 - 2590.232: 99.4796% ( 18) 00:39:51.851 2590.232 - 2605.836: 99.5081% ( 24) 00:39:51.851 2605.836 - 2621.440: 99.5294% ( 18) 00:39:51.851 2621.440 - 2637.044: 99.5590% ( 25) 00:39:51.851 2637.044 - 2652.648: 99.5780% ( 16) 00:39:51.851 2652.648 - 2668.251: 99.5982% ( 17) 00:39:51.851 2668.251 - 2683.855: 99.6349% ( 31) 00:39:51.851 2683.855 - 2699.459: 99.6444% ( 8) 00:39:51.851 2699.459 - 2715.063: 99.6622% ( 15) 00:39:51.851 2715.063 - 2730.667: 99.6752% ( 11) 00:39:51.851 2730.667 - 2746.270: 99.6847% ( 8) 00:39:51.851 2746.270 - 2761.874: 99.6930% ( 7) 00:39:51.851 2761.874 - 2777.478: 99.7001% ( 6) 00:39:51.851 2777.478 - 2793.082: 99.7048% ( 4) 00:39:51.851 2793.082 - 2808.686: 99.7120% ( 6) 00:39:51.851 2808.686 - 2824.290: 99.7179% ( 5) 00:39:51.851 2824.290 - 2839.893: 99.7238% ( 5) 00:39:51.851 2839.893 - 2855.497: 99.7262% ( 2) 00:39:51.851 2855.497 - 2871.101: 99.7297% ( 3) 00:39:51.851 2871.101 - 2886.705: 99.7333% ( 3) 00:39:51.851 2886.705 - 2902.309: 99.7357% ( 2) 00:39:51.851 2902.309 - 2917.912: 99.7368% ( 1) 00:39:51.851 2917.912 - 2933.516: 99.7380% ( 1) 00:39:51.851 2933.516 - 2949.120: 99.7392% ( 1) 00:39:51.851 2949.120 - 2964.724: 99.7404% ( 1) 00:39:51.851 2964.724 - 2980.328: 99.7416% ( 1) 00:39:51.851 2980.328 - 2995.931: 99.7428% ( 1) 00:39:51.851 2995.931 - 3011.535: 99.7451% ( 2) 00:39:51.851 3011.535 - 3027.139: 99.7475% ( 2) 00:39:51.851 3027.139 - 3042.743: 99.7546% ( 6) 00:39:51.851 3042.743 - 3058.347: 99.7629% ( 7) 00:39:51.851 3058.347 - 3073.950: 99.7665% ( 3) 00:39:51.851 3073.950 - 3089.554: 99.7700% ( 3) 00:39:51.851 3089.554 - 3105.158: 99.7712% ( 1) 00:39:51.851 3105.158 - 3120.762: 99.7724% ( 1) 00:39:51.851 3120.762 - 3136.366: 99.7736% ( 1) 00:39:51.851 3136.366 - 3151.970: 99.7760% ( 2) 00:39:51.851 3151.970 - 3167.573: 99.7783% ( 2) 00:39:51.851 3167.573 - 3183.177: 99.7807% ( 2) 00:39:51.851 3198.781 - 3214.385: 99.7819% ( 1) 00:39:51.851 3214.385 - 3229.989: 99.7831% ( 1) 00:39:51.851 3245.592 - 3261.196: 99.7843% ( 1) 00:39:51.851 3261.196 - 3276.800: 99.7854% ( 1) 00:39:51.851 3276.800 - 3292.404: 99.7866% ( 1) 00:39:51.851 3292.404 - 3308.008: 99.7878% ( 1) 00:39:51.851 3308.008 - 3323.611: 99.7890% ( 1) 00:39:51.851 3323.611 - 3339.215: 99.7902% ( 1) 00:39:51.851 3339.215 - 3354.819: 99.7926% ( 2) 00:39:51.851 3354.819 - 
3370.423: 99.7937% ( 1) 00:39:51.851 3386.027 - 3401.630: 99.7949% ( 1) 00:39:51.851 3417.234 - 3432.838: 99.7961% ( 1) 00:39:51.851 3448.442 - 3464.046: 99.7973% ( 1) 00:39:51.851 3573.272 - 3588.876: 99.7985% ( 1) 00:39:51.851 3682.499 - 3698.103: 99.7997% ( 1) 00:39:51.851 3698.103 - 3713.707: 99.8020% ( 2) 00:39:51.851 3900.952 - 3916.556: 99.8044% ( 2) 00:39:51.851 3963.368 - 3978.971: 99.8056% ( 1) 00:39:51.851 4056.990 - 4088.198: 99.8068% ( 1) 00:39:51.851 4119.406 - 4150.613: 99.8080% ( 1) 00:39:51.851 4150.613 - 4181.821: 99.8092% ( 1) 00:39:51.851 4181.821 - 4213.029: 99.8103% ( 1) 00:39:51.851 4275.444 - 4306.651: 99.8115% ( 1) 00:39:51.851 4431.482 - 4462.690: 99.8127% ( 1) 00:39:51.851 4493.897 - 4525.105: 99.8151% ( 2) 00:39:51.851 4556.312 - 4587.520: 99.8175% ( 2) 00:39:51.851 4899.596 - 4930.804: 99.8186% ( 1) 00:39:51.851 4930.804 - 4962.011: 99.8198% ( 1) 00:39:51.851 5086.842 - 5118.050: 99.8210% ( 1) 00:39:51.851 5118.050 - 5149.257: 99.8222% ( 1) 00:39:51.851 5274.088 - 5305.295: 99.8246% ( 2) 00:39:51.851 5305.295 - 5336.503: 99.8258% ( 1) 00:39:51.851 5336.503 - 5367.710: 99.8269% ( 1) 00:39:51.851 5461.333 - 5492.541: 99.8293% ( 2) 00:39:51.851 5523.749 - 5554.956: 99.8305% ( 1) 00:39:51.851 5554.956 - 5586.164: 99.8329% ( 2) 00:39:51.851 5617.371 - 5648.579: 99.8340% ( 1) 00:39:51.851 5648.579 - 5679.787: 99.8352% ( 1) 00:39:51.851 5679.787 - 5710.994: 99.8364% ( 1) 00:39:51.851 5773.410 - 5804.617: 99.8388% ( 2) 00:39:51.851 5835.825 - 5867.032: 99.8400% ( 1) 00:39:51.851 5960.655 - 5991.863: 99.8412% ( 1) 00:39:51.851 5991.863 - 6023.070: 99.8423% ( 1) 00:39:51.851 6054.278 - 6085.486: 99.8447% ( 2) 00:39:51.851 6085.486 - 6116.693: 99.8459% ( 1) 00:39:51.851 6179.109 - 6210.316: 99.8483% ( 2) 00:39:51.851 6366.354 - 6397.562: 99.8495% ( 1) 00:39:51.851 6397.562 - 6428.770: 99.8506% ( 1) 00:39:51.851 6491.185 - 6522.392: 99.8518% ( 1) 00:39:51.851 6647.223 - 6678.430: 99.8530% ( 1) 00:39:51.851 6678.430 - 6709.638: 99.8554% ( 2) 00:39:51.851 6709.638 - 6740.846: 99.8566% ( 1) 00:39:51.851 6803.261 - 6834.469: 99.8578% ( 1) 00:39:51.851 6896.884 - 6928.091: 99.8589% ( 1) 00:39:51.851 7052.922 - 7084.130: 99.8601% ( 1) 00:39:51.851 7146.545 - 7177.752: 99.8625% ( 2) 00:39:51.851 7427.413 - 7458.621: 99.8637% ( 1) 00:39:51.851 7614.659 - 7645.867: 99.8649% ( 1) 00:39:51.851 8238.811 - 8301.227: 99.8661% ( 1) 00:39:51.851 8363.642 - 8426.057: 99.8672% ( 1) 00:39:51.851 8426.057 - 8488.472: 99.8684% ( 1) 00:39:51.851 8488.472 - 8550.888: 99.8696% ( 1) 00:39:51.851 8550.888 - 8613.303: 99.8720% ( 2) 00:39:51.851 8613.303 - 8675.718: 99.8732% ( 1) 00:39:51.851 8738.133 - 8800.549: 99.8744% ( 1) 00:39:51.851 8800.549 - 8862.964: 99.8791% ( 4) 00:39:51.851 8862.964 - 8925.379: 99.8826% ( 3) 00:39:51.851 9050.210 - 9112.625: 99.8862% ( 3) 00:39:51.851 9112.625 - 9175.040: 99.8933% ( 6) 00:39:51.851 9175.040 - 9237.455: 99.9052% ( 10) 00:39:51.851 9237.455 - 9299.870: 99.9206% ( 13) 00:39:51.851 9299.870 - 9362.286: 99.9218% ( 1) 00:39:51.851 9362.286 - 9424.701: 99.9241% ( 2) 00:39:51.851 9424.701 - 9487.116: 99.9277% ( 3) 00:39:51.851 9487.116 - 9549.531: 99.9336% ( 5) 00:39:51.851 9549.531 - 9611.947: 99.9395% ( 5) 00:39:51.851 9611.947 - 9674.362: 99.9419% ( 2) 00:39:51.851 9674.362 - 9736.777: 99.9443% ( 2) 00:39:51.851 9861.608 - 9924.023: 99.9467% ( 2) 00:39:51.852 9924.023 - 9986.438: 99.9502% ( 3) 00:39:51.852 9986.438 - 10048.853: 99.9526% ( 2) 00:39:51.852 10048.853 - 10111.269: 99.9538% ( 1) 00:39:51.852 10236.099 - 10298.514: 99.9585% ( 4) 00:39:51.852 
10298.514 - 10360.930: 99.9597% ( 1) 00:39:51.852 10360.930 - 10423.345: 99.9633% ( 3) 00:39:51.852 10423.345 - 10485.760: 99.9668% ( 3) 00:39:51.852 10548.175 - 10610.590: 99.9727% ( 5) 00:39:51.852 10610.590 - 10673.006: 99.9822% ( 8) 00:39:51.852 10673.006 - 10735.421: 99.9905% ( 7) 00:39:51.852 10735.421 - 10797.836: 99.9917% ( 1) 00:39:51.852 10860.251 - 10922.667: 99.9929% ( 1) 00:39:51.852 10922.667 - 10985.082: 99.9976% ( 4) 00:39:51.852 10985.082 - 11047.497: 100.0000% ( 2) 00:39:51.852 00:39:51.852 07:50:25 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:39:51.852 00:39:51.852 real 0m2.652s 00:39:51.852 user 0m2.195s 00:39:51.852 sys 0m0.289s 00:39:51.852 07:50:25 nvme.nvme_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:51.852 07:50:25 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:39:51.852 ************************************ 00:39:51.852 END TEST nvme_perf 00:39:51.852 ************************************ 00:39:51.852 07:50:25 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:39:51.852 07:50:25 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:39:51.852 07:50:25 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:51.852 07:50:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:39:51.852 ************************************ 00:39:51.852 START TEST nvme_hello_world 00:39:51.852 ************************************ 00:39:51.852 07:50:25 nvme.nvme_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:39:52.111 Initializing NVMe Controllers 00:39:52.111 Attached to 0000:00:10.0 00:39:52.111 Namespace ID: 1 size: 5GB 00:39:52.111 Initialization complete. 00:39:52.111 INFO: using host memory buffer for IO 00:39:52.111 Hello world! 
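The "Hello world!" line above is not a banner: SPDK's hello_world example writes that string through the NVMe driver to the namespace and then prints the buffer it reads back, so seeing it confirms a full write/read round trip. The "using host memory buffer for IO" notice indicates it fell back to a regular host DMA buffer, consistent with the controller reporting no controller memory buffer in the identify dump earlier. A standalone sketch (command copied from this run; reading -i as the shared-memory instance ID is our assumption):

    # Attach to the first NVMe controller, write a greeting to namespace 1,
    # read it back, and print it.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0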
00:39:52.111 00:39:52.111 real 0m0.299s 00:39:52.111 user 0m0.069s 00:39:52.111 sys 0m0.155s 00:39:52.111 07:50:25 nvme.nvme_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:52.111 ************************************ 00:39:52.111 07:50:25 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:39:52.111 END TEST nvme_hello_world 00:39:52.111 ************************************ 00:39:52.111 07:50:25 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:39:52.111 07:50:25 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:52.111 07:50:25 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:52.111 07:50:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:39:52.111 ************************************ 00:39:52.111 START TEST nvme_sgl 00:39:52.111 ************************************ 00:39:52.111 07:50:25 nvme.nvme_sgl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:39:52.371 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:39:52.371 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:39:52.371 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:39:52.371 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:39:52.371 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:39:52.371 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:39:52.371 NVMe Readv/Writev Request test 00:39:52.371 Attached to 0000:00:10.0 00:39:52.371 0000:00:10.0: build_io_request_2 test passed 00:39:52.371 0000:00:10.0: build_io_request_4 test passed 00:39:52.371 0000:00:10.0: build_io_request_5 test passed 00:39:52.371 0000:00:10.0: build_io_request_6 test passed 00:39:52.371 0000:00:10.0: build_io_request_7 test passed 00:39:52.371 0000:00:10.0: build_io_request_10 test passed 00:39:52.371 Cleaning up... 00:39:52.371 00:39:52.371 real 0m0.317s 00:39:52.371 user 0m0.114s 00:39:52.371 sys 0m0.135s 00:39:52.371 07:50:26 nvme.nvme_sgl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:52.371 07:50:26 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:39:52.371 ************************************ 00:39:52.371 END TEST nvme_sgl 00:39:52.371 ************************************ 00:39:52.371 07:50:26 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:39:52.371 07:50:26 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:52.371 07:50:26 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:52.371 07:50:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:39:52.371 ************************************ 00:39:52.371 START TEST nvme_e2edp 00:39:52.371 ************************************ 00:39:52.371 07:50:26 nvme.nvme_e2edp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:39:52.940 NVMe Write/Read with End-to-End data protection test 00:39:52.940 Attached to 0000:00:10.0 00:39:52.940 Cleaning up... 
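The six "Invalid IO length parameter" lines from the SGL test are expected: build_io_request_0/1/3/8/9/11 deliberately construct malformed scatter-gather requests, and the test passes precisely when the driver rejects them, while requests 2/4/5/6/7/10 must complete ("test passed"). Every test in this log also goes through the same run_test wrapper that produces the START/END banners; a simplified, hypothetical sketch of its shape (the real implementation lives in test/common/autotest_common.sh and additionally manages xtrace and error propagation):

    # Hypothetical, stripped-down run_test: banner, timed run, banner.
    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"              # run the test binary with its arguments
        echo "END TEST $name"
    }
    run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl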
00:39:52.940 00:39:52.940 real 0m0.279s 00:39:52.940 user 0m0.091s 00:39:52.940 sys 0m0.136s 00:39:52.940 07:50:26 nvme.nvme_e2edp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:52.940 07:50:26 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:39:52.940 ************************************ 00:39:52.940 END TEST nvme_e2edp 00:39:52.940 ************************************ 00:39:52.940 07:50:26 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:39:52.940 07:50:26 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:52.940 07:50:26 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:52.940 07:50:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:39:52.940 ************************************ 00:39:52.940 START TEST nvme_reserve 00:39:52.940 ************************************ 00:39:52.940 07:50:26 nvme.nvme_reserve -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:39:53.201 ===================================================== 00:39:53.201 NVMe Controller at PCI bus 0, device 16, function 0 00:39:53.201 ===================================================== 00:39:53.201 Reservations: Not Supported 00:39:53.201 Reservation test passed 00:39:53.201 00:39:53.201 real 0m0.292s 00:39:53.201 user 0m0.078s 00:39:53.201 sys 0m0.153s 00:39:53.201 07:50:26 nvme.nvme_reserve -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:53.201 ************************************ 00:39:53.201 07:50:26 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:39:53.201 END TEST nvme_reserve 00:39:53.201 ************************************ 00:39:53.201 07:50:26 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:39:53.201 07:50:26 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:53.201 07:50:26 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:53.201 07:50:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:39:53.201 ************************************ 00:39:53.201 START TEST nvme_err_injection 00:39:53.201 ************************************ 00:39:53.201 07:50:26 nvme.nvme_err_injection -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:39:53.499 NVMe Error Injection test 00:39:53.499 Attached to 0000:00:10.0 00:39:53.499 0000:00:10.0: get features failed as expected 00:39:53.499 0000:00:10.0: get features successfully as expected 00:39:53.499 0000:00:10.0: read failed as expected 00:39:53.499 0000:00:10.0: read successfully as expected 00:39:53.499 Cleaning up... 
00:39:53.499 00:39:53.499 real 0m0.308s 00:39:53.499 user 0m0.090s 00:39:53.499 sys 0m0.143s 00:39:53.499 07:50:27 nvme.nvme_err_injection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:53.499 07:50:27 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:39:53.499 ************************************ 00:39:53.499 END TEST nvme_err_injection 00:39:53.499 ************************************ 00:39:53.499 07:50:27 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:39:53.499 07:50:27 nvme -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:39:53.500 07:50:27 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:53.500 07:50:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:39:53.500 ************************************ 00:39:53.500 START TEST nvme_overhead 00:39:53.500 ************************************ 00:39:53.500 07:50:27 nvme.nvme_overhead -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:39:54.908 Initializing NVMe Controllers 00:39:54.908 Attached to 0000:00:10.0 00:39:54.908 Initialization complete. Launching workers. 00:39:54.908 submit (in ns) avg, min, max = 13879.4, 10924.8, 83962.9 00:39:54.908 complete (in ns) avg, min, max = 9795.2, 7923.8, 108796.2 00:39:54.908 00:39:54.908 Submit histogram 00:39:54.908 ================ 00:39:54.908 Range in us Cumulative Count 00:39:54.908 10.910 - 10.971: 0.0142% ( 1) 00:39:54.908 11.032 - 11.093: 0.0284% ( 1) 00:39:54.908 11.154 - 11.215: 0.0426% ( 1) 00:39:54.908 11.337 - 11.398: 0.0567% ( 1) 00:39:54.908 11.398 - 11.459: 0.0709% ( 1) 00:39:54.908 11.459 - 11.520: 0.0851% ( 1) 00:39:54.908 11.520 - 11.581: 0.1418% ( 4) 00:39:54.908 11.581 - 11.642: 0.5106% ( 26) 00:39:54.908 11.642 - 11.703: 1.1773% ( 47) 00:39:54.908 11.703 - 11.764: 2.0993% ( 65) 00:39:54.908 11.764 - 11.825: 3.1631% ( 75) 00:39:54.908 11.825 - 11.886: 5.5035% ( 165) 00:39:54.908 11.886 - 11.947: 11.4468% ( 419) 00:39:54.909 11.947 - 12.008: 20.4823% ( 637) 00:39:54.909 12.008 - 12.069: 30.9787% ( 740) 00:39:54.909 12.069 - 12.130: 42.1560% ( 788) 00:39:54.909 12.130 - 12.190: 50.3688% ( 579) 00:39:54.909 12.190 - 12.251: 56.3830% ( 424) 00:39:54.909 12.251 - 12.312: 61.2340% ( 342) 00:39:54.909 12.312 - 12.373: 64.9787% ( 264) 00:39:54.909 12.373 - 12.434: 67.6454% ( 188) 00:39:54.909 12.434 - 12.495: 69.0071% ( 96) 00:39:54.909 12.495 - 12.556: 70.1702% ( 82) 00:39:54.909 12.556 - 12.617: 70.8085% ( 45) 00:39:54.909 12.617 - 12.678: 71.3617% ( 39) 00:39:54.909 12.678 - 12.739: 71.5603% ( 14) 00:39:54.909 12.739 - 12.800: 71.7305% ( 12) 00:39:54.909 12.800 - 12.861: 71.9149% ( 13) 00:39:54.909 12.861 - 12.922: 71.9716% ( 4) 00:39:54.909 12.922 - 12.983: 72.0142% ( 3) 00:39:54.909 12.983 - 13.044: 72.0284% ( 1) 00:39:54.909 13.044 - 13.105: 72.0426% ( 1) 00:39:54.909 13.349 - 13.410: 72.0567% ( 1) 00:39:54.909 13.531 - 13.592: 72.0709% ( 1) 00:39:54.909 13.592 - 13.653: 72.0851% ( 1) 00:39:54.909 13.653 - 13.714: 72.0993% ( 1) 00:39:54.909 13.897 - 13.958: 72.1135% ( 1) 00:39:54.909 13.958 - 14.019: 72.1277% ( 1) 00:39:54.909 14.141 - 14.202: 72.1418% ( 1) 00:39:54.909 14.263 - 14.324: 72.1560% ( 1) 00:39:54.909 14.446 - 14.507: 72.1702% ( 1) 00:39:54.909 14.690 - 14.750: 72.1844% ( 1) 00:39:54.909 14.750 - 14.811: 72.2128% ( 2) 00:39:54.909 14.933 - 14.994: 72.2270% ( 1) 00:39:54.909 14.994 - 15.055: 72.2411% ( 1) 00:39:54.909 15.116 - 15.177: 72.2553% ( 1) 
00:39:54.909 15.177 - 15.238: 72.2695% ( 1) 00:39:54.909 15.238 - 15.299: 72.2837% ( 1) 00:39:54.909 15.421 - 15.482: 72.2979% ( 1) 00:39:54.909 15.726 - 15.848: 72.3404% ( 3) 00:39:54.909 16.213 - 16.335: 72.3546% ( 1) 00:39:54.909 16.335 - 16.457: 72.3688% ( 1) 00:39:54.909 16.457 - 16.579: 72.3830% ( 1) 00:39:54.909 16.579 - 16.701: 72.4113% ( 2) 00:39:54.909 16.823 - 16.945: 72.4681% ( 4) 00:39:54.909 17.067 - 17.189: 72.4823% ( 1) 00:39:54.909 17.310 - 17.432: 72.4965% ( 1) 00:39:54.909 17.554 - 17.676: 72.5957% ( 7) 00:39:54.909 17.676 - 17.798: 73.7730% ( 83) 00:39:54.909 17.798 - 17.920: 80.0000% ( 439) 00:39:54.909 17.920 - 18.042: 87.8298% ( 552) 00:39:54.909 18.042 - 18.164: 92.9645% ( 362) 00:39:54.909 18.164 - 18.286: 96.0426% ( 217) 00:39:54.909 18.286 - 18.408: 97.7730% ( 122) 00:39:54.909 18.408 - 18.530: 98.3262% ( 39) 00:39:54.909 18.530 - 18.651: 98.5532% ( 16) 00:39:54.909 18.651 - 18.773: 98.6525% ( 7) 00:39:54.909 18.773 - 18.895: 98.7092% ( 4) 00:39:54.909 18.895 - 19.017: 98.7234% ( 1) 00:39:54.909 19.139 - 19.261: 98.7376% ( 1) 00:39:54.909 19.261 - 19.383: 98.7518% ( 1) 00:39:54.909 19.505 - 19.627: 98.7660% ( 1) 00:39:54.909 19.627 - 19.749: 98.7943% ( 2) 00:39:54.909 19.992 - 20.114: 98.8085% ( 1) 00:39:54.909 20.236 - 20.358: 98.8227% ( 1) 00:39:54.909 20.358 - 20.480: 98.8652% ( 3) 00:39:54.909 20.602 - 20.724: 98.8794% ( 1) 00:39:54.909 20.724 - 20.846: 98.8936% ( 1) 00:39:54.909 21.090 - 21.211: 98.9078% ( 1) 00:39:54.909 21.577 - 21.699: 98.9220% ( 1) 00:39:54.909 21.699 - 21.821: 98.9362% ( 1) 00:39:54.909 21.943 - 22.065: 98.9504% ( 1) 00:39:54.909 22.187 - 22.309: 98.9645% ( 1) 00:39:54.909 22.309 - 22.430: 98.9787% ( 1) 00:39:54.909 22.430 - 22.552: 98.9929% ( 1) 00:39:54.909 22.552 - 22.674: 99.0071% ( 1) 00:39:54.909 22.674 - 22.796: 99.0213% ( 1) 00:39:54.909 22.796 - 22.918: 99.0355% ( 1) 00:39:54.909 22.918 - 23.040: 99.0496% ( 1) 00:39:54.909 23.040 - 23.162: 99.0638% ( 1) 00:39:54.909 23.162 - 23.284: 99.0922% ( 2) 00:39:54.909 23.284 - 23.406: 99.1064% ( 1) 00:39:54.909 23.406 - 23.528: 99.1206% ( 1) 00:39:54.909 23.650 - 23.771: 99.1348% ( 1) 00:39:54.909 23.771 - 23.893: 99.1489% ( 1) 00:39:54.909 23.893 - 24.015: 99.1915% ( 3) 00:39:54.909 24.015 - 24.137: 99.3191% ( 9) 00:39:54.909 24.137 - 24.259: 99.3901% ( 5) 00:39:54.909 24.259 - 24.381: 99.4894% ( 7) 00:39:54.909 24.381 - 24.503: 99.5461% ( 4) 00:39:54.909 24.503 - 24.625: 99.5887% ( 3) 00:39:54.909 24.625 - 24.747: 99.6170% ( 2) 00:39:54.909 24.869 - 24.990: 99.6312% ( 1) 00:39:54.909 24.990 - 25.112: 99.6454% ( 1) 00:39:54.909 25.478 - 25.600: 99.6596% ( 1) 00:39:54.909 25.600 - 25.722: 99.6738% ( 1) 00:39:54.909 25.844 - 25.966: 99.6879% ( 1) 00:39:54.909 26.331 - 26.453: 99.7163% ( 2) 00:39:54.909 26.453 - 26.575: 99.7305% ( 1) 00:39:54.909 26.697 - 26.819: 99.7447% ( 1) 00:39:54.909 27.429 - 27.550: 99.7589% ( 1) 00:39:54.909 27.794 - 27.916: 99.7730% ( 1) 00:39:54.909 28.160 - 28.282: 99.7872% ( 1) 00:39:54.909 28.404 - 28.526: 99.8014% ( 1) 00:39:54.909 29.013 - 29.135: 99.8156% ( 1) 00:39:54.909 30.232 - 30.354: 99.8298% ( 1) 00:39:54.909 32.427 - 32.670: 99.8440% ( 1) 00:39:54.909 36.084 - 36.328: 99.8582% ( 1) 00:39:54.909 37.303 - 37.547: 99.8723% ( 1) 00:39:54.909 39.010 - 39.253: 99.8865% ( 1) 00:39:54.909 39.985 - 40.229: 99.9007% ( 1) 00:39:54.909 42.423 - 42.667: 99.9149% ( 1) 00:39:54.909 45.592 - 45.836: 99.9291% ( 1) 00:39:54.909 57.051 - 57.295: 99.9433% ( 1) 00:39:54.909 66.316 - 66.804: 99.9574% ( 1) 00:39:54.909 66.804 - 67.291: 99.9716% ( 1) 00:39:54.909 70.217 - 
70.705: 99.9858% ( 1) 00:39:54.909 83.870 - 84.358: 100.0000% ( 1) 00:39:54.909 00:39:54.909 Complete histogram 00:39:54.909 ================== 00:39:54.909 Range in us Cumulative Count 00:39:54.909 7.924 - 7.985: 0.2411% ( 17) 00:39:54.909 7.985 - 8.046: 0.7092% ( 33) 00:39:54.909 8.046 - 8.107: 1.0638% ( 25) 00:39:54.909 8.107 - 8.168: 2.3830% ( 93) 00:39:54.909 8.168 - 8.229: 13.4894% ( 783) 00:39:54.909 8.229 - 8.290: 31.3617% ( 1260) 00:39:54.909 8.290 - 8.350: 39.2482% ( 556) 00:39:54.909 8.350 - 8.411: 50.6525% ( 804) 00:39:54.909 8.411 - 8.472: 59.7305% ( 640) 00:39:54.909 8.472 - 8.533: 64.0000% ( 301) 00:39:54.909 8.533 - 8.594: 67.1773% ( 224) 00:39:54.909 8.594 - 8.655: 68.9929% ( 128) 00:39:54.909 8.655 - 8.716: 69.6454% ( 46) 00:39:54.909 8.716 - 8.777: 69.9007% ( 18) 00:39:54.909 8.777 - 8.838: 70.0993% ( 14) 00:39:54.909 8.838 - 8.899: 70.2128% ( 8) 00:39:54.909 8.899 - 8.960: 70.4965% ( 20) 00:39:54.909 8.960 - 9.021: 70.6241% ( 9) 00:39:54.909 9.021 - 9.082: 70.9504% ( 23) 00:39:54.909 9.082 - 9.143: 71.3475% ( 28) 00:39:54.909 9.143 - 9.204: 71.5035% ( 11) 00:39:54.909 9.204 - 9.265: 71.6170% ( 8) 00:39:54.909 9.265 - 9.326: 71.7021% ( 6) 00:39:54.909 9.326 - 9.387: 71.7305% ( 2) 00:39:54.909 9.448 - 9.509: 71.7447% ( 1) 00:39:54.909 9.630 - 9.691: 71.7589% ( 1) 00:39:54.909 9.935 - 9.996: 71.8156% ( 4) 00:39:54.909 9.996 - 10.057: 71.8440% ( 2) 00:39:54.909 10.057 - 10.118: 71.8582% ( 1) 00:39:54.909 10.118 - 10.179: 71.8723% ( 1) 00:39:54.909 10.362 - 10.423: 71.8865% ( 1) 00:39:54.909 10.423 - 10.484: 71.9291% ( 3) 00:39:54.909 10.484 - 10.545: 71.9433% ( 1) 00:39:54.909 10.606 - 10.667: 71.9716% ( 2) 00:39:54.909 10.910 - 10.971: 71.9858% ( 1) 00:39:54.909 10.971 - 11.032: 72.0000% ( 1) 00:39:54.909 11.520 - 11.581: 72.0142% ( 1) 00:39:54.909 11.642 - 11.703: 72.0284% ( 1) 00:39:54.909 11.764 - 11.825: 72.0426% ( 1) 00:39:54.909 12.069 - 12.130: 72.0709% ( 2) 00:39:54.909 12.190 - 12.251: 72.0851% ( 1) 00:39:54.909 12.312 - 12.373: 72.0993% ( 1) 00:39:54.909 12.373 - 12.434: 72.1277% ( 2) 00:39:54.909 12.495 - 12.556: 72.1560% ( 2) 00:39:54.909 12.556 - 12.617: 72.1702% ( 1) 00:39:54.909 12.617 - 12.678: 72.1986% ( 2) 00:39:54.909 12.678 - 12.739: 72.3121% ( 8) 00:39:54.909 12.739 - 12.800: 73.2482% ( 66) 00:39:54.909 12.800 - 12.861: 78.4965% ( 370) 00:39:54.909 12.861 - 12.922: 85.4752% ( 492) 00:39:54.909 12.922 - 12.983: 88.6950% ( 227) 00:39:54.909 12.983 - 13.044: 93.1773% ( 316) 00:39:54.909 13.044 - 13.105: 95.5319% ( 166) 00:39:54.909 13.105 - 13.166: 96.7943% ( 89) 00:39:54.909 13.166 - 13.227: 97.4326% ( 45) 00:39:54.909 13.227 - 13.288: 97.7589% ( 23) 00:39:54.909 13.288 - 13.349: 97.9149% ( 11) 00:39:54.909 13.349 - 13.410: 97.9858% ( 5) 00:39:54.909 13.410 - 13.470: 98.0142% ( 2) 00:39:54.909 13.592 - 13.653: 98.0284% ( 1) 00:39:54.909 13.653 - 13.714: 98.0426% ( 1) 00:39:54.909 13.714 - 13.775: 98.0567% ( 1) 00:39:54.909 13.775 - 13.836: 98.0709% ( 1) 00:39:54.909 13.836 - 13.897: 98.1418% ( 5) 00:39:54.909 13.897 - 13.958: 98.1844% ( 3) 00:39:54.909 13.958 - 14.019: 98.2270% ( 3) 00:39:54.909 14.019 - 14.080: 98.2553% ( 2) 00:39:54.909 14.080 - 14.141: 98.4397% ( 13) 00:39:54.909 14.141 - 14.202: 98.4823% ( 3) 00:39:54.909 14.202 - 14.263: 98.5248% ( 3) 00:39:54.909 14.263 - 14.324: 98.5816% ( 4) 00:39:54.909 14.324 - 14.385: 98.5957% ( 1) 00:39:54.909 14.446 - 14.507: 98.6099% ( 1) 00:39:54.909 14.507 - 14.568: 98.6241% ( 1) 00:39:54.909 14.629 - 14.690: 98.6383% ( 1) 00:39:54.909 14.690 - 14.750: 98.6525% ( 1) 00:39:54.909 14.994 - 15.055: 98.6667% 
( 1) 00:39:54.909 15.238 - 15.299: 98.6950% ( 2) 00:39:54.910 15.299 - 15.360: 98.7234% ( 2) 00:39:54.910 15.482 - 15.543: 98.7376% ( 1) 00:39:54.910 15.726 - 15.848: 98.7518% ( 1) 00:39:54.910 15.970 - 16.091: 98.7660% ( 1) 00:39:54.910 16.823 - 16.945: 98.7801% ( 1) 00:39:54.910 16.945 - 17.067: 98.7943% ( 1) 00:39:54.910 17.067 - 17.189: 98.8085% ( 1) 00:39:54.910 17.554 - 17.676: 98.8227% ( 1) 00:39:54.910 17.676 - 17.798: 98.8511% ( 2) 00:39:54.910 18.042 - 18.164: 98.8652% ( 1) 00:39:54.910 18.651 - 18.773: 98.8794% ( 1) 00:39:54.910 18.895 - 19.017: 98.8936% ( 1) 00:39:54.910 19.017 - 19.139: 98.9078% ( 1) 00:39:54.910 19.261 - 19.383: 98.9220% ( 1) 00:39:54.910 19.992 - 20.114: 98.9504% ( 2) 00:39:54.910 20.114 - 20.236: 99.0213% ( 5) 00:39:54.910 20.236 - 20.358: 99.1773% ( 11) 00:39:54.910 20.358 - 20.480: 99.4326% ( 18) 00:39:54.910 20.480 - 20.602: 99.5745% ( 10) 00:39:54.910 20.602 - 20.724: 99.6312% ( 4) 00:39:54.910 20.724 - 20.846: 99.6738% ( 3) 00:39:54.910 20.846 - 20.968: 99.6879% ( 1) 00:39:54.910 20.968 - 21.090: 99.7163% ( 2) 00:39:54.910 21.090 - 21.211: 99.7305% ( 1) 00:39:54.910 21.333 - 21.455: 99.7447% ( 1) 00:39:54.910 21.577 - 21.699: 99.7589% ( 1) 00:39:54.910 21.699 - 21.821: 99.7730% ( 1) 00:39:54.910 21.943 - 22.065: 99.8014% ( 2) 00:39:54.910 26.210 - 26.331: 99.8156% ( 1) 00:39:54.910 27.307 - 27.429: 99.8298% ( 1) 00:39:54.910 28.648 - 28.770: 99.8440% ( 1) 00:39:54.910 29.623 - 29.745: 99.8582% ( 1) 00:39:54.910 33.158 - 33.402: 99.8865% ( 2) 00:39:54.910 33.402 - 33.646: 99.9007% ( 1) 00:39:54.910 36.084 - 36.328: 99.9149% ( 1) 00:39:54.910 41.691 - 41.935: 99.9291% ( 1) 00:39:54.910 46.811 - 47.055: 99.9433% ( 1) 00:39:54.910 47.055 - 47.299: 99.9574% ( 1) 00:39:54.910 68.754 - 69.242: 99.9716% ( 1) 00:39:54.910 78.994 - 79.482: 99.9858% ( 1) 00:39:54.910 108.739 - 109.227: 100.0000% ( 1) 00:39:54.910 00:39:54.910 00:39:54.910 real 0m1.300s 00:39:54.910 user 0m1.093s 00:39:54.910 sys 0m0.141s 00:39:54.910 07:50:28 nvme.nvme_overhead -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:54.910 ************************************ 00:39:54.910 07:50:28 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:39:54.910 END TEST nvme_overhead 00:39:54.910 ************************************ 00:39:54.910 07:50:28 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:39:54.910 07:50:28 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:39:54.910 07:50:28 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:54.910 07:50:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:39:54.910 ************************************ 00:39:54.910 START TEST nvme_arbitration 00:39:54.910 ************************************ 00:39:54.910 07:50:28 nvme.nvme_arbitration -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:39:58.190 Initializing NVMe Controllers 00:39:58.190 Attached to 0000:00:10.0 00:39:58.190 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:39:58.190 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:39:58.190 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:39:58.190 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:39:58.190 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:39:58.190 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:39:58.190 Initialization 
complete. Launching workers. 00:39:58.190 Starting thread on core 1 with urgent priority queue 00:39:58.190 Starting thread on core 2 with urgent priority queue 00:39:58.190 Starting thread on core 0 with urgent priority queue 00:39:58.190 Starting thread on core 3 with urgent priority queue 00:39:58.190 QEMU NVMe Ctrl (12340 ) core 0: 6913.67 IO/s 14.46 secs/100000 ios 00:39:58.190 QEMU NVMe Ctrl (12340 ) core 1: 6839.67 IO/s 14.62 secs/100000 ios 00:39:58.190 QEMU NVMe Ctrl (12340 ) core 2: 3805.33 IO/s 26.28 secs/100000 ios 00:39:58.190 QEMU NVMe Ctrl (12340 ) core 3: 3910.33 IO/s 25.57 secs/100000 ios 00:39:58.190 ======================================================== 00:39:58.190 00:39:58.190 00:39:58.190 real 0m3.334s 00:39:58.190 user 0m9.116s 00:39:58.190 sys 0m0.145s 00:39:58.190 07:50:32 nvme.nvme_arbitration -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:58.190 07:50:32 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:39:58.190 ************************************ 00:39:58.190 END TEST nvme_arbitration 00:39:58.190 ************************************ 00:39:58.190 07:50:32 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:39:58.190 07:50:32 nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:39:58.190 07:50:32 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:58.190 07:50:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:39:58.190 ************************************ 00:39:58.190 START TEST nvme_single_aen 00:39:58.190 ************************************ 00:39:58.190 07:50:32 nvme.nvme_single_aen -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:39:58.459 Asynchronous Event Request test 00:39:58.459 Attached to 0000:00:10.0 00:39:58.459 Reset controller to setup AER completions for this process 00:39:58.459 Registering asynchronous event callbacks... 00:39:58.459 Getting orig temperature thresholds of all controllers 00:39:58.459 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:39:58.459 Setting all controllers temperature threshold low to trigger AER 00:39:58.459 Waiting for all controllers temperature threshold to be set lower 00:39:58.459 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:39:58.459 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:39:58.459 Waiting for all controllers to trigger AER and reset threshold 00:39:58.459 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:39:58.459 Cleaning up... 
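What the sequence above exercises: the controller's original temperature threshold is 343 Kelvin while the current reading is 323 Kelvin, so the test lowers the threshold below the reading, the controller posts an asynchronous event (the aer_cb for log page 2 seen above), and the callback restores the threshold. A minimal sketch of a standalone rerun, with the path and flags copied verbatim from the run_test line above (judging by the output, this invocation drives the temperature-threshold exercise):

    sudo /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0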
00:39:58.720 00:39:58.720 real 0m0.276s 00:39:58.720 user 0m0.070s 00:39:58.720 sys 0m0.108s 00:39:58.720 07:50:32 nvme.nvme_single_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:58.720 07:50:32 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:39:58.720 ************************************ 00:39:58.720 END TEST nvme_single_aen 00:39:58.720 ************************************ 00:39:58.720 07:50:32 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:39:58.720 07:50:32 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:58.720 07:50:32 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:58.720 07:50:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:39:58.720 ************************************ 00:39:58.720 START TEST nvme_doorbell_aers 00:39:58.720 ************************************ 00:39:58.720 07:50:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1121 -- # nvme_doorbell_aers 00:39:58.720 07:50:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:39:58.720 07:50:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:39:58.720 07:50:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:39:58.720 07:50:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:39:58.720 07:50:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # bdfs=() 00:39:58.720 07:50:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # local bdfs 00:39:58.720 07:50:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:39:58.720 07:50:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:39:58.720 07:50:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:39:58.720 07:50:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:39:58.720 07:50:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:39:58.720 07:50:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:39:58.720 07:50:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:39:58.978 [2024-07-12 07:50:32.684941] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 178819) is not found. Dropping the request. 00:40:08.957 Executing: test_write_invalid_db 00:40:08.957 Waiting for AER completion... 00:40:08.957 Failure: test_write_invalid_db 00:40:08.957 00:40:08.957 Executing: test_invalid_db_write_overflow_sq 00:40:08.957 Waiting for AER completion... 00:40:08.957 Failure: test_invalid_db_write_overflow_sq 00:40:08.957 00:40:08.957 Executing: test_invalid_db_write_overflow_cq 00:40:08.957 Waiting for AER completion... 
00:40:08.957 Failure: test_invalid_db_write_overflow_cq 00:40:08.957 00:40:08.957 00:40:08.957 real 0m10.114s 00:40:08.957 user 0m7.499s 00:40:08.957 sys 0m2.544s 00:40:08.957 07:50:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:08.957 ************************************ 00:40:08.957 07:50:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:40:08.957 END TEST nvme_doorbell_aers 00:40:08.957 ************************************ 00:40:08.957 07:50:42 nvme -- nvme/nvme.sh@97 -- # uname 00:40:08.957 07:50:42 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:40:08.957 07:50:42 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:40:08.957 07:50:42 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:40:08.957 07:50:42 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:08.957 07:50:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:08.957 ************************************ 00:40:08.957 START TEST nvme_multi_aen 00:40:08.957 ************************************ 00:40:08.957 07:50:42 nvme.nvme_multi_aen -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:40:08.957 [2024-07-12 07:50:42.762602] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 178819) is not found. Dropping the request. 00:40:08.957 [2024-07-12 07:50:42.762768] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 178819) is not found. Dropping the request. 00:40:08.957 [2024-07-12 07:50:42.762843] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 178819) is not found. Dropping the request. 00:40:08.957 Child process pid: 179017 00:40:09.216 [Child] Asynchronous Event Request test 00:40:09.216 [Child] Attached to 0000:00:10.0 00:40:09.216 [Child] Registering asynchronous event callbacks... 00:40:09.216 [Child] Getting orig temperature thresholds of all controllers 00:40:09.216 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:40:09.216 [Child] Waiting for all controllers to trigger AER and reset threshold 00:40:09.216 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:40:09.216 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:40:09.216 [Child] Cleaning up... 00:40:09.475 Asynchronous Event Request test 00:40:09.475 Attached to 0000:00:10.0 00:40:09.475 Reset controller to setup AER completions for this process 00:40:09.475 Registering asynchronous event callbacks... 00:40:09.475 Getting orig temperature thresholds of all controllers 00:40:09.475 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:40:09.475 Setting all controllers temperature threshold low to trigger AER 00:40:09.475 Waiting for all controllers temperature threshold to be set lower 00:40:09.475 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:40:09.475 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:40:09.475 Waiting for all controllers to trigger AER and reset threshold 00:40:09.475 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:40:09.475 Cleaning up... 
00:40:09.475 00:40:09.475 real 0m0.592s 00:40:09.475 user 0m0.201s 00:40:09.475 sys 0m0.242s 00:40:09.475 07:50:43 nvme.nvme_multi_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:09.475 ************************************ 00:40:09.475 END TEST nvme_multi_aen 00:40:09.475 07:50:43 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:40:09.475 ************************************ 00:40:09.475 07:50:43 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:40:09.475 07:50:43 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:40:09.475 07:50:43 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:09.475 07:50:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:09.475 ************************************ 00:40:09.475 START TEST nvme_startup 00:40:09.475 ************************************ 00:40:09.475 07:50:43 nvme.nvme_startup -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:40:09.733 Initializing NVMe Controllers 00:40:09.733 Attached to 0000:00:10.0 00:40:09.733 Initialization complete. 00:40:09.733 Time used:237519.781 (us). 00:40:09.733 00:40:09.733 real 0m0.333s 00:40:09.733 user 0m0.101s 00:40:09.733 sys 0m0.134s 00:40:09.733 07:50:43 nvme.nvme_startup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:09.733 07:50:43 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:40:09.733 ************************************ 00:40:09.733 END TEST nvme_startup 00:40:09.733 ************************************ 00:40:09.733 07:50:43 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:40:09.733 07:50:43 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:09.733 07:50:43 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:09.733 07:50:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:09.991 ************************************ 00:40:09.992 START TEST nvme_multi_secondary 00:40:09.992 ************************************ 00:40:09.992 07:50:43 nvme.nvme_multi_secondary -- common/autotest_common.sh@1121 -- # nvme_multi_secondary 00:40:09.992 07:50:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=179084 00:40:09.992 07:50:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:40:09.992 07:50:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=179085 00:40:09.992 07:50:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:40:09.992 07:50:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:40:13.272 Initializing NVMe Controllers 00:40:13.272 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:13.272 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:40:13.272 Initialization complete. Launching workers. 
00:40:13.272 ======================================================== 00:40:13.272 Latency(us) 00:40:13.272 Device Information : IOPS MiB/s Average min max 00:40:13.272 PCIE (0000:00:10.0) NSID 1 from core 1: 36441.93 142.35 438.76 171.07 1582.25 00:40:13.272 ======================================================== 00:40:13.272 Total : 36441.93 142.35 438.76 171.07 1582.25 00:40:13.272 00:40:13.272 Initializing NVMe Controllers 00:40:13.272 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:13.272 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:40:13.272 Initialization complete. Launching workers. 00:40:13.272 ======================================================== 00:40:13.272 Latency(us) 00:40:13.272 Device Information : IOPS MiB/s Average min max 00:40:13.272 PCIE (0000:00:10.0) NSID 1 from core 2: 15751.54 61.53 1015.60 171.63 24872.24 00:40:13.272 ======================================================== 00:40:13.272 Total : 15751.54 61.53 1015.60 171.63 24872.24 00:40:13.272 00:40:13.531 07:50:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 179084 00:40:15.431 Initializing NVMe Controllers 00:40:15.431 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:15.431 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:40:15.431 Initialization complete. Launching workers. 00:40:15.431 ======================================================== 00:40:15.431 Latency(us) 00:40:15.431 Device Information : IOPS MiB/s Average min max 00:40:15.431 PCIE (0000:00:10.0) NSID 1 from core 0: 41862.80 163.53 381.89 131.96 2555.88 00:40:15.431 ======================================================== 00:40:15.431 Total : 41862.80 163.53 381.89 131.96 2555.88 00:40:15.431 00:40:15.431 07:50:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 179085 00:40:15.431 07:50:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=179158 00:40:15.431 07:50:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:40:15.431 07:50:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=179159 00:40:15.431 07:50:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:40:15.431 07:50:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:40:18.713 Initializing NVMe Controllers 00:40:18.713 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:18.713 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:40:18.713 Initialization complete. Launching workers. 00:40:18.713 ======================================================== 00:40:18.713 Latency(us) 00:40:18.713 Device Information : IOPS MiB/s Average min max 00:40:18.713 PCIE (0000:00:10.0) NSID 1 from core 0: 35710.63 139.49 447.79 166.84 3870.92 00:40:18.713 ======================================================== 00:40:18.713 Total : 35710.63 139.49 447.79 166.84 3870.92 00:40:18.713 00:40:18.973 Initializing NVMe Controllers 00:40:18.973 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:18.973 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:40:18.973 Initialization complete. Launching workers. 
00:40:18.973 ======================================================== 00:40:18.973 Latency(us) 00:40:18.973 Device Information : IOPS MiB/s Average min max 00:40:18.973 PCIE (0000:00:10.0) NSID 1 from core 1: 37147.00 145.11 430.42 151.43 3487.99 00:40:18.973 ======================================================== 00:40:18.973 Total : 37147.00 145.11 430.42 151.43 3487.99 00:40:18.973 00:40:21.523 Initializing NVMe Controllers 00:40:21.523 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:21.523 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:40:21.523 Initialization complete. Launching workers. 00:40:21.523 ======================================================== 00:40:21.523 Latency(us) 00:40:21.523 Device Information : IOPS MiB/s Average min max 00:40:21.523 PCIE (0000:00:10.0) NSID 1 from core 2: 17637.96 68.90 906.53 109.15 28495.39 00:40:21.523 ======================================================== 00:40:21.523 Total : 17637.96 68.90 906.53 109.15 28495.39 00:40:21.523 00:40:21.523 07:50:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 179158 00:40:21.523 07:50:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 179159 00:40:21.523 00:40:21.523 real 0m11.287s 00:40:21.523 user 0m18.563s 00:40:21.523 sys 0m0.943s 00:40:21.523 07:50:54 nvme.nvme_multi_secondary -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:21.523 07:50:54 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:40:21.523 ************************************ 00:40:21.523 END TEST nvme_multi_secondary 00:40:21.523 ************************************ 00:40:21.523 07:50:54 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:40:21.523 07:50:54 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:40:21.523 07:50:54 nvme -- common/autotest_common.sh@1085 -- # [[ -e /proc/178386 ]] 00:40:21.523 07:50:54 nvme -- common/autotest_common.sh@1086 -- # kill 178386 00:40:21.523 07:50:54 nvme -- common/autotest_common.sh@1087 -- # wait 178386 00:40:21.523 [2024-07-12 07:50:54.958754] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 179016) is not found. Dropping the request. 00:40:21.523 [2024-07-12 07:50:54.959009] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 179016) is not found. Dropping the request. 00:40:21.523 [2024-07-12 07:50:54.959090] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 179016) is not found. Dropping the request. 00:40:21.523 [2024-07-12 07:50:54.959173] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 179016) is not found. Dropping the request. 
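The nvme_multi_secondary section above is SPDK's multi-process mode in action: three spdk_nvme_perf instances share shared-memory id 0 (-i 0) but pin to disjoint core masks, so the later instances attach to the first one's hugepage memory as secondary processes instead of re-initializing the controller. A condensed sketch of the first round, with flags copied verbatim from the log (the binary path assumes this job's layout; treating the instance started first as the primary is an assumption of the sketch):

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    $PERF -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # started first: primary on core 0
    $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary on core 1
    $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # secondary on core 2
    wait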
00:40:21.523 07:50:55 nvme -- common/autotest_common.sh@1089 -- # rm -f /var/run/spdk_stub0 00:40:21.523 07:50:55 nvme -- common/autotest_common.sh@1093 -- # echo 2 00:40:21.523 07:50:55 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:40:21.523 07:50:55 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:21.523 07:50:55 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:21.523 07:50:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:21.523 ************************************ 00:40:21.523 START TEST bdev_nvme_reset_stuck_adm_cmd 00:40:21.523 ************************************ 00:40:21.523 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:40:21.523 * Looking for test storage... 00:40:21.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:40:21.523 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:40:21.523 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:40:21.523 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:40:21.523 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:40:21.523 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:40:21.523 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:40:21.523 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # bdfs=() 00:40:21.523 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # local bdfs 00:40:21.523 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:40:21.523 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:40:21.523 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:40:21.523 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:40:21.523 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:21.523 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:40:21.523 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:40:21.523 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:40:21.523 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:40:21.523 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:40:21.524 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:40:21.524 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:40:21.524 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=179307 00:40:21.524 07:50:55 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:40:21.524 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:40:21.524 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 179307 00:40:21.524 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@827 -- # '[' -z 179307 ']' 00:40:21.524 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:21.524 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:21.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:21.524 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:21.524 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:21.524 07:50:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:40:21.782 [2024-07-12 07:50:55.471226] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:40:21.782 [2024-07-12 07:50:55.471516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179307 ] 00:40:22.041 [2024-07-12 07:50:55.680157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:22.041 [2024-07-12 07:50:55.755101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:22.041 [2024-07-12 07:50:55.755159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:40:22.041 [2024-07-12 07:50:55.755226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:40:22.041 [2024-07-12 07:50:55.755225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:22.609 07:50:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:22.609 07:50:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # return 0 00:40:22.609 07:50:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:40:22.609 07:50:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:22.609 07:50:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:40:22.609 nvme0n1 00:40:22.609 07:50:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.609 07:50:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:40:22.609 07:50:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_mmaBE.txt 00:40:22.609 07:50:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:40:22.609 07:50:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:40:22.609 07:50:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:40:22.609 true 00:40:22.609 07:50:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:22.867 07:50:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:40:22.867 07:50:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1720770656 00:40:22.867 07:50:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:40:22.867 07:50:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=179335 00:40:22.867 07:50:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:40:22.867 07:50:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:40:24.772 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:40:24.772 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:24.772 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:40:24.772 [2024-07-12 07:50:58.505677] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:40:24.772 [2024-07-12 07:50:58.506140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:24.773 [2024-07-12 07:50:58.506215] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:40:24.773 [2024-07-12 07:50:58.506280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:24.773 [2024-07-12 07:50:58.508398] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
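The trace above is the heart of the stuck-admin-command test: an error injection is armed for one admin GET FEATURES command (opcode 10, i.e. 0x0a) with --do_not_submit and a 15 s timeout, a matching command is sent over RPC so it hangs in the controller, and bdev_nvme_reset_controller then completes it manually (the "Command completed manually" / INVALID OPCODE (00/01) completion logged above). A condensed sketch of the same RPC sequence, assuming an spdk_tgt is already running with controller nvme0 attached, as in the log:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Arm one injected error (sct 0, sc 1 = Invalid Opcode) for the next admin
    # opcode 0x0a, held for up to 15 s instead of being submitted
    $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # Send the matching GET FEATURES (first byte 0x0a, cdw10 0x07 = number of
    # queues) as base64; it sits in the controller until the reset completes it
    $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
    sleep 2
    $RPC bdev_nvme_reset_controller nvme0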
00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:24.773 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 179335 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 179335 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 179335 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_mmaBE.txt 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_mmaBE.txt 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 179307 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@946 -- # '[' -z 179307 ']' 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # kill -0 179307 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # uname 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 179307 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 179307' 00:40:24.773 killing process with pid 179307 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@965 -- # kill 179307 00:40:24.773 07:50:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # wait 179307 00:40:25.710 07:50:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:40:25.710 07:50:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:40:25.710 00:40:25.710 real 0m4.153s 00:40:25.710 user 0m14.275s 00:40:25.710 sys 0m0.706s 00:40:25.710 07:50:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:25.710 07:50:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:40:25.710 ************************************ 00:40:25.710 END TEST bdev_nvme_reset_stuck_adm_cmd 00:40:25.710 ************************************ 00:40:25.710 07:50:59 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:40:25.710 07:50:59 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:40:25.710 07:50:59 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:25.710 07:50:59 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:25.710 07:50:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:25.710 ************************************ 00:40:25.710 START TEST nvme_fio 00:40:25.710 ************************************ 00:40:25.710 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1121 -- # nvme_fio_test 00:40:25.710 07:50:59 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:40:25.710 07:50:59 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:40:25.710 07:50:59 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:40:25.710 07:50:59 nvme.nvme_fio -- 
common/autotest_common.sh@1509 -- # bdfs=() 00:40:25.710 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # local bdfs 00:40:25.710 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:25.710 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:40:25.710 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:40:25.710 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:40:25.710 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:40:25.710 07:50:59 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:40:25.710 07:50:59 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:40:25.710 07:50:59 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:40:25.710 07:50:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:40:25.710 07:50:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:40:25.970 07:50:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:40:25.970 07:50:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:40:26.230 07:50:59 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:40:26.230 07:50:59 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:40:26.230 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:40:26.230 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:40:26.230 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:26.230 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:40:26.230 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:40:26.230 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:40:26.230 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:40:26.230 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:40:26.230 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:40:26.230 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:40:26.230 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:40:26.230 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:40:26.230 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:40:26.230 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:40:26.230 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:40:26.230 07:50:59 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # 
/usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:40:26.489 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:40:26.489 fio-3.35 00:40:26.489 Starting 1 thread 00:40:29.780 00:40:29.780 test: (groupid=0, jobs=1): err= 0: pid=179470: Fri Jul 12 07:51:03 2024 00:40:29.780 read: IOPS=19.4k, BW=75.6MiB/s (79.3MB/s)(151MiB/2001msec) 00:40:29.780 slat (usec): min=4, max=118, avg= 5.73, stdev= 2.37 00:40:29.780 clat (usec): min=392, max=11703, avg=3289.35, stdev=364.86 00:40:29.780 lat (usec): min=397, max=11821, avg=3295.08, stdev=365.36 00:40:29.780 clat percentiles (usec): 00:40:29.780 | 1.00th=[ 2900], 5.00th=[ 2999], 10.00th=[ 3064], 20.00th=[ 3097], 00:40:29.780 | 30.00th=[ 3163], 40.00th=[ 3195], 50.00th=[ 3228], 60.00th=[ 3261], 00:40:29.780 | 70.00th=[ 3294], 80.00th=[ 3359], 90.00th=[ 3523], 95.00th=[ 4015], 00:40:29.780 | 99.00th=[ 4359], 99.50th=[ 4490], 99.90th=[ 7701], 99.95th=[ 9634], 00:40:29.780 | 99.99th=[11469] 00:40:29.780 bw ( KiB/s): min=71984, max=80024, per=99.35%, avg=76930.67, stdev=4328.57, samples=3 00:40:29.780 iops : min=17996, max=20006, avg=19232.67, stdev=1082.14, samples=3 00:40:29.780 write: IOPS=19.3k, BW=75.5MiB/s (79.1MB/s)(151MiB/2001msec); 0 zone resets 00:40:29.780 slat (usec): min=4, max=541, avg= 6.03, stdev= 3.92 00:40:29.780 clat (usec): min=216, max=11554, avg=3305.29, stdev=375.09 00:40:29.780 lat (usec): min=222, max=11591, avg=3311.32, stdev=375.57 00:40:29.780 clat percentiles (usec): 00:40:29.780 | 1.00th=[ 2900], 5.00th=[ 3032], 10.00th=[ 3064], 20.00th=[ 3130], 00:40:29.780 | 30.00th=[ 3163], 40.00th=[ 3195], 50.00th=[ 3228], 60.00th=[ 3261], 00:40:29.780 | 70.00th=[ 3326], 80.00th=[ 3392], 90.00th=[ 3556], 95.00th=[ 4047], 00:40:29.780 | 99.00th=[ 4359], 99.50th=[ 4555], 99.90th=[ 8029], 99.95th=[ 9896], 00:40:29.780 | 99.99th=[11338] 00:40:29.780 bw ( KiB/s): min=71912, max=80184, per=99.63%, avg=77008.00, stdev=4457.72, samples=3 00:40:29.780 iops : min=17978, max=20046, avg=19252.00, stdev=1114.43, samples=3 00:40:29.780 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:40:29.780 lat (msec) : 2=0.09%, 4=94.51%, 10=5.31%, 20=0.04% 00:40:29.780 cpu : usr=99.80%, sys=0.00%, ctx=28, majf=0, minf=39 00:40:29.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:40:29.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:29.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:29.780 issued rwts: total=38737,38667,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:29.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:29.780 00:40:29.780 Run status group 0 (all jobs): 00:40:29.780 READ: bw=75.6MiB/s (79.3MB/s), 75.6MiB/s-75.6MiB/s (79.3MB/s-79.3MB/s), io=151MiB (159MB), run=2001-2001msec 00:40:29.780 WRITE: bw=75.5MiB/s (79.1MB/s), 75.5MiB/s-75.5MiB/s (79.1MB/s-79.1MB/s), io=151MiB (158MB), run=2001-2001msec 00:40:30.038 ----------------------------------------------------- 00:40:30.038 Suppressions used: 00:40:30.038 count bytes template 00:40:30.038 1 32 /usr/src/fio/parse.c 00:40:30.038 ----------------------------------------------------- 00:40:30.038 00:40:30.038 07:51:03 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:40:30.038 07:51:03 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:40:30.038 ************************************ 00:40:30.038 END TEST nvme_fio 00:40:30.038 
00:40:30.038 ************************************
00:40:30.038 END TEST nvme_fio
00:40:30.038 ************************************
00:40:30.038 
00:40:30.038 real 0m4.297s
00:40:30.038 user 0m3.544s
00:40:30.038 sys 0m0.423s
00:40:30.038 07:51:03 nvme.nvme_fio -- common/autotest_common.sh@1122 -- # xtrace_disable
00:40:30.038 07:51:03 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:40:30.039 
00:40:30.039 real 0m45.899s
00:40:30.039 user 1m57.906s
00:40:30.039 sys 0m10.705s
00:40:30.039 07:51:03 nvme -- common/autotest_common.sh@1122 -- # xtrace_disable
00:40:30.039 07:51:03 nvme -- common/autotest_common.sh@10 -- # set +x
00:40:30.039 ************************************
00:40:30.039 END TEST nvme
00:40:30.039 ************************************
00:40:30.039 07:51:03 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]]
00:40:30.039 07:51:03 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:40:30.039 07:51:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:40:30.039 07:51:03 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:40:30.039 07:51:03 -- common/autotest_common.sh@10 -- # set +x
00:40:30.039 ************************************
00:40:30.039 START TEST nvme_scc
00:40:30.039 ************************************
00:40:30.039 07:51:03 nvme_scc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:40:30.297 * Looking for test storage...
00:40:30.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:40:30.297 07:51:03 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:40:30.297 07:51:03 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:40:30.297 07:51:03 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:40:30.297 07:51:03 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:40:30.297 07:51:03 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:40:30.297 07:51:03 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:40:30.297 07:51:03 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:40:30.297 07:51:03 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:40:30.297 07:51:03 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:40:30.297 07:51:03 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:40:30.297 07:51:03 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:40:30.297 07:51:03 nvme_scc -- paths/export.sh@5 -- # export PATH
00:40:30.297 07:51:03 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:40:30.297 07:51:03 nvme_scc -- nvme/functions.sh@10 -- # ctrls=()
00:40:30.297 07:51:03 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls
00:40:30.297 07:51:03 nvme_scc -- nvme/functions.sh@11 -- # nvmes=()
00:40:30.297 07:51:03 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes
00:40:30.297 07:51:03 nvme_scc -- nvme/functions.sh@12 -- # bdfs=()
00:40:30.297 07:51:03 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs
00:40:30.297 07:51:03 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:40:30.297 07:51:03 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:40:30.297 07:51:03 nvme_scc -- nvme/functions.sh@14 -- # nvme_name=
00:40:30.297 07:51:03 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:40:30.297 07:51:03 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname
00:40:30.297 07:51:03 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]]
00:40:30.297 07:51:03 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]]
00:40:30.297 07:51:03 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:40:30.556 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:40:30.556 Waiting for block devices as requested
00:40:30.817 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:40:30.817 07:51:04 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:40:30.817 07:51:04 nvme_scc -- scripts/common.sh@15 -- # local i
00:40:30.817 07:51:04 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]]
00:40:30.817 07:51:04 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]]
00:40:30.817 07:51:04 nvme_scc -- scripts/common.sh@24 -- # return 0
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@18 -- # shift
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
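nvme_get, whose trace continues below, fills a global associative array (here nvme0) by splitting each "reg : val" line that nvme-cli prints. A self-contained sketch of that parsing pattern (an assumed simplification of the traced helper; requires nvme-cli and root):

declare -A ctrl_regs
while IFS=: read -r reg val; do
  [[ -n $reg && -n $val ]] || continue   # skip headers and blank lines
  reg=${reg//[[:space:]]/}               # register names carry no internal spaces
  val=${val#"${val%%[![:space:]]*}"}     # trim leading whitespace from the value
  ctrl_regs[$reg]=$val
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
echo "oncs=${ctrl_regs[oncs]:-unknown}"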
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12340 '
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl '
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 '
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:40:30.817 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:40:30.818 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:40:30.819 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:40:30.820 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340
00:40:30.820 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:40:30.820 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:40:30.820 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:40:30.820 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:40:30.820 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:40:30.820 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:40:30.820 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:40:30.820 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:40:30.820 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:40:30.820 07:51:04 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:40:30.820 07:51:04 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:40:30.820 07:51:04 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:40:30.820 07:51:04 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:40:30.820 07:51:04 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:40:30.820 07:51:04 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:40:30.820 07:51:04 nvme_scc -- nvme/functions.sh@18 -- # shift
00:40:30.820 07:51:04 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:40:31.080 07:51:04 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:40:31.080 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:40:31.080 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:40:31.080 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:40:31.081 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@65 -- # (( 1 > 0 ))
00:40:31.082 07:51:04 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc
00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc
00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@190 -- # (( 1 == 0 ))
00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc
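The SCC gate evaluated just below reduces to a single arithmetic test: ONCS is a bitmask of optional NVM commands, and bit 8 advertises Simple Copy. In isolation, with this controller's value:

oncs=0x15d                     # from nvme0[oncs] above
if (( oncs & 1 << 8 )); then   # 0x15d & 0x100 == 0x100, so the test passes
  echo "nvme0 supports the Simple Copy command"
fi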
feature=scc 00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@206 -- # echo nvme0 00:40:31.082 07:51:04 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:40:31.082 07:51:04 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:40:31.082 07:51:04 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:40:31.082 07:51:04 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:40:31.649 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:40:31.649 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:40:33.554 07:51:07 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:40:33.554 07:51:07 nvme_scc -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:40:33.554 07:51:07 nvme_scc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:33.554 07:51:07 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:40:33.554 ************************************ 00:40:33.554 START TEST nvme_simple_copy 00:40:33.554 ************************************ 00:40:33.554 07:51:07 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:40:33.813 Initializing NVMe Controllers 00:40:33.813 Attaching to 0000:00:10.0 00:40:33.813 Controller supports SCC. Attached to 0000:00:10.0 00:40:33.813 Namespace ID: 1 size: 5GB 00:40:33.813 Initialization complete. 
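The xtrace above is nvme/functions.sh caching the controller's identify output into bash associative arrays (one "read -r reg val" plus "eval" per field) and then gating nvme_scc on Simple Copy support, which is bit 8 of the ONCS field. A minimal sketch of those two steps, with illustrative input in place of the real identify dump:

  # Sketch only: variable names and input lines are illustrative, not the exact functions.sh code.
  declare -A nvme0n1
  while IFS=: read -r reg val; do
      [[ -n $val ]] && eval "nvme0n1[$reg]=\"$val\""      # e.g. nvme0n1[nsattr]=0, as traced above
  done < <(printf 'nsattr:0\nnvmsetid:0\nendgid:0\n')
  oncs=0x15d                                              # value echoed from the cached identify data
  (( oncs & 1 << 8 )) && echo nvme0                       # 0x15d has bit 8 set -> controller has SCC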
00:40:33.813 00:40:33.813 Controller QEMU NVMe Ctrl (12340 ) 00:40:33.813 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:40:33.813 Namespace Block Size:4096 00:40:33.813 Writing LBAs 0 to 63 with Random Data 00:40:33.813 Copied LBAs from 0 - 63 to the Destination LBA 256 00:40:33.813 LBAs matching Written Data: 64 00:40:33.813 00:40:33.813 real 0m0.315s 00:40:33.813 user 0m0.108s 00:40:33.813 sys 0m0.109s 00:40:33.813 07:51:07 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:33.813 07:51:07 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:40:33.813 ************************************ 00:40:33.813 END TEST nvme_simple_copy 00:40:33.813 ************************************ 00:40:33.813 ************************************ 00:40:33.813 END TEST nvme_scc 00:40:33.813 ************************************ 00:40:33.813 00:40:33.813 real 0m3.772s 00:40:33.813 user 0m0.793s 00:40:33.813 sys 0m2.883s 00:40:33.813 07:51:07 nvme_scc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:33.813 07:51:07 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:40:33.813 07:51:07 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:40:33.813 07:51:07 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:40:33.813 07:51:07 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:40:33.813 07:51:07 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:40:33.813 07:51:07 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:40:33.813 07:51:07 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:40:33.813 07:51:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:33.813 07:51:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:33.813 07:51:07 -- common/autotest_common.sh@10 -- # set +x 00:40:33.813 ************************************ 00:40:33.813 START TEST nvme_rpc 00:40:33.813 ************************************ 00:40:33.813 07:51:07 nvme_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:40:34.073 * Looking for test storage... 
00:40:34.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:40:34.073 07:51:07 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:34.073 07:51:07 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:40:34.073 07:51:07 nvme_rpc -- common/autotest_common.sh@1520 -- # bdfs=() 00:40:34.073 07:51:07 nvme_rpc -- common/autotest_common.sh@1520 -- # local bdfs 00:40:34.073 07:51:07 nvme_rpc -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:40:34.073 07:51:07 nvme_rpc -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:40:34.073 07:51:07 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:40:34.073 07:51:07 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:40:34.073 07:51:07 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:34.073 07:51:07 nvme_rpc -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:40:34.073 07:51:07 nvme_rpc -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:40:34.074 07:51:07 nvme_rpc -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:40:34.074 07:51:07 nvme_rpc -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:40:34.074 07:51:07 nvme_rpc -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:40:34.074 07:51:07 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:40:34.074 07:51:07 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=179967 00:40:34.074 07:51:07 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:40:34.074 07:51:07 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:40:34.074 07:51:07 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 179967 00:40:34.074 07:51:07 nvme_rpc -- common/autotest_common.sh@827 -- # '[' -z 179967 ']' 00:40:34.074 07:51:07 nvme_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:34.074 07:51:07 nvme_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:34.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:34.074 07:51:07 nvme_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:34.074 07:51:07 nvme_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:34.074 07:51:07 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:34.074 [2024-07-12 07:51:07.940054] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:40:34.074 [2024-07-12 07:51:07.940315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179967 ] 00:40:34.333 [2024-07-12 07:51:08.106909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:34.593 [2024-07-12 07:51:08.216917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:34.593 [2024-07-12 07:51:08.216918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:35.166 07:51:08 nvme_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:35.166 07:51:08 nvme_rpc -- common/autotest_common.sh@860 -- # return 0 00:40:35.166 07:51:08 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:40:35.461 Nvme0n1 00:40:35.461 07:51:09 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:40:35.461 07:51:09 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:40:35.747 request: 00:40:35.747 { 00:40:35.747 "filename": "non_existing_file", 00:40:35.747 "bdev_name": "Nvme0n1", 00:40:35.747 "method": "bdev_nvme_apply_firmware", 00:40:35.747 "req_id": 1 00:40:35.747 } 00:40:35.747 Got JSON-RPC error response 00:40:35.747 response: 00:40:35.747 { 00:40:35.747 "code": -32603, 00:40:35.747 "message": "open file failed." 00:40:35.747 } 00:40:35.747 07:51:09 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:40:35.747 07:51:09 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:40:35.747 07:51:09 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:40:36.006 07:51:09 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:40:36.006 07:51:09 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 179967 00:40:36.006 07:51:09 nvme_rpc -- common/autotest_common.sh@946 -- # '[' -z 179967 ']' 00:40:36.006 07:51:09 nvme_rpc -- common/autotest_common.sh@950 -- # kill -0 179967 00:40:36.006 07:51:09 nvme_rpc -- common/autotest_common.sh@951 -- # uname 00:40:36.006 07:51:09 nvme_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:40:36.006 07:51:09 nvme_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 179967 00:40:36.006 07:51:09 nvme_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:40:36.006 07:51:09 nvme_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:40:36.006 killing process with pid 179967 00:40:36.006 07:51:09 nvme_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 179967' 00:40:36.006 07:51:09 nvme_rpc -- common/autotest_common.sh@965 -- # kill 179967 00:40:36.006 07:51:09 nvme_rpc -- common/autotest_common.sh@970 -- # wait 179967 00:40:36.574 00:40:36.574 real 0m2.744s 00:40:36.574 user 0m5.035s 00:40:36.574 sys 0m0.931s 00:40:36.574 ************************************ 00:40:36.574 END TEST nvme_rpc 00:40:36.574 ************************************ 00:40:36.574 07:51:10 nvme_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:36.574 07:51:10 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:36.833 07:51:10 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:40:36.833 07:51:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
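Stripped of the xtrace plumbing, the nvme_rpc run above boils down to three RPC calls against the freshly started spdk_tgt, with the middle one expected to fail exactly as the JSON-RPC error response shows (socket and paths as in this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1   # fails with -32603 "open file failed."
  $rpc bdev_nvme_detach_controller Nvme0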
00:40:36.833 07:51:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:36.833 07:51:10 -- common/autotest_common.sh@10 -- # set +x 00:40:36.833 ************************************ 00:40:36.833 START TEST nvme_rpc_timeouts 00:40:36.833 ************************************ 00:40:36.833 07:51:10 nvme_rpc_timeouts -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:40:36.833 * Looking for test storage... 00:40:36.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:40:36.833 07:51:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:36.833 07:51:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_180029 00:40:36.833 07:51:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_180029 00:40:36.833 07:51:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=180064 00:40:36.833 07:51:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:40:36.834 07:51:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:40:36.834 07:51:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 180064 00:40:36.834 07:51:10 nvme_rpc_timeouts -- common/autotest_common.sh@827 -- # '[' -z 180064 ']' 00:40:36.834 07:51:10 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:36.834 07:51:10 nvme_rpc_timeouts -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:36.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:36.834 07:51:10 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:36.834 07:51:10 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:36.834 07:51:10 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:40:36.834 [2024-07-12 07:51:10.670885] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:40:36.834 [2024-07-12 07:51:10.671157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180064 ] 00:40:37.093 [2024-07-12 07:51:10.829589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:37.093 [2024-07-12 07:51:10.910922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:37.093 [2024-07-12 07:51:10.910926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:38.029 07:51:11 nvme_rpc_timeouts -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:38.029 07:51:11 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # return 0 00:40:38.029 Checking default timeout settings: 00:40:38.029 07:51:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:40:38.029 07:51:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:40:38.288 Making settings changes with rpc: 00:40:38.288 07:51:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:40:38.288 07:51:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:40:38.546 Check default vs. modified settings: 00:40:38.546 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:40:38.546 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_180029 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_180029 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:40:38.806 Setting action_on_timeout is changed as expected. 
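The timeout test's shape is save, modify, save: snapshot the default config, push new timeouts over RPC, snapshot again, then diff the two files setting by setting. Condensed below; the redirects to the tmpfiles are not visible in the xtrace, but the filenames come from the script's own variables above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config > /tmp/settings_default_180029
  $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc save_config > /tmp/settings_modified_180029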
00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_180029 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_180029 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:40:38.806 Setting timeout_us is changed as expected. 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_180029 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_180029 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:40:38.806 Setting timeout_admin_us is changed as expected. 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
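Each comparison traced above uses the same grep | awk | sed pipeline: pull the key's line out of the saved JSON, keep the second field, and strip punctuation so a value like "12000000," compares cleanly. Rewritten as a function for readability (a hypothetical wrapper around the exact pipeline in the trace; the script inlines this per setting):

  check_setting() {
      local key=$1 before after
      before=$(grep "$key" /tmp/settings_default_180029 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$key" /tmp/settings_modified_180029 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [ "$before" == "$after" ] && return 1                # unchanged -> test failure
      echo "Setting $key is changed as expected."
  }
  check_setting timeout_admin_us                           # 0 vs 24000000 in this run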
00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_180029 /tmp/settings_modified_180029 00:40:38.806 07:51:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 180064 00:40:38.806 07:51:12 nvme_rpc_timeouts -- common/autotest_common.sh@946 -- # '[' -z 180064 ']' 00:40:38.806 07:51:12 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # kill -0 180064 00:40:38.806 07:51:12 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # uname 00:40:38.806 07:51:12 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:40:38.806 07:51:12 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 180064 00:40:38.806 07:51:12 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:40:38.806 07:51:12 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:40:38.806 killing process with pid 180064 00:40:38.806 07:51:12 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # echo 'killing process with pid 180064' 00:40:38.806 07:51:12 nvme_rpc_timeouts -- common/autotest_common.sh@965 -- # kill 180064 00:40:38.806 07:51:12 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # wait 180064 00:40:39.742 RPC TIMEOUT SETTING TEST PASSED. 00:40:39.742 07:51:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:40:39.742 ************************************ 00:40:39.742 END TEST nvme_rpc_timeouts 00:40:39.742 ************************************ 00:40:39.742 00:40:39.742 real 0m2.846s 00:40:39.742 user 0m5.488s 00:40:39.742 sys 0m0.821s 00:40:39.742 07:51:13 nvme_rpc_timeouts -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:39.742 07:51:13 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:40:39.742 07:51:13 -- spdk/autotest.sh@243 -- # uname -s 00:40:39.742 07:51:13 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:40:39.742 07:51:13 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:40:39.742 07:51:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:39.742 07:51:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:39.742 07:51:13 -- common/autotest_common.sh@10 -- # set +x 00:40:39.742 ************************************ 00:40:39.742 START TEST sw_hotplug 00:40:39.742 ************************************ 00:40:39.742 07:51:13 sw_hotplug -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:40:39.742 * Looking for test storage... 
00:40:39.742 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:40:39.742 07:51:13 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:40:40.310 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:40:40.310 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:40:41.245 07:51:14 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # hotplug_wait=6 00:40:41.245 07:51:14 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # hotplug_events=3 00:40:41.245 07:51:14 sw_hotplug -- nvme/sw_hotplug.sh@126 -- # nvmes=($(nvme_in_userspace)) 00:40:41.245 07:51:14 sw_hotplug -- nvme/sw_hotplug.sh@126 -- # nvme_in_userspace 00:40:41.245 07:51:14 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:40:41.245 07:51:14 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:40:41.245 07:51:14 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:40:41.245 07:51:14 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@230 -- # local class 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@15 -- # local i 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@325 
-- # (( 1 )) 00:40:41.245 07:51:15 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:40:41.245 07:51:15 sw_hotplug -- nvme/sw_hotplug.sh@127 -- # nvme_count=1 00:40:41.245 07:51:15 sw_hotplug -- nvme/sw_hotplug.sh@128 -- # nvmes=("${nvmes[@]::nvme_count}") 00:40:41.245 07:51:15 sw_hotplug -- nvme/sw_hotplug.sh@130 -- # xtrace_disable 00:40:41.245 07:51:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:41.505 07:51:15 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # run_hotplug 00:40:41.505 07:51:15 sw_hotplug -- nvme/sw_hotplug.sh@65 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:40:41.505 07:51:15 sw_hotplug -- nvme/sw_hotplug.sh@73 -- # hotplug_pid=180351 00:40:41.505 07:51:15 sw_hotplug -- nvme/sw_hotplug.sh@75 -- # debug_remove_attach_helper 3 6 false 00:40:41.505 07:51:15 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:40:41.505 07:51:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 3 -r 3 -l warning 00:40:41.505 07:51:15 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 false 00:40:41.505 07:51:15 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:40:41.505 07:51:15 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:40:41.505 07:51:15 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:40:41.505 07:51:15 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 false 00:40:41.505 07:51:15 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:40:41.505 07:51:15 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:40:41.505 07:51:15 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=false 00:40:41.505 07:51:15 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:40:41.505 07:51:15 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:40:41.505 Initializing NVMe Controllers 00:40:41.505 Attaching to 0000:00:10.0 00:40:41.505 Attached to 0000:00:10.0 00:40:41.505 Initialization complete. Starting I/O... 00:40:41.505 QEMU NVMe Ctrl (12340 ): 2 I/Os completed (+2) 00:40:41.505 00:40:42.883 QEMU NVMe Ctrl (12340 ): 2042 I/Os completed (+2040) 00:40:42.883 00:40:43.819 QEMU NVMe Ctrl (12340 ): 4694 I/Os completed (+2652) 00:40:43.819 00:40:44.754 QEMU NVMe Ctrl (12340 ): 7714 I/Os completed (+3020) 00:40:44.754 00:40:45.688 QEMU NVMe Ctrl (12340 ): 10714 I/Os completed (+3000) 00:40:45.688 00:40:46.625 QEMU NVMe Ctrl (12340 ): 13546 I/Os completed (+2832) 00:40:46.625 00:40:47.564 07:51:21 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:40:47.564 07:51:21 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:40:47.564 07:51:21 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:40:47.564 [2024-07-12 07:51:21.178982] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
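For reference, the nvme_in_userspace scan traced at scripts/common.sh@239-242 above is a single pipeline: dump every PCI function, keep class 01 / subclass 08 / prog-if 02 (NVM Express), and print the bare BDF. On this VM it yields exactly one device:

  lspci -mm -n -D | grep -i -- -p02 \
      | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
      | tr -d '"'
  # -> 0000:00:10.0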
00:40:47.564 Controller removed: QEMU NVMe Ctrl (12340 ) 00:40:47.564 [2024-07-12 07:51:21.180174] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:47.564 [2024-07-12 07:51:21.180255] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:47.564 [2024-07-12 07:51:21.180274] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:47.564 [2024-07-12 07:51:21.180292] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:47.564 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:40:47.565 [2024-07-12 07:51:21.182246] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:47.565 [2024-07-12 07:51:21.182289] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:47.565 [2024-07-12 07:51:21.182305] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:47.565 [2024-07-12 07:51:21.182320] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:47.565 07:51:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:40:47.565 07:51:21 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:40:47.565 07:51:21 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:40:47.565 07:51:21 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:40:47.565 07:51:21 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:40:47.565 00:40:47.565 07:51:21 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:40:47.565 07:51:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:40:47.565 07:51:21 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:40:47.565 Attaching to 0000:00:10.0 00:40:47.565 Attached to 0000:00:10.0 00:40:48.503 QEMU NVMe Ctrl (12340 ): 3269 I/Os completed (+3269) 00:40:48.503 00:40:49.879 QEMU NVMe Ctrl (12340 ): 6849 I/Os completed (+3580) 00:40:49.879 00:40:50.813 QEMU NVMe Ctrl (12340 ): 10445 I/Os completed (+3596) 00:40:50.813 00:40:51.747 QEMU NVMe Ctrl (12340 ): 14078 I/Os completed (+3633) 00:40:51.747 00:40:52.686 QEMU NVMe Ctrl (12340 ): 17671 I/Os completed (+3593) 00:40:52.686 00:40:53.627 QEMU NVMe Ctrl (12340 ): 21227 I/Os completed (+3556) 00:40:53.627 00:40:53.627 07:51:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:40:53.627 07:51:27 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:40:53.627 07:51:27 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:40:53.627 07:51:27 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:40:53.627 [2024-07-12 07:51:27.449717] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:40:53.627 Controller removed: QEMU NVMe Ctrl (12340 ) 00:40:53.627 [2024-07-12 07:51:27.450854] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:53.627 [2024-07-12 07:51:27.450894] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:53.627 [2024-07-12 07:51:27.450911] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:53.627 [2024-07-12 07:51:27.450926] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:53.627 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:40:53.627 [2024-07-12 07:51:27.452484] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:53.627 [2024-07-12 07:51:27.452512] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:53.627 [2024-07-12 07:51:27.452526] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:53.627 [2024-07-12 07:51:27.452540] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:53.627 07:51:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:40:53.627 07:51:27 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:40:53.886 07:51:27 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:40:53.886 07:51:27 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:40:53.886 07:51:27 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:40:53.886 07:51:27 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:40:53.886 07:51:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:40:53.886 07:51:27 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:40:53.886 Attaching to 0000:00:10.0 00:40:53.886 Attached to 0000:00:10.0 00:40:54.825 QEMU NVMe Ctrl (12340 ): 2282 I/Os completed (+2282) 00:40:54.825 00:40:55.763 QEMU NVMe Ctrl (12340 ): 5922 I/Os completed (+3640) 00:40:55.763 00:40:56.700 QEMU NVMe Ctrl (12340 ): 9522 I/Os completed (+3600) 00:40:56.700 00:40:57.638 QEMU NVMe Ctrl (12340 ): 13070 I/Os completed (+3548) 00:40:57.638 00:40:58.578 QEMU NVMe Ctrl (12340 ): 16622 I/Os completed (+3552) 00:40:58.578 00:40:59.514 QEMU NVMe Ctrl (12340 ): 20226 I/Os completed (+3604) 00:40:59.514 00:41:00.081 07:51:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:41:00.081 07:51:33 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:41:00.081 07:51:33 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:41:00.081 07:51:33 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:41:00.081 [2024-07-12 07:51:33.722609] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
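Each hotplug event above is the same sysfs dance. The xtrace shows the echo commands but not their redirect targets, so the following is a plausible reconstruction using the standard kernel knobs (the sh@100 trap later in this log does confirm /sys/bus/pci/rescan):

  bdf=0000:00:10.0
  echo 1 > /sys/bus/pci/devices/$bdf/remove            # surprise-remove (sh@35) -> "failed state" above
  echo 1 > /sys/bus/pci/rescan                         # re-enumerate (sh@44)
  echo uio_pci_generic > /sys/bus/pci/devices/$bdf/driver_override   # sh@47
  echo $bdf > /sys/bus/pci/drivers_probe               # rebind (sh@48/49, reconstructed)
  echo '' > /sys/bus/pci/devices/$bdf/driver_override  # clear the override (sh@50)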
00:41:00.081 Controller removed: QEMU NVMe Ctrl (12340 ) 00:41:00.081 [2024-07-12 07:51:33.723920] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:00.081 [2024-07-12 07:51:33.723967] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:00.081 [2024-07-12 07:51:33.723985] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:00.081 [2024-07-12 07:51:33.724375] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:00.081 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:41:00.081 [2024-07-12 07:51:33.725872] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:00.081 [2024-07-12 07:51:33.725910] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:00.081 [2024-07-12 07:51:33.725926] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:00.081 [2024-07-12 07:51:33.725942] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:00.081 07:51:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:41:00.081 07:51:33 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:41:00.081 07:51:33 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:41:00.082 07:51:33 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:41:00.082 07:51:33 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:41:00.082 07:51:33 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:41:00.340 07:51:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:41:00.340 07:51:33 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:41:00.340 Attaching to 0000:00:10.0 00:41:00.340 Attached to 0000:00:10.0 00:41:00.340 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:41:00.340 [2024-07-12 07:51:34.006196] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:41:06.972 07:51:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:41:06.972 07:51:39 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:41:06.972 07:51:40 sw_hotplug -- common/autotest_common.sh@714 -- # time=24.83 00:41:06.972 07:51:40 sw_hotplug -- common/autotest_common.sh@716 -- # echo 24.83 00:41:06.972 07:51:40 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=24.83 00:41:06.972 07:51:40 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 24.83 1 00:41:06.972 remove_attach_helper took 24.83s to complete (handling 1 nvme drive(s)) 07:51:40 sw_hotplug -- nvme/sw_hotplug.sh@79 -- # sleep 6 00:41:12.242 07:51:46 sw_hotplug -- nvme/sw_hotplug.sh@81 -- # kill -0 180351 00:41:12.242 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 81: kill: (180351) - No such process 00:41:12.242 07:51:46 sw_hotplug -- nvme/sw_hotplug.sh@83 -- # wait 180351 00:41:12.242 07:51:46 sw_hotplug -- nvme/sw_hotplug.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:41:12.242 07:51:46 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # tgt_run_hotplug 00:41:12.242 07:51:46 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # local dev 00:41:12.242 07:51:46 sw_hotplug -- nvme/sw_hotplug.sh@98 -- # spdk_tgt_pid=180701 00:41:12.242 07:51:46 sw_hotplug -- nvme/sw_hotplug.sh@100 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 
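Two helper patterns are worth decoding at this point. First, the 24.83s figure above comes from bash's time builtin with TIMEFORMAT=%2R (real seconds, two decimals) captured from stderr; a minimal sketch of timing_cmd's pattern (the real helper also checks [[ -t 0 ]] and redirects via exec):

  TIMEFORMAT=%2R
  helper_time=$( { time remove_attach_helper 3 6 false > /dev/null; } 2>&1 )
  printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' "$helper_time" 1

Second, the tgt_run_hotplug phase that starts here runs with use_bdev=true, so instead of sysfs it polls the target with the two jq queries traced below: the bdev count after a removal, and the PCI address after re-attach:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_get_bdevs | jq length                      # expected 0 once Nvme00n1 is torn down
  $rpc bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort   # expected 0000:00:10.0 again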
00:41:12.242 07:51:46 sw_hotplug -- nvme/sw_hotplug.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:12.242 07:51:46 sw_hotplug -- nvme/sw_hotplug.sh@101 -- # waitforlisten 180701 00:41:12.242 07:51:46 sw_hotplug -- common/autotest_common.sh@827 -- # '[' -z 180701 ']' 00:41:12.242 07:51:46 sw_hotplug -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:12.242 07:51:46 sw_hotplug -- common/autotest_common.sh@832 -- # local max_retries=100 00:41:12.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:12.242 07:51:46 sw_hotplug -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:12.242 07:51:46 sw_hotplug -- common/autotest_common.sh@836 -- # xtrace_disable 00:41:12.242 07:51:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:12.242 [2024-07-12 07:51:46.109521] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:41:12.242 [2024-07-12 07:51:46.109793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180701 ] 00:41:12.501 [2024-07-12 07:51:46.268214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:12.501 [2024-07-12 07:51:46.319692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:13.438 07:51:46 sw_hotplug -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:41:13.438 07:51:46 sw_hotplug -- common/autotest_common.sh@860 -- # return 0 00:41:13.438 07:51:46 sw_hotplug -- nvme/sw_hotplug.sh@103 -- # for dev in "${!nvmes[@]}" 00:41:13.438 07:51:46 sw_hotplug -- nvme/sw_hotplug.sh@104 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme00 -t PCIe -a 0000:00:10.0 00:41:13.438 07:51:46 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:13.438 07:51:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:13.438 Nvme00n1 00:41:13.438 07:51:47 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:13.438 07:51:47 sw_hotplug -- nvme/sw_hotplug.sh@105 -- # waitforbdev Nvme00n1 6 00:41:13.438 07:51:47 sw_hotplug -- common/autotest_common.sh@895 -- # local bdev_name=Nvme00n1 00:41:13.438 07:51:47 sw_hotplug -- common/autotest_common.sh@896 -- # local bdev_timeout=6 00:41:13.438 07:51:47 sw_hotplug -- common/autotest_common.sh@897 -- # local i 00:41:13.438 07:51:47 sw_hotplug -- common/autotest_common.sh@898 -- # [[ -z 6 ]] 00:41:13.438 07:51:47 sw_hotplug -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:41:13.438 07:51:47 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:13.438 07:51:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:13.438 07:51:47 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:13.438 07:51:47 sw_hotplug -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Nvme00n1 -t 6 00:41:13.438 07:51:47 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:13.438 07:51:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:13.438 [ 00:41:13.438 { 00:41:13.438 "name": "Nvme00n1", 00:41:13.438 "aliases": [ 00:41:13.438 "a5a4f0ed-2edf-48c0-8806-c1b3fc7b551a" 00:41:13.438 ], 00:41:13.438 "product_name": "NVMe disk", 00:41:13.438 "block_size": 4096, 00:41:13.438 "num_blocks": 1310720, 00:41:13.438 "uuid": 
"a5a4f0ed-2edf-48c0-8806-c1b3fc7b551a", 00:41:13.438 "assigned_rate_limits": { 00:41:13.438 "rw_ios_per_sec": 0, 00:41:13.438 "rw_mbytes_per_sec": 0, 00:41:13.438 "r_mbytes_per_sec": 0, 00:41:13.438 "w_mbytes_per_sec": 0 00:41:13.438 }, 00:41:13.438 "claimed": false, 00:41:13.438 "zoned": false, 00:41:13.438 "supported_io_types": { 00:41:13.438 "read": true, 00:41:13.438 "write": true, 00:41:13.438 "unmap": true, 00:41:13.438 "write_zeroes": true, 00:41:13.438 "flush": true, 00:41:13.438 "reset": true, 00:41:13.438 "compare": true, 00:41:13.438 "compare_and_write": false, 00:41:13.439 "abort": true, 00:41:13.439 "nvme_admin": true, 00:41:13.439 "nvme_io": true 00:41:13.439 }, 00:41:13.439 "driver_specific": { 00:41:13.439 "nvme": [ 00:41:13.439 { 00:41:13.439 "pci_address": "0000:00:10.0", 00:41:13.439 "trid": { 00:41:13.439 "trtype": "PCIe", 00:41:13.439 "traddr": "0000:00:10.0" 00:41:13.439 }, 00:41:13.439 "ctrlr_data": { 00:41:13.439 "cntlid": 0, 00:41:13.439 "vendor_id": "0x1b36", 00:41:13.439 "model_number": "QEMU NVMe Ctrl", 00:41:13.439 "serial_number": "12340", 00:41:13.439 "firmware_revision": "8.0.0", 00:41:13.439 "subnqn": "nqn.2019-08.org.qemu:12340", 00:41:13.439 "oacs": { 00:41:13.439 "security": 0, 00:41:13.439 "format": 1, 00:41:13.439 "firmware": 0, 00:41:13.439 "ns_manage": 1 00:41:13.439 }, 00:41:13.439 "multi_ctrlr": false, 00:41:13.439 "ana_reporting": false 00:41:13.439 }, 00:41:13.439 "vs": { 00:41:13.439 "nvme_version": "1.4" 00:41:13.439 }, 00:41:13.439 "ns_data": { 00:41:13.439 "id": 1, 00:41:13.439 "can_share": false 00:41:13.439 } 00:41:13.439 } 00:41:13.439 ], 00:41:13.439 "mp_policy": "active_passive" 00:41:13.439 } 00:41:13.439 } 00:41:13.439 ] 00:41:13.439 07:51:47 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:13.439 07:51:47 sw_hotplug -- common/autotest_common.sh@903 -- # return 0 00:41:13.439 07:51:47 sw_hotplug -- nvme/sw_hotplug.sh@108 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:41:13.439 07:51:47 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:13.439 07:51:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:13.439 07:51:47 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:13.439 07:51:47 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # debug_remove_attach_helper 3 6 true 00:41:13.439 07:51:47 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:41:13.439 07:51:47 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true 00:41:13.439 07:51:47 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:41:13.439 07:51:47 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:41:13.439 07:51:47 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:41:13.439 07:51:47 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 true 00:41:13.439 07:51:47 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:41:13.439 07:51:47 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:41:13.439 07:51:47 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:41:13.439 07:51:47 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:41:13.439 07:51:47 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:41:20.008 07:51:53 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:41:20.008 07:51:53 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:41:20.008 07:51:53 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:41:20.008 07:51:53 sw_hotplug -- 
nvme/sw_hotplug.sh@38 -- # true 00:41:20.008 07:51:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:41:20.008 [2024-07-12 07:51:53.180721] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:41:20.008 [2024-07-12 07:51:53.182357] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:20.008 [2024-07-12 07:51:53.182407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:41:20.008 [2024-07-12 07:51:53.182462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:20.008 [2024-07-12 07:51:53.182502] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:20.008 [2024-07-12 07:51:53.182520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:41:20.008 [2024-07-12 07:51:53.182547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:20.008 [2024-07-12 07:51:53.182564] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:20.008 [2024-07-12 07:51:53.182586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:41:20.008 [2024-07-12 07:51:53.182615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:20.008 [2024-07-12 07:51:53.182638] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:20.008 [2024-07-12 07:51:53.182656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:41:20.008 [2024-07-12 07:51:53.182681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:25.283 07:51:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:41:25.283 07:51:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:41:25.283 07:51:59 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:25.283 07:51:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:25.283 07:51:59 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:25.542 07:51:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 )) 00:41:25.542 07:51:59 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:41:25.542 07:51:59 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:41:25.542 07:51:59 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:41:25.542 07:51:59 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:41:25.542 07:51:59 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:41:25.542 07:51:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:41:25.542 07:51:59 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:41:32.109 07:52:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # true 00:41:32.109 07:52:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort)) 00:41:32.109 07:52:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs 00:41:32.109 07:52:05 sw_hotplug -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:41:32.109 07:52:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address' 00:41:32.109 07:52:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:32.109 07:52:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # sort 00:41:32.109 07:52:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:32.109 07:52:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:41:32.109 07:52:05 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:41:32.109 07:52:05 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:41:32.109 07:52:05 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:41:32.109 [2024-07-12 07:52:05.480945] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:41:32.109 [2024-07-12 07:52:05.482637] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:32.109 [2024-07-12 07:52:05.482700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:41:32.109 [2024-07-12 07:52:05.482723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:32.109 [2024-07-12 07:52:05.482752] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:32.109 [2024-07-12 07:52:05.482778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:41:32.109 [2024-07-12 07:52:05.482803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:32.109 [2024-07-12 07:52:05.482833] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:32.109 [2024-07-12 07:52:05.482874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:41:32.109 [2024-07-12 07:52:05.482899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:32.109 [2024-07-12 07:52:05.482922] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:32.109 [2024-07-12 07:52:05.482941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:41:32.109 [2024-07-12 07:52:05.482966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:32.109 07:52:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:41:32.109 07:52:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:41:38.678 07:52:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:41:38.678 07:52:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:41:38.678 07:52:11 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:38.678 07:52:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:38.678 07:52:11 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:38.678 07:52:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 )) 00:41:38.678 07:52:11 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:41:38.678 07:52:11 sw_hotplug -- nvme/sw_hotplug.sh@46 
-- # for dev in "${nvmes[@]}" 00:41:38.678 07:52:11 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:41:38.678 07:52:11 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:41:38.678 07:52:11 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:41:38.678 07:52:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:41:38.678 07:52:11 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:41:43.951 07:52:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # true 00:41:43.951 07:52:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort)) 00:41:43.951 07:52:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs 00:41:43.951 07:52:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address' 00:41:43.951 07:52:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:43.951 07:52:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:43.951 07:52:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # sort 00:41:43.951 07:52:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:43.951 07:52:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:41:43.951 07:52:17 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:41:43.951 07:52:17 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:41:43.951 07:52:17 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:41:44.210 07:52:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:41:44.210 07:52:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:41:44.210 [2024-07-12 07:52:17.881174] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:41:44.210 [2024-07-12 07:52:17.882720] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:44.210 [2024-07-12 07:52:17.882768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:41:44.210 [2024-07-12 07:52:17.882800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:44.210 [2024-07-12 07:52:17.882832] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:44.210 [2024-07-12 07:52:17.882855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:41:44.210 [2024-07-12 07:52:17.882884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:44.210 [2024-07-12 07:52:17.882918] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:44.210 [2024-07-12 07:52:17.882941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:41:44.210 [2024-07-12 07:52:17.882961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:44.210 [2024-07-12 07:52:17.882982] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:44.210 [2024-07-12 07:52:17.883002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:41:44.210 
[2024-07-12 07:52:17.883025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:50.781 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:41:50.781 07:52:23 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:50.781 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:41:50.781 07:52:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:50.781 07:52:23 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:50.781 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 )) 00:41:50.781 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:41:50.781 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:41:50.781 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:41:50.781 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:41:50.781 07:52:24 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:41:50.781 07:52:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:41:50.781 07:52:24 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:41:57.352 07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # true 00:41:57.352 07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort)) 00:41:57.352 07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs 00:41:57.352 07:52:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:57.352 07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address' 00:41:57.352 07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # sort 00:41:57.352 07:52:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:57.352 07:52:30 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:57.352 07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:41:57.352 07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:41:57.352 07:52:30 sw_hotplug -- common/autotest_common.sh@714 -- # time=43.05 00:41:57.352 07:52:30 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.05 00:41:57.352 07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=43.05 00:41:57.352 07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.05 1 00:41:57.352 remove_attach_helper took 43.05s to complete (handling 1 nvme drive(s)) 07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:41:57.352 07:52:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:57.352 07:52:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:57.352 07:52:30 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:57.352 07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:41:57.352 07:52:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:57.352 07:52:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:57.352 07:52:30 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:57.352 07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # debug_remove_attach_helper 3 6 true 00:41:57.352 07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:41:57.352 07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 
true 00:41:57.352 07:52:30 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:41:57.352 07:52:30 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:41:57.352 07:52:30 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:41:57.352 07:52:30 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 true 00:41:57.352 07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:41:57.352 07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:41:57.352 07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:41:57.352 07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:41:57.352 07:52:30 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:42:02.631 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:42:02.631 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:42:02.631 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:42:02.631 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:42:02.631 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:42:02.631 [2024-07-12 07:52:36.264997] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:42:02.631 [2024-07-12 07:52:36.266592] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:02.631 [2024-07-12 07:52:36.266634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:42:02.631 [2024-07-12 07:52:36.266662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:02.631 [2024-07-12 07:52:36.266687] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:02.631 [2024-07-12 07:52:36.266710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:42:02.631 [2024-07-12 07:52:36.266756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:02.631 [2024-07-12 07:52:36.266774] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:02.631 [2024-07-12 07:52:36.266796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:42:02.631 [2024-07-12 07:52:36.266813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:02.631 [2024-07-12 07:52:36.266837] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:02.631 [2024-07-12 07:52:36.266854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:42:02.631 [2024-07-12 07:52:36.266897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:09.199 07:52:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:42:09.199 07:52:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:42:09.199 07:52:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:09.199 07:52:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:09.199 07:52:42 
sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:09.199 07:52:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 )) 00:42:09.199 07:52:42 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:42:09.199 07:52:42 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:42:09.199 07:52:42 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:42:09.199 07:52:42 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:42:09.199 07:52:42 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:42:09.199 07:52:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:42:09.199 07:52:42 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:42:15.764 07:52:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # true 00:42:15.764 07:52:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort)) 00:42:15.764 07:52:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs 00:42:15.764 07:52:48 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:15.764 07:52:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # sort 00:42:15.764 07:52:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:15.764 07:52:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address' 00:42:15.764 07:52:48 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:15.764 07:52:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:42:15.764 07:52:48 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:42:15.764 07:52:48 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:42:15.764 07:52:48 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:42:15.764 [2024-07-12 07:52:48.565221] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:42:15.764 [2024-07-12 07:52:48.567020] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:15.764 [2024-07-12 07:52:48.567073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:42:15.764 [2024-07-12 07:52:48.567103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:15.764 [2024-07-12 07:52:48.567130] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:15.764 [2024-07-12 07:52:48.567167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:42:15.764 [2024-07-12 07:52:48.567185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:15.764 [2024-07-12 07:52:48.567222] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:15.764 [2024-07-12 07:52:48.567241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:42:15.764 [2024-07-12 07:52:48.567265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:15.764 [2024-07-12 07:52:48.567287] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:15.764 [2024-07-12 07:52:48.567321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:42:15.764 [2024-07-12 07:52:48.567345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:15.764 07:52:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:42:15.764 07:52:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:42:21.040 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:42:21.040 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:42:21.040 07:52:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:21.040 07:52:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:21.040 07:52:54 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:21.040 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 )) 00:42:21.040 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:42:21.040 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:42:21.040 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:42:21.040 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:42:21.040 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:42:21.040 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:42:21.040 07:52:54 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:42:27.614 07:53:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # true 00:42:27.614 07:53:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort)) 00:42:27.614 07:53:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs 00:42:27.614 07:53:00 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:27.614 07:53:00 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:42:27.614 07:53:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # sort 00:42:27.614 07:53:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address' 00:42:27.614 07:53:00 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:27.614 07:53:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:42:27.614 07:53:00 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:42:27.614 07:53:00 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:42:27.614 07:53:00 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:42:27.614 [2024-07-12 07:53:00.965487] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:42:27.614 [2024-07-12 07:53:00.966720] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:27.614 [2024-07-12 07:53:00.966761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:42:27.614 [2024-07-12 07:53:00.966781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:27.614 [2024-07-12 07:53:00.966801] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:27.614 [2024-07-12 07:53:00.966833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:42:27.614 [2024-07-12 07:53:00.966851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:27.614 [2024-07-12 07:53:00.966874] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:27.614 [2024-07-12 07:53:00.966900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:42:27.614 [2024-07-12 07:53:00.966913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:27.614 [2024-07-12 07:53:00.966927] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:27.614 [2024-07-12 07:53:00.966940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:42:27.614 [2024-07-12 07:53:00.966953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:27.614 07:53:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:42:27.614 07:53:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:42:34.184 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:42:34.184 07:53:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:42:34.184 07:53:06 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:34.184 07:53:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:34.184 07:53:07 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:34.184 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 )) 00:42:34.184 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:42:34.184 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:42:34.184 07:53:07 sw_hotplug -- 
nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:42:34.184 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:42:34.184 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:42:34.184 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:42:34.184 07:53:07 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:42:39.458 07:53:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # true 00:42:39.458 07:53:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort)) 00:42:39.458 07:53:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs 00:42:39.458 07:53:13 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:39.458 07:53:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address' 00:42:39.458 07:53:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:39.458 07:53:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # sort 00:42:39.458 07:53:13 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:39.458 07:53:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:42:39.458 07:53:13 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:42:39.458 07:53:13 sw_hotplug -- common/autotest_common.sh@714 -- # time=43.13 00:42:39.458 07:53:13 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.13 00:42:39.458 07:53:13 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=43.13 00:42:39.458 07:53:13 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.13 1 00:42:39.458 remove_attach_helper took 43.13s to complete (handling 1 nvme drive(s)) 07:53:13 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:42:39.458 07:53:13 sw_hotplug -- nvme/sw_hotplug.sh@118 -- # killprocess 180701 00:42:39.458 07:53:13 sw_hotplug -- common/autotest_common.sh@946 -- # '[' -z 180701 ']' 00:42:39.458 07:53:13 sw_hotplug -- common/autotest_common.sh@950 -- # kill -0 180701 00:42:39.458 07:53:13 sw_hotplug -- common/autotest_common.sh@951 -- # uname 00:42:39.458 07:53:13 sw_hotplug -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:42:39.458 07:53:13 sw_hotplug -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 180701 00:42:39.458 07:53:13 sw_hotplug -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:42:39.458 07:53:13 sw_hotplug -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:42:39.458 killing process with pid 180701 00:42:39.458 07:53:13 sw_hotplug -- common/autotest_common.sh@964 -- # echo 'killing process with pid 180701' 00:42:39.458 07:53:13 sw_hotplug -- common/autotest_common.sh@965 -- # kill 180701 00:42:39.458 07:53:13 sw_hotplug -- common/autotest_common.sh@970 -- # wait 180701 00:42:40.026 ************************************ 00:42:40.026 END TEST sw_hotplug 00:42:40.026 ************************************ 00:42:40.026 00:42:40.026 real 2m0.445s 00:42:40.026 user 1m34.626s 00:42:40.026 sys 0m16.188s 00:42:40.026 07:53:13 sw_hotplug -- common/autotest_common.sh@1122 -- # xtrace_disable 00:42:40.026 07:53:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:40.026 07:53:13 -- spdk/autotest.sh@247 -- # [[ 0 -eq 1 ]] 00:42:40.026 07:53:13 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:42:40.026 07:53:13 -- spdk/autotest.sh@260 -- # timing_exit lib 00:42:40.026 07:53:13 -- common/autotest_common.sh@726 -- # xtrace_disable 
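Each hotplug event in the sw_hotplug trace above ends with the same verification step: query the target over RPC, pull the PCI address of every NVMe-backed bdev, and compare it against the expected BDF (sw_hotplug.sh@58-59). A minimal standalone sketch of that check, assuming a running spdk_tgt and the stock scripts/rpc.py client in place of the test suite's rpc_cmd wrapper:

RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
expected_bdf=0000:00:10.0

# Same jq filter as the trace: one PCI address per NVMe bdev, sorted for a stable compare.
mapfile -t bdfs < <("$RPC_PY" bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort)

if [[ ${bdfs[0]} == "$expected_bdf" ]]; then
    echo "device re-attached at $expected_bdf"
else
    echo "unexpected BDF set: ${bdfs[*]}" >&2
    exit 1
fi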
00:42:40.026 07:53:13 -- common/autotest_common.sh@10 -- # set +x 00:42:40.286 07:53:13 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:42:40.286 07:53:13 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:42:40.286 07:53:13 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:42:40.286 07:53:13 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:42:40.286 07:53:13 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:42:40.286 07:53:13 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:42:40.286 07:53:13 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:42:40.286 07:53:13 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:42:40.286 07:53:13 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:42:40.286 07:53:13 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:42:40.286 07:53:13 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:42:40.286 07:53:13 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:42:40.286 07:53:13 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:42:40.286 07:53:13 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:42:40.286 07:53:13 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:42:40.286 07:53:13 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:42:40.286 07:53:13 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:42:40.286 07:53:13 -- spdk/autotest.sh@375 -- # [[ 1 -eq 1 ]] 00:42:40.286 07:53:13 -- spdk/autotest.sh@376 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:42:40.286 07:53:13 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:42:40.286 07:53:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:42:40.286 07:53:13 -- common/autotest_common.sh@10 -- # set +x 00:42:40.287 ************************************ 00:42:40.287 START TEST blockdev_raid5f 00:42:40.287 ************************************ 00:42:40.287 07:53:13 blockdev_raid5f -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:42:40.287 * Looking for test storage... 
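Before any blockdev test runs, the suite starting here brings up an spdk_tgt and blocks in waitforlisten until the target's RPC socket answers (see spdk_tgt_pid and the max_retries=100 loop in the trace that follows). A rough equivalent, assuming rpc_get_methods as the liveness probe — the polling interval is illustrative, not taken from the log:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" &
tgt_pid=$!

# Poll the default RPC socket until the target responds, up to 100 tries.
for ((i = 0; i < 100; i++)); do
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        echo "spdk_tgt ($tgt_pid) is listening"
        break
    fi
    sleep 0.5
done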
00:42:40.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@674 -- # uname -s 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@682 -- # test_type=raid5f 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@683 -- # crypto_device= 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@684 -- # dek= 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@685 -- # env_ctx= 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@690 -- # [[ raid5f == bdev ]] 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@690 -- # [[ raid5f == crypto_* ]] 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=181804 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:42:40.287 07:53:14 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 181804 00:42:40.287 07:53:14 blockdev_raid5f -- common/autotest_common.sh@827 -- # '[' -z 181804 ']' 00:42:40.287 07:53:14 blockdev_raid5f -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:40.287 07:53:14 blockdev_raid5f -- common/autotest_common.sh@832 -- # local max_retries=100 00:42:40.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:40.287 07:53:14 blockdev_raid5f -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:40.287 07:53:14 blockdev_raid5f -- common/autotest_common.sh@836 -- # xtrace_disable 00:42:40.287 07:53:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:40.547 [2024-07-12 07:53:14.181099] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
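setup_raid5f_conf, next in the trace, builds three Malloc bdevs and assembles them into a raid5f volume; the full bdev JSON is dumped a little further down. A small jq check against the fields visible in that dump (raid_level, state, operational base bdev count), assuming the same scripts/rpc.py client as above:

RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Succeeds (exit 0) only if an online raid5f volume with 3 operational base bdevs exists.
"$RPC_PY" bdev_get_bdevs | jq -e '
    .[] | select(.product_name == "Raid Volume")
        | select(.driver_specific.raid.raid_level == "raid5f"
                 and .driver_specific.raid.state == "online"
                 and .driver_specific.raid.num_base_bdevs_operational == 3)' > /dev/null \
    && echo "raid5f volume online with 3 base bdevs"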
00:42:40.547 [2024-07-12 07:53:14.182294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181804 ] 00:42:40.547 [2024-07-12 07:53:14.338137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:40.547 [2024-07-12 07:53:14.385124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:41.485 07:53:15 blockdev_raid5f -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:42:41.485 07:53:15 blockdev_raid5f -- common/autotest_common.sh@860 -- # return 0 00:42:41.485 07:53:15 blockdev_raid5f -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:42:41.485 07:53:15 blockdev_raid5f -- bdev/blockdev.sh@726 -- # setup_raid5f_conf 00:42:41.485 07:53:15 blockdev_raid5f -- bdev/blockdev.sh@280 -- # rpc_cmd 00:42:41.485 07:53:15 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:41.485 07:53:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:41.485 Malloc0 00:42:41.485 Malloc1 00:42:41.485 Malloc2 00:42:41.485 07:53:15 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:41.485 07:53:15 blockdev_raid5f -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:42:41.485 07:53:15 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:41.485 07:53:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:41.485 07:53:15 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:41.485 07:53:15 blockdev_raid5f -- bdev/blockdev.sh@740 -- # cat 00:42:41.485 07:53:15 blockdev_raid5f -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:42:41.485 07:53:15 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:41.485 07:53:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:41.485 07:53:15 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:41.485 07:53:15 blockdev_raid5f -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:42:41.486 07:53:15 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:41.486 07:53:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:41.486 07:53:15 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:41.486 07:53:15 blockdev_raid5f -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:42:41.486 07:53:15 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:41.486 07:53:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:41.486 07:53:15 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:41.486 07:53:15 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:42:41.486 07:53:15 blockdev_raid5f -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:42:41.486 07:53:15 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:42:41.486 07:53:15 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:41.486 07:53:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:41.486 07:53:15 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:41.486 07:53:15 blockdev_raid5f -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:42:41.486 07:53:15 blockdev_raid5f -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' 
"6d1c3ab3-ee4a-4a5b-bfb8-d772a44b8bc7"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6d1c3ab3-ee4a-4a5b-bfb8-d772a44b8bc7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "6d1c3ab3-ee4a-4a5b-bfb8-d772a44b8bc7",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "4d079d98-13f3-4b5b-bfa8-b38d733b2667",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "c14edaef-dc9d-424f-8567-e86ea85eb096",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "c9c75cf6-e564-4e8d-bafb-565d008fe4ec",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:42:41.486 07:53:15 blockdev_raid5f -- bdev/blockdev.sh@749 -- # jq -r .name 00:42:41.486 07:53:15 blockdev_raid5f -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:42:41.486 07:53:15 blockdev_raid5f -- bdev/blockdev.sh@752 -- # hello_world_bdev=raid5f 00:42:41.486 07:53:15 blockdev_raid5f -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:42:41.486 07:53:15 blockdev_raid5f -- bdev/blockdev.sh@754 -- # killprocess 181804 00:42:41.486 07:53:15 blockdev_raid5f -- common/autotest_common.sh@946 -- # '[' -z 181804 ']' 00:42:41.486 07:53:15 blockdev_raid5f -- common/autotest_common.sh@950 -- # kill -0 181804 00:42:41.486 07:53:15 blockdev_raid5f -- common/autotest_common.sh@951 -- # uname 00:42:41.486 07:53:15 blockdev_raid5f -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:42:41.486 07:53:15 blockdev_raid5f -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 181804 00:42:41.486 07:53:15 blockdev_raid5f -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:42:41.486 07:53:15 blockdev_raid5f -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:42:41.486 killing process with pid 181804 00:42:41.486 07:53:15 blockdev_raid5f -- common/autotest_common.sh@964 -- # echo 'killing process with pid 181804' 00:42:41.486 07:53:15 blockdev_raid5f -- common/autotest_common.sh@965 -- # kill 181804 00:42:41.486 07:53:15 blockdev_raid5f -- common/autotest_common.sh@970 -- # wait 181804 00:42:42.055 07:53:15 blockdev_raid5f -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:42:42.055 07:53:15 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:42:42.055 07:53:15 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:42:42.055 07:53:15 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:42:42.055 07:53:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:42.055 ************************************ 00:42:42.055 START TEST bdev_hello_world 00:42:42.055 
************************************ 00:42:42.055 07:53:15 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:42:42.055 [2024-07-12 07:53:15.859080] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:42:42.055 [2024-07-12 07:53:15.859239] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181846 ] 00:42:42.314 [2024-07-12 07:53:15.998126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:42.314 [2024-07-12 07:53:16.039976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:42.574 [2024-07-12 07:53:16.243049] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:42:42.574 [2024-07-12 07:53:16.243118] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:42:42.574 [2024-07-12 07:53:16.243165] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:42:42.574 [2024-07-12 07:53:16.243530] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:42:42.574 [2024-07-12 07:53:16.243680] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:42:42.574 [2024-07-12 07:53:16.243704] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:42:42.574 [2024-07-12 07:53:16.243772] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:42:42.574 00:42:42.574 [2024-07-12 07:53:16.243819] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:42:42.834 00:42:42.834 real 0m0.708s 00:42:42.834 user 0m0.385s 00:42:42.834 sys 0m0.200s 00:42:42.834 07:53:16 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:42:42.834 07:53:16 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:42:42.834 ************************************ 00:42:42.834 END TEST bdev_hello_world 00:42:42.834 ************************************ 00:42:42.834 07:53:16 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:42:42.834 07:53:16 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:42:42.834 07:53:16 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:42:42.834 07:53:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:42.834 ************************************ 00:42:42.834 START TEST bdev_bounds 00:42:42.834 ************************************ 00:42:42.834 07:53:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:42:42.834 07:53:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=181878 00:42:42.834 07:53:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:42:42.834 Process bdevio pid: 181878 00:42:42.834 07:53:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 181878' 00:42:42.834 07:53:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 181878 00:42:42.834 07:53:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 181878 ']' 00:42:42.834 07:53:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:42.834 07:53:16 
blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:42:42.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:42.834 07:53:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:42:42.834 07:53:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:42.834 07:53:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:42:42.834 07:53:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:42:42.834 [2024-07-12 07:53:16.672969] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:42:42.834 [2024-07-12 07:53:16.673243] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181878 ] 00:42:43.093 [2024-07-12 07:53:16.836136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:43.093 [2024-07-12 07:53:16.881817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:42:43.093 [2024-07-12 07:53:16.882285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:43.093 [2024-07-12 07:53:16.882286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:42:44.031 07:53:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:42:44.031 07:53:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:42:44.031 07:53:17 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:42:44.031 I/O targets: 00:42:44.031 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:42:44.031 00:42:44.031 00:42:44.031 CUnit - A unit testing framework for C - Version 2.1-3 00:42:44.031 http://cunit.sourceforge.net/ 00:42:44.031 00:42:44.031 00:42:44.031 Suite: bdevio tests on: raid5f 00:42:44.031 Test: blockdev write read block ...passed 00:42:44.031 Test: blockdev write zeroes read block ...passed 00:42:44.031 Test: blockdev write zeroes read no split ...passed 00:42:44.031 Test: blockdev write zeroes read split ...passed 00:42:44.031 Test: blockdev write zeroes read split partial ...passed 00:42:44.031 Test: blockdev reset ...passed 00:42:44.031 Test: blockdev write read 8 blocks ...passed 00:42:44.031 Test: blockdev write read size > 128k ...passed 00:42:44.031 Test: blockdev write read invalid size ...passed 00:42:44.031 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:44.031 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:44.031 Test: blockdev write read max offset ...passed 00:42:44.031 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:44.031 Test: blockdev writev readv 8 blocks ...passed 00:42:44.031 Test: blockdev writev readv 30 x 1block ...passed 00:42:44.031 Test: blockdev writev readv block ...passed 00:42:44.031 Test: blockdev writev readv size > 128k ...passed 00:42:44.031 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:44.031 Test: blockdev comparev and writev ...passed 00:42:44.031 Test: blockdev nvme passthru rw ...passed 00:42:44.031 Test: blockdev 
nvme passthru vendor specific ...passed 00:42:44.031 Test: blockdev nvme admin passthru ...passed 00:42:44.031 Test: blockdev copy ...passed 00:42:44.031 00:42:44.031 Run Summary: Type Total Ran Passed Failed Inactive 00:42:44.031 suites 1 1 n/a 0 0 00:42:44.031 tests 23 23 23 0 0 00:42:44.031 asserts 130 130 130 0 n/a 00:42:44.031 00:42:44.031 Elapsed time = 0.343 seconds 00:42:44.031 0 00:42:44.031 07:53:17 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 181878 00:42:44.031 07:53:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 181878 ']' 00:42:44.031 07:53:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 181878 00:42:44.031 07:53:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:42:44.031 07:53:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:42:44.031 07:53:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 181878 00:42:44.031 07:53:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:42:44.031 07:53:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:42:44.031 07:53:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 181878' 00:42:44.031 killing process with pid 181878 00:42:44.031 07:53:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@965 -- # kill 181878 00:42:44.031 07:53:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # wait 181878 00:42:44.291 07:53:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:42:44.291 00:42:44.291 real 0m1.545s 00:42:44.291 user 0m3.703s 00:42:44.291 sys 0m0.370s 00:42:44.291 07:53:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:42:44.291 ************************************ 00:42:44.291 END TEST bdev_bounds 00:42:44.291 07:53:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:42:44.291 ************************************ 00:42:44.550 07:53:18 blockdev_raid5f -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:42:44.550 07:53:18 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:42:44.550 07:53:18 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:42:44.550 07:53:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:44.550 ************************************ 00:42:44.550 START TEST bdev_nbd 00:42:44.550 ************************************ 00:42:44.550 07:53:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('raid5f') 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- 
bdev/blockdev.sh@305 -- # local bdev_num=1 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=1 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0') 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('raid5f') 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=181935 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 181935 /var/tmp/spdk-nbd.sock 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 181935 ']' 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:42:44.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:42:44.551 07:53:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:42:44.551 [2024-07-12 07:53:18.275388] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
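The nbd_rpc_data_verify phase further down exports the raid5f bdev as /dev/nbd0 and round-trips 1 MiB through it with dd and cmp. A sketch condensed from the exact commands in the trace below, with only the scratch-file path moved to /tmp for illustration:

RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock

# Export the bdev as an NBD block device (requires the nbd kernel module).
"$RPC_PY" -s "$SOCK" nbd_start_disk raid5f /dev/nbd0

dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256         # 1 MiB of random data
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                          # byte-for-byte verify
rm /tmp/nbdrandtest

"$RPC_PY" -s "$SOCK" nbd_stop_disk /dev/nbd0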
00:42:44.551 [2024-07-12 07:53:18.276062] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:44.551 [2024-07-12 07:53:18.414307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:44.811 [2024-07-12 07:53:18.455156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:45.379 07:53:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:42:45.379 07:53:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:42:45.379 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:42:45.379 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:45.379 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:42:45.379 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:42:45.379 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:42:45.379 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:45.379 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:42:45.379 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:42:45.379 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:42:45.379 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:42:45.379 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:42:45.379 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:42:45.379 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:42:45.638 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:42:45.638 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:42:45.638 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:42:45.638 07:53:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:42:45.638 07:53:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:42:45.638 07:53:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:42:45.638 07:53:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:42:45.638 07:53:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:45.898 1+0 records in 00:42:45.898 1+0 records out 00:42:45.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397295 s, 10.3 MB/s 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:42:45.898 { 00:42:45.898 "nbd_device": "/dev/nbd0", 00:42:45.898 "bdev_name": "raid5f" 00:42:45.898 } 00:42:45.898 ]' 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:42:45.898 { 00:42:45.898 "nbd_device": "/dev/nbd0", 00:42:45.898 "bdev_name": "raid5f" 00:42:45.898 } 00:42:45.898 ]' 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:45.898 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:46.157 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:46.157 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:46.157 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:46.157 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:46.157 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:46.157 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:46.157 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:46.157 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:46.157 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:46.157 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:46.157 07:53:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- 
# nbd_disks_json='[]' 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:46.416 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:42:46.695 /dev/nbd0 00:42:46.695 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:46.695 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:46.695 07:53:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:42:46.695 07:53:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:42:46.695 07:53:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:42:46.695 07:53:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:42:46.695 07:53:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:42:46.695 07:53:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:42:46.695 07:53:20 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:42:46.695 07:53:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:42:46.695 07:53:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:46.695 1+0 records in 00:42:46.695 1+0 records out 00:42:46.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381947 s, 10.7 MB/s 00:42:46.695 07:53:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:46.695 07:53:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:42:46.695 07:53:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:46.695 07:53:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:42:46.695 07:53:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:42:46.695 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:46.695 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:46.695 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:46.695 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:46.695 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:42:46.960 { 00:42:46.960 "nbd_device": "/dev/nbd0", 00:42:46.960 "bdev_name": "raid5f" 00:42:46.960 } 00:42:46.960 ]' 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:42:46.960 { 00:42:46.960 "nbd_device": "/dev/nbd0", 00:42:46.960 "bdev_name": "raid5f" 00:42:46.960 } 00:42:46.960 ]' 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:42:46.960 256+0 records in 00:42:46.960 256+0 records out 00:42:46.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00902596 s, 116 MB/s 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:42:46.960 256+0 records in 00:42:46.960 256+0 records out 00:42:46.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310794 s, 33.7 MB/s 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:42:46.960 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:42:47.219 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:42:47.219 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:42:47.219 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:47.219 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:42:47.219 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:47.219 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:42:47.219 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:47.219 07:53:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:47.219 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:47.219 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:47.219 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:47.219 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:47.219 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:47.219 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:47.219 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:47.219 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:47.219 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:47.219 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:47.219 
07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:47.478 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:42:47.478 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:47.478 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:42:47.478 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:42:47.478 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:42:47.478 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:47.478 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:42:47.478 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:42:47.478 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:42:47.478 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:42:47.478 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:42:47.478 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:42:47.478 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:42:47.478 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:47.478 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:42:47.478 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:42:47.478 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:42:47.478 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:42:47.737 malloc_lvol_verify 00:42:47.737 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:42:47.996 95cbff9a-1d51-43f1-8890-ee620dc1a26e 00:42:47.996 07:53:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:42:48.255 1f644ade-2908-4bc2-a5d1-da034d411470 00:42:48.255 07:53:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:42:48.513 /dev/nbd0 00:42:48.514 07:53:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:42:48.514 mke2fs 1.46.5 (30-Dec-2021) 00:42:48.514 00:42:48.514 Filesystem too small for a journal 00:42:48.514 Discarding device blocks: 0/1024 done 00:42:48.514 Creating filesystem with 1024 4k blocks and 1024 inodes 00:42:48.514 00:42:48.514 Allocating group tables: 0/1 done 00:42:48.514 Writing inode tables: 0/1 done 00:42:48.514 Writing superblocks and filesystem accounting information: 0/1 done 00:42:48.514 00:42:48.514 07:53:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:42:48.514 07:53:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:42:48.514 07:53:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:48.514 07:53:22 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:42:48.514 07:53:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:48.514 07:53:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:42:48.514 07:53:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:48.514 07:53:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:48.772 07:53:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:48.772 07:53:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:48.772 07:53:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:48.772 07:53:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:48.772 07:53:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:48.772 07:53:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:48.772 07:53:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:48.772 07:53:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:48.772 07:53:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:42:48.772 07:53:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:42:48.772 07:53:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 181935 00:42:48.772 07:53:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 181935 ']' 00:42:48.772 07:53:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 181935 00:42:48.772 07:53:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:42:48.772 07:53:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:42:48.772 07:53:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 181935 00:42:48.772 07:53:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:42:48.772 07:53:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:42:48.772 07:53:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 181935' 00:42:48.772 killing process with pid 181935 00:42:48.772 07:53:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@965 -- # kill 181935 00:42:48.772 07:53:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # wait 181935 00:42:49.030 07:53:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:42:49.030 00:42:49.030 real 0m4.528s 00:42:49.030 user 0m6.527s 00:42:49.030 sys 0m1.406s 00:42:49.030 07:53:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:42:49.030 07:53:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:42:49.030 ************************************ 00:42:49.030 END TEST bdev_nbd 00:42:49.030 ************************************ 00:42:49.030 07:53:22 blockdev_raid5f -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:42:49.030 07:53:22 blockdev_raid5f -- bdev/blockdev.sh@764 -- # '[' raid5f = nvme ']' 00:42:49.030 07:53:22 blockdev_raid5f -- bdev/blockdev.sh@764 -- # '[' raid5f = gpt ']' 00:42:49.030 07:53:22 blockdev_raid5f -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:42:49.030 07:53:22 
blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:42:49.030 07:53:22 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:42:49.030 07:53:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:42:49.030 ************************************ 00:42:49.030 START TEST bdev_fio 00:42:49.030 ************************************ 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:42:49.030 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid5f]' 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid5f 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@347 -- # local 
'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:42:49.030 ************************************ 00:42:49.030 START TEST bdev_fio_rw_verify 00:42:49.030 ************************************ 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:42:49.030 07:53:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:42:49.288 07:53:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:42:49.288 07:53:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:42:49.288 07:53:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # break 00:42:49.288 
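(The libasan probing just traced is worth spelling out: fio has to preload the same ASAN runtime the SPDK fio plugin was linked against so the instrumented plugin can load. A condensed sketch of that logic, with paths as used in this run:)

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
for sanitizer in libasan libclang_rt.asan; do
    # the third ldd column is the resolved library path
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break
done
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio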
07:53:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:42:49.288 07:53:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:42:49.288 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:42:49.288 fio-3.35 00:42:49.288 Starting 1 thread 00:43:01.502 00:43:01.502 job_raid5f: (groupid=0, jobs=1): err= 0: pid=182157: Fri Jul 12 07:53:33 2024 00:43:01.502 read: IOPS=12.3k, BW=48.2MiB/s (50.5MB/s)(482MiB/10001msec) 00:43:01.502 slat (usec): min=17, max=291, avg=19.44, stdev= 3.00 00:43:01.502 clat (usec): min=11, max=543, avg=132.12, stdev=48.04 00:43:01.502 lat (usec): min=31, max=562, avg=151.57, stdev=48.77 00:43:01.502 clat percentiles (usec): 00:43:01.502 | 50.000th=[ 135], 99.000th=[ 243], 99.900th=[ 326], 99.990th=[ 429], 00:43:01.502 | 99.999th=[ 537] 00:43:01.502 write: IOPS=12.9k, BW=50.3MiB/s (52.8MB/s)(497MiB/9876msec); 0 zone resets 00:43:01.502 slat (usec): min=7, max=317, avg=16.04, stdev= 3.57 00:43:01.502 clat (usec): min=58, max=1744, avg=298.13, stdev=51.99 00:43:01.502 lat (usec): min=73, max=1760, avg=314.17, stdev=53.72 00:43:01.502 clat percentiles (usec): 00:43:01.502 | 50.000th=[ 302], 99.000th=[ 506], 99.900th=[ 717], 99.990th=[ 1221], 00:43:01.502 | 99.999th=[ 1729] 00:43:01.502 bw ( KiB/s): min=42320, max=54792, per=98.78%, avg=50906.11, stdev=3100.87, samples=19 00:43:01.502 iops : min=10580, max=13698, avg=12726.53, stdev=775.22, samples=19 00:43:01.502 lat (usec) : 20=0.01%, 50=0.01%, 100=15.84%, 250=39.48%, 500=44.10% 00:43:01.502 lat (usec) : 750=0.52%, 1000=0.03% 00:43:01.502 lat (msec) : 2=0.02% 00:43:01.502 cpu : usr=99.40%, sys=0.52%, ctx=153, majf=0, minf=11808 00:43:01.502 IO depths : 1=7.6%, 2=19.7%, 4=55.3%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:01.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:01.502 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:01.502 issued rwts: total=123326,127238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:01.502 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:01.502 00:43:01.502 Run status group 0 (all jobs): 00:43:01.502 READ: bw=48.2MiB/s (50.5MB/s), 48.2MiB/s-48.2MiB/s (50.5MB/s-50.5MB/s), io=482MiB (505MB), run=10001-10001msec 00:43:01.502 WRITE: bw=50.3MiB/s (52.8MB/s), 50.3MiB/s-50.3MiB/s (52.8MB/s-52.8MB/s), io=497MiB (521MB), run=9876-9876msec 00:43:01.502 ----------------------------------------------------- 00:43:01.502 Suppressions used: 00:43:01.502 count bytes template 00:43:01.502 1 7 /usr/src/fio/parse.c 00:43:01.502 95 9120 /usr/src/fio/iolog.c 00:43:01.502 1 904 libcrypto.so 00:43:01.502 ----------------------------------------------------- 00:43:01.502 00:43:01.502 ************************************ 00:43:01.502 END TEST bdev_fio_rw_verify 00:43:01.502 ************************************ 00:43:01.502 00:43:01.502 real 0m11.376s 00:43:01.502 user 0m12.311s 00:43:01.502 sys 0m0.668s 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:43:01.502 07:53:34 
blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type= 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "6d1c3ab3-ee4a-4a5b-bfb8-d772a44b8bc7"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6d1c3ab3-ee4a-4a5b-bfb8-d772a44b8bc7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "6d1c3ab3-ee4a-4a5b-bfb8-d772a44b8bc7",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "4d079d98-13f3-4b5b-bfa8-b38d733b2667",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "c14edaef-dc9d-424f-8567-e86ea85eb096",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "c9c75cf6-e564-4e8d-bafb-565d008fe4ec",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:43:01.502 /home/vagrant/spdk_repo/spdk 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:43:01.502 00:43:01.502 real 0m11.600s 00:43:01.502 user 0m12.438s 00:43:01.502 sys 0m0.765s 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:43:01.502 07:53:34 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:43:01.502 ************************************ 00:43:01.502 END TEST bdev_fio 00:43:01.502 ************************************ 00:43:01.502 07:53:34 blockdev_raid5f -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:43:01.502 07:53:34 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:43:01.502 07:53:34 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:43:01.502 07:53:34 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:43:01.502 07:53:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:01.502 ************************************ 00:43:01.502 START TEST bdev_verify 00:43:01.502 ************************************ 00:43:01.502 07:53:34 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:43:01.502 [2024-07-12 07:53:34.580224] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:43:01.502 [2024-07-12 07:53:34.580761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182313 ] 00:43:01.502 [2024-07-12 07:53:34.740901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:01.502 [2024-07-12 07:53:34.795546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:43:01.502 [2024-07-12 07:53:34.795575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:01.502 Running I/O for 5 seconds... 
00:43:06.771
00:43:06.771 Latency(us)
00:43:06.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:06.771 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:43:06.771 Verification LBA range: start 0x0 length 0x2000
00:43:06.771 raid5f : 5.01 6630.09 25.90 0.00 0.00 28943.24 205.78 22719.15
00:43:06.771 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:43:06.771 Verification LBA range: start 0x2000 length 0x2000
00:43:06.771 raid5f : 5.02 5378.14 21.01 0.00 0.00 35792.90 140.43 27712.37
00:43:06.771 ===================================================================================================================
00:43:06.771 Total : 12008.23 46.91 0.00 0.00 32014.31 140.43 27712.37
00:43:06.771 ************************************
00:43:06.771 END TEST bdev_verify
00:43:06.771 ************************************
00:43:06.771
00:43:06.771 real 0m5.806s
00:43:06.771 user 0m10.818s
00:43:06.771 sys 0m0.233s
07:53:40 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable
00:43:06.771 07:53:40 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:43:06.771 07:53:40 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:43:06.771 07:53:40 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']'
00:43:06.771 07:53:40 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable
00:43:06.771 07:53:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:43:06.771 ************************************
00:43:06.771 START TEST bdev_verify_big_io
00:43:06.771 ************************************
00:43:06.771 07:53:40 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:43:06.771 [2024-07-12 07:53:40.445114] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:43:06.771 [2024-07-12 07:53:40.445643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182404 ]
00:43:06.771 [2024-07-12 07:53:40.601833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:43:06.771 [2024-07-12 07:53:40.651707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:43:06.771 [2024-07-12 07:53:40.651714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:43:07.030 Running I/O for 5 seconds...
00:43:12.302
00:43:12.302 Latency(us)
00:43:12.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:12.302 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:43:12.302 Verification LBA range: start 0x0 length 0x200
00:43:12.302 raid5f : 5.25 422.75 26.42 0.00 0.00 7325308.74 164.82 365503.63
00:43:12.302 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:43:12.302 Verification LBA range: start 0x200 length 0x200
00:43:12.302 raid5f : 5.29 348.09 21.76 0.00 0.00 8897144.67 190.17 421427.69
00:43:12.302 ===================================================================================================================
00:43:12.302 Total : 770.84 48.18 0.00 0.00 8037492.46 164.82 421427.69
00:43:12.561 ************************************
00:43:12.561 END TEST bdev_verify_big_io
00:43:12.561 ************************************
00:43:12.561
00:43:12.561 real 0m6.050s
00:43:12.561 user 0m11.287s
00:43:12.561 sys 0m0.269s
07:53:46 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable
00:43:12.561 07:53:46 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:43:12.821 07:53:46 blockdev_raid5f -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:43:12.821 07:53:46 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:43:12.821 07:53:46 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable
00:43:12.821 07:53:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:43:12.821 ************************************
00:43:12.821 START TEST bdev_write_zeroes
00:43:12.821 ************************************
00:43:12.821 07:53:46 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:43:12.821 [2024-07-12 07:53:46.553178] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:43:12.821 [2024-07-12 07:53:46.553602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182499 ]
00:43:12.821 [2024-07-12 07:53:46.697667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:13.079 [2024-07-12 07:53:46.739219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:43:13.079 Running I/O for 1 seconds...
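(Same harness, different workload: relative to the verify pass, only the workload and runtime change, and the -C/-m pair is dropped so a single core drives the bdev, per the command logged above:)

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1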
00:43:14.455
00:43:14.455 Latency(us)
00:43:14.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:14.455 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:43:14.455 raid5f : 1.00 30104.55 117.60 0.00 0.00 4234.91 1232.70 4930.80
00:43:14.455 ===================================================================================================================
00:43:14.455 Total : 30104.55 117.60 0.00 0.00 4234.91 1232.70 4930.80
00:43:14.455
00:43:14.455 real 0m1.728s
00:43:14.455 user 0m1.390s
00:43:14.455 sys 0m0.217s
************************************
00:43:14.455 END TEST bdev_write_zeroes
00:43:14.455 ************************************
00:43:14.455 07:53:48 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable
00:43:14.455 07:53:48 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:43:14.455 07:53:48 blockdev_raid5f -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:43:14.455 07:53:48 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:43:14.455 07:53:48 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable
00:43:14.455 07:53:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:43:14.455 ************************************
00:43:14.455 START TEST bdev_json_nonenclosed
00:43:14.455 ************************************
00:43:14.455 07:53:48 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:43:14.715 [2024-07-12 07:53:48.365126] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:43:14.715 [2024-07-12 07:53:48.365428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182545 ]
00:43:14.715 [2024-07-12 07:53:48.522410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:14.715 [2024-07-12 07:53:48.576031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:43:14.715 [2024-07-12 07:53:48.576161] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:43:14.715 [2024-07-12 07:53:48.576200] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:43:14.715 [2024-07-12 07:53:48.576228] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:14.974 ************************************ 00:43:14.974 END TEST bdev_json_nonenclosed 00:43:14.974 ************************************ 00:43:14.974 00:43:14.974 real 0m0.419s 00:43:14.974 user 0m0.165s 00:43:14.974 sys 0m0.154s 00:43:14.974 07:53:48 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:43:14.974 07:53:48 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:43:14.974 07:53:48 blockdev_raid5f -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:14.974 07:53:48 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:43:14.974 07:53:48 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:43:14.974 07:53:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:14.974 ************************************ 00:43:14.974 START TEST bdev_json_nonarray 00:43:14.974 ************************************ 00:43:14.974 07:53:48 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:15.233 [2024-07-12 07:53:48.856772] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:43:15.233 [2024-07-12 07:53:48.857091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182567 ] 00:43:15.233 [2024-07-12 07:53:49.010783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:15.233 [2024-07-12 07:53:49.064038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:15.233 [2024-07-12 07:53:49.064165] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
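(Both failures above are intentional: nonenclosed.json and nonarray.json violate the config schema on purpose, and each test passes when bdevperf rejects the file. For contrast, a well-formed config is a single JSON object whose "subsystems" member is an array — a minimal hypothetical skeleton, not the suite's generated bdev.json:)

cat > /tmp/minimal_bdev_conf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": []
    }
  ]
}
EOF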
00:43:15.233 [2024-07-12 07:53:49.064222] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:43:15.233 [2024-07-12 07:53:49.064249] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:15.493 ************************************ 00:43:15.493 END TEST bdev_json_nonarray 00:43:15.493 ************************************ 00:43:15.493 00:43:15.493 real 0m0.411s 00:43:15.493 user 0m0.152s 00:43:15.493 sys 0m0.159s 00:43:15.493 07:53:49 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:43:15.493 07:53:49 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:43:15.493 07:53:49 blockdev_raid5f -- bdev/blockdev.sh@787 -- # [[ raid5f == bdev ]] 00:43:15.493 07:53:49 blockdev_raid5f -- bdev/blockdev.sh@794 -- # [[ raid5f == gpt ]] 00:43:15.493 07:53:49 blockdev_raid5f -- bdev/blockdev.sh@798 -- # [[ raid5f == crypto_sw ]] 00:43:15.493 07:53:49 blockdev_raid5f -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:43:15.493 07:53:49 blockdev_raid5f -- bdev/blockdev.sh@811 -- # cleanup 00:43:15.493 07:53:49 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:43:15.493 07:53:49 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:15.493 07:53:49 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:43:15.493 07:53:49 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:43:15.493 07:53:49 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:43:15.493 07:53:49 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:43:15.493 00:43:15.493 real 0m35.304s 00:43:15.493 user 0m49.053s 00:43:15.493 sys 0m4.695s 00:43:15.493 07:53:49 blockdev_raid5f -- common/autotest_common.sh@1122 -- # xtrace_disable 00:43:15.493 07:53:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:15.493 ************************************ 00:43:15.493 END TEST blockdev_raid5f 00:43:15.493 ************************************ 00:43:15.493 07:53:49 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:43:15.493 07:53:49 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:43:15.493 07:53:49 -- common/autotest_common.sh@720 -- # xtrace_disable 00:43:15.493 07:53:49 -- common/autotest_common.sh@10 -- # set +x 00:43:15.493 07:53:49 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:43:15.493 07:53:49 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:43:15.493 07:53:49 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:43:15.493 07:53:49 -- common/autotest_common.sh@10 -- # set +x 00:43:18.031 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:43:18.031 Waiting for block devices as requested 00:43:18.031 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:43:18.601 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:43:18.601 Cleaning 00:43:18.601 Removing: /var/run/dpdk/spdk0/config 00:43:18.601 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:18.601 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:18.601 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:18.601 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:18.601 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:18.601 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:18.601 Removing: /dev/shm/spdk_tgt_trace.pid123211 
00:43:18.601 Removing: /var/run/dpdk/spdk0 00:43:18.601 Removing: /var/run/dpdk/spdk_pid123024 00:43:18.601 Removing: /var/run/dpdk/spdk_pid123211 00:43:18.601 Removing: /var/run/dpdk/spdk_pid123448 00:43:18.601 Removing: /var/run/dpdk/spdk_pid123541 00:43:18.601 Removing: /var/run/dpdk/spdk_pid123583 00:43:18.601 Removing: /var/run/dpdk/spdk_pid123705 00:43:18.602 Removing: /var/run/dpdk/spdk_pid123728 00:43:18.602 Removing: /var/run/dpdk/spdk_pid123874 00:43:18.602 Removing: /var/run/dpdk/spdk_pid124118 00:43:18.602 Removing: /var/run/dpdk/spdk_pid124280 00:43:18.602 Removing: /var/run/dpdk/spdk_pid124368 00:43:18.602 Removing: /var/run/dpdk/spdk_pid124455 00:43:18.602 Removing: /var/run/dpdk/spdk_pid124557 00:43:18.602 Removing: /var/run/dpdk/spdk_pid124647 00:43:18.602 Removing: /var/run/dpdk/spdk_pid124697 00:43:18.602 Removing: /var/run/dpdk/spdk_pid124741 00:43:18.602 Removing: /var/run/dpdk/spdk_pid124816 00:43:18.602 Removing: /var/run/dpdk/spdk_pid124942 00:43:18.602 Removing: /var/run/dpdk/spdk_pid125454 00:43:18.602 Removing: /var/run/dpdk/spdk_pid125515 00:43:18.602 Removing: /var/run/dpdk/spdk_pid125577 00:43:18.602 Removing: /var/run/dpdk/spdk_pid125598 00:43:18.602 Removing: /var/run/dpdk/spdk_pid125672 00:43:18.602 Removing: /var/run/dpdk/spdk_pid125693 00:43:18.602 Removing: /var/run/dpdk/spdk_pid125776 00:43:18.602 Removing: /var/run/dpdk/spdk_pid125797 00:43:18.602 Removing: /var/run/dpdk/spdk_pid125848 00:43:18.602 Removing: /var/run/dpdk/spdk_pid125871 00:43:18.602 Removing: /var/run/dpdk/spdk_pid125923 00:43:18.602 Removing: /var/run/dpdk/spdk_pid125946 00:43:18.602 Removing: /var/run/dpdk/spdk_pid126087 00:43:18.602 Removing: /var/run/dpdk/spdk_pid126137 00:43:18.602 Removing: /var/run/dpdk/spdk_pid126173 00:43:18.602 Removing: /var/run/dpdk/spdk_pid126253 00:43:18.602 Removing: /var/run/dpdk/spdk_pid126323 00:43:18.602 Removing: /var/run/dpdk/spdk_pid126355 00:43:18.602 Removing: /var/run/dpdk/spdk_pid126438 00:43:18.602 Removing: /var/run/dpdk/spdk_pid126489 00:43:18.602 Removing: /var/run/dpdk/spdk_pid126530 00:43:18.602 Removing: /var/run/dpdk/spdk_pid126581 00:43:18.602 Removing: /var/run/dpdk/spdk_pid126632 00:43:18.602 Removing: /var/run/dpdk/spdk_pid126671 00:43:18.862 Removing: /var/run/dpdk/spdk_pid126722 00:43:18.862 Removing: /var/run/dpdk/spdk_pid126768 00:43:18.862 Removing: /var/run/dpdk/spdk_pid126821 00:43:18.862 Removing: /var/run/dpdk/spdk_pid126866 00:43:18.862 Removing: /var/run/dpdk/spdk_pid126909 00:43:18.862 Removing: /var/run/dpdk/spdk_pid126960 00:43:18.862 Removing: /var/run/dpdk/spdk_pid126999 00:43:18.862 Removing: /var/run/dpdk/spdk_pid127052 00:43:18.862 Removing: /var/run/dpdk/spdk_pid127104 00:43:18.862 Removing: /var/run/dpdk/spdk_pid127149 00:43:18.862 Removing: /var/run/dpdk/spdk_pid127193 00:43:18.862 Removing: /var/run/dpdk/spdk_pid127242 00:43:18.862 Removing: /var/run/dpdk/spdk_pid127296 00:43:18.862 Removing: /var/run/dpdk/spdk_pid127349 00:43:18.862 Removing: /var/run/dpdk/spdk_pid127388 00:43:18.862 Removing: /var/run/dpdk/spdk_pid127477 00:43:18.862 Removing: /var/run/dpdk/spdk_pid127585 00:43:18.862 Removing: /var/run/dpdk/spdk_pid127756 00:43:18.862 Removing: /var/run/dpdk/spdk_pid127819 00:43:18.862 Removing: /var/run/dpdk/spdk_pid127863 00:43:18.862 Removing: /var/run/dpdk/spdk_pid129054 00:43:18.862 Removing: /var/run/dpdk/spdk_pid129258 00:43:18.862 Removing: /var/run/dpdk/spdk_pid129446 00:43:18.862 Removing: /var/run/dpdk/spdk_pid129552 00:43:18.862 Removing: /var/run/dpdk/spdk_pid129676 00:43:18.862 Removing: 
/var/run/dpdk/spdk_pid129728 00:43:18.862 Removing: /var/run/dpdk/spdk_pid129761 00:43:18.862 Removing: /var/run/dpdk/spdk_pid129790 00:43:18.862 Removing: /var/run/dpdk/spdk_pid130253 00:43:18.862 Removing: /var/run/dpdk/spdk_pid130343 00:43:18.862 Removing: /var/run/dpdk/spdk_pid130440 00:43:18.862 Removing: /var/run/dpdk/spdk_pid130491 00:43:18.862 Removing: /var/run/dpdk/spdk_pid131752 00:43:18.862 Removing: /var/run/dpdk/spdk_pid132115 00:43:18.862 Removing: /var/run/dpdk/spdk_pid132294 00:43:18.862 Removing: /var/run/dpdk/spdk_pid133209 00:43:18.862 Removing: /var/run/dpdk/spdk_pid133574 00:43:18.862 Removing: /var/run/dpdk/spdk_pid133759 00:43:18.862 Removing: /var/run/dpdk/spdk_pid134678 00:43:18.862 Removing: /var/run/dpdk/spdk_pid135204 00:43:18.862 Removing: /var/run/dpdk/spdk_pid135384 00:43:18.862 Removing: /var/run/dpdk/spdk_pid137504 00:43:18.862 Removing: /var/run/dpdk/spdk_pid137967 00:43:18.862 Removing: /var/run/dpdk/spdk_pid138160 00:43:18.862 Removing: /var/run/dpdk/spdk_pid140265 00:43:18.862 Removing: /var/run/dpdk/spdk_pid140741 00:43:18.862 Removing: /var/run/dpdk/spdk_pid140934 00:43:18.862 Removing: /var/run/dpdk/spdk_pid143054 00:43:18.862 Removing: /var/run/dpdk/spdk_pid143790 00:43:18.862 Removing: /var/run/dpdk/spdk_pid143985 00:43:18.862 Removing: /var/run/dpdk/spdk_pid146336 00:43:18.862 Removing: /var/run/dpdk/spdk_pid146872 00:43:18.862 Removing: /var/run/dpdk/spdk_pid147070 00:43:18.862 Removing: /var/run/dpdk/spdk_pid149450 00:43:18.862 Removing: /var/run/dpdk/spdk_pid149980 00:43:18.862 Removing: /var/run/dpdk/spdk_pid150183 00:43:18.862 Removing: /var/run/dpdk/spdk_pid152550 00:43:18.862 Removing: /var/run/dpdk/spdk_pid153395 00:43:18.862 Removing: /var/run/dpdk/spdk_pid153593 00:43:18.862 Removing: /var/run/dpdk/spdk_pid153794 00:43:18.862 Removing: /var/run/dpdk/spdk_pid154345 00:43:19.122 Removing: /var/run/dpdk/spdk_pid155250 00:43:19.122 Removing: /var/run/dpdk/spdk_pid155723 00:43:19.122 Removing: /var/run/dpdk/spdk_pid156593 00:43:19.122 Removing: /var/run/dpdk/spdk_pid157129 00:43:19.122 Removing: /var/run/dpdk/spdk_pid158050 00:43:19.122 Removing: /var/run/dpdk/spdk_pid158573 00:43:19.122 Removing: /var/run/dpdk/spdk_pid161327 00:43:19.122 Removing: /var/run/dpdk/spdk_pid162039 00:43:19.122 Removing: /var/run/dpdk/spdk_pid162570 00:43:19.122 Removing: /var/run/dpdk/spdk_pid165550 00:43:19.122 Removing: /var/run/dpdk/spdk_pid166378 00:43:19.122 Removing: /var/run/dpdk/spdk_pid166987 00:43:19.122 Removing: /var/run/dpdk/spdk_pid168384 00:43:19.122 Removing: /var/run/dpdk/spdk_pid168894 00:43:19.122 Removing: /var/run/dpdk/spdk_pid170107 00:43:19.122 Removing: /var/run/dpdk/spdk_pid170608 00:43:19.122 Removing: /var/run/dpdk/spdk_pid171812 00:43:19.122 Removing: /var/run/dpdk/spdk_pid172308 00:43:19.122 Removing: /var/run/dpdk/spdk_pid173129 00:43:19.122 Removing: /var/run/dpdk/spdk_pid173165 00:43:19.122 Removing: /var/run/dpdk/spdk_pid173208 00:43:19.122 Removing: /var/run/dpdk/spdk_pid173256 00:43:19.122 Removing: /var/run/dpdk/spdk_pid173378 00:43:19.122 Removing: /var/run/dpdk/spdk_pid173516 00:43:19.122 Removing: /var/run/dpdk/spdk_pid173728 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174015 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174030 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174071 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174095 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174104 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174126 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174144 00:43:19.122 Removing: 
/var/run/dpdk/spdk_pid174160 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174180 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174199 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174209 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174229 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174248 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174265 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174287 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174295 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174316 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174336 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174351 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174371 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174400 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174424 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174455 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174527 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174560 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174575 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174609 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174622 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174632 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174683 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174698 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174736 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174741 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174759 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174768 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174781 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174794 00:43:19.122 Removing: /var/run/dpdk/spdk_pid174803 00:43:19.392 Removing: /var/run/dpdk/spdk_pid174816 00:43:19.392 Removing: /var/run/dpdk/spdk_pid174855 00:43:19.392 Removing: /var/run/dpdk/spdk_pid174881 00:43:19.392 Removing: /var/run/dpdk/spdk_pid174901 00:43:19.392 Removing: /var/run/dpdk/spdk_pid174937 00:43:19.392 Removing: /var/run/dpdk/spdk_pid174945 00:43:19.392 Removing: /var/run/dpdk/spdk_pid174961 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175006 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175024 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175056 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175067 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175082 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175092 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175104 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175120 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175126 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175142 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175222 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175273 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175374 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175397 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175437 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175488 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175516 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175537 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175559 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175593 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175614 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175692 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175738 00:43:19.392 Removing: /var/run/dpdk/spdk_pid175783 00:43:19.392 Removing: /var/run/dpdk/spdk_pid176047 00:43:19.392 Removing: /var/run/dpdk/spdk_pid176172 00:43:19.392 Removing: /var/run/dpdk/spdk_pid176198 00:43:19.392 Removing: /var/run/dpdk/spdk_pid176295 00:43:19.392 Removing: /var/run/dpdk/spdk_pid176361 00:43:19.392 Removing: /var/run/dpdk/spdk_pid176383 00:43:19.392 Removing: 
/var/run/dpdk/spdk_pid176622 00:43:19.392 Removing: /var/run/dpdk/spdk_pid176711 00:43:19.392 Removing: /var/run/dpdk/spdk_pid176808 00:43:19.392 Removing: /var/run/dpdk/spdk_pid176852 00:43:19.392 Removing: /var/run/dpdk/spdk_pid176874 00:43:19.392 Removing: /var/run/dpdk/spdk_pid176960 00:43:19.392 Removing: /var/run/dpdk/spdk_pid177385 00:43:19.392 Removing: /var/run/dpdk/spdk_pid177408 00:43:19.392 Removing: /var/run/dpdk/spdk_pid177704 00:43:19.392 Removing: /var/run/dpdk/spdk_pid177795 00:43:19.392 Removing: /var/run/dpdk/spdk_pid177886 00:43:19.392 Removing: /var/run/dpdk/spdk_pid177937 00:43:19.392 Removing: /var/run/dpdk/spdk_pid177960 00:43:19.392 Removing: /var/run/dpdk/spdk_pid177991 00:43:19.392 Removing: /var/run/dpdk/spdk_pid179307 00:43:19.392 Removing: /var/run/dpdk/spdk_pid179429 00:43:19.392 Removing: /var/run/dpdk/spdk_pid179433 00:43:19.392 Removing: /var/run/dpdk/spdk_pid179465 00:43:19.392 Removing: /var/run/dpdk/spdk_pid179967 00:43:19.392 Removing: /var/run/dpdk/spdk_pid180064 00:43:19.392 Removing: /var/run/dpdk/spdk_pid180701 00:43:19.392 Removing: /var/run/dpdk/spdk_pid181804 00:43:19.392 Removing: /var/run/dpdk/spdk_pid181846 00:43:19.392 Removing: /var/run/dpdk/spdk_pid181878 00:43:19.392 Removing: /var/run/dpdk/spdk_pid182143 00:43:19.392 Removing: /var/run/dpdk/spdk_pid182313 00:43:19.392 Removing: /var/run/dpdk/spdk_pid182404 00:43:19.392 Removing: /var/run/dpdk/spdk_pid182499 00:43:19.654 Removing: /var/run/dpdk/spdk_pid182545 00:43:19.654 Removing: /var/run/dpdk/spdk_pid182567 00:43:19.654 Clean 00:43:19.654 07:53:53 -- common/autotest_common.sh@1447 -- # return 0 00:43:19.654 07:53:53 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:43:19.654 07:53:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:19.654 07:53:53 -- common/autotest_common.sh@10 -- # set +x 00:43:19.654 07:53:53 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:43:19.654 07:53:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:19.654 07:53:53 -- common/autotest_common.sh@10 -- # set +x 00:43:19.654 07:53:53 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:43:19.912 07:53:53 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:43:19.912 07:53:53 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:43:19.912 07:53:53 -- spdk/autotest.sh@391 -- # hash lcov 00:43:19.912 07:53:53 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:43:19.912 07:53:53 -- spdk/autotest.sh@393 -- # hostname 00:43:19.912 07:53:53 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:43:19.912 geninfo: WARNING: invalid characters removed from testname! 
00:44:06.591 07:54:33 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:44:06.591 07:54:38 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:44:08.002 07:54:41 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:44:11.290 07:54:44 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:44:13.828 07:54:47 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:44:17.119 07:54:50 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:44:19.657 07:54:53 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:44:19.657 07:54:53 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:19.657 07:54:53 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:44:19.657 07:54:53 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:19.657 07:54:53 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:19.657 07:54:53 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:19.657 07:54:53 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:19.657 07:54:53 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:44:19.657 07:54:53 -- paths/export.sh@5 -- $ export PATH
00:44:19.657 07:54:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:44:19.657 07:54:53 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:44:19.657 07:54:53 -- common/autobuild_common.sh@437 -- $ date +%s
00:44:19.657 07:54:53 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720770893.XXXXXX
00:44:19.657 07:54:53 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720770893.QBZNJX
00:44:19.657 07:54:53 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:44:19.657 07:54:53 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']'
00:44:19.657 07:54:53 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:44:19.657 07:54:53 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:44:19.657 07:54:53 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:44:19.657 07:54:53 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:44:19.657 07:54:53 -- common/autobuild_common.sh@453 -- $ get_config_params
00:44:19.657 07:54:53 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:44:19.657 07:54:53 -- common/autotest_common.sh@10 -- $ set +x
00:44:19.657 07:54:53 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:44:19.657 07:54:53 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:44:19.657 07:54:53 -- pm/common@17 -- $ local monitor
00:44:19.657 07:54:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:44:19.657 07:54:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:44:19.657 07:54:53 -- pm/common@21 -- $ date +%s
00:44:19.657 07:54:53 -- pm/common@25 -- $ sleep 1
00:44:19.657 07:54:53 -- pm/common@21 -- $ date +%s
00:44:19.657 07:54:53 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720770893
00:44:19.657 07:54:53 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720770893
00:44:19.657 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720770893_collect-vmstat.pm.log
00:44:19.657 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720770893_collect-cpu-load.pm.log
00:44:20.594 07:54:54 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:44:20.594 07:54:54 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:44:20.594 07:54:54 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:44:20.594 07:54:54 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:44:20.594 07:54:54 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:44:20.594 07:54:54 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:44:20.594 07:54:54 -- spdk/autopackage.sh@23 -- $ timing_enter build_release
00:44:20.594 07:54:54 -- common/autotest_common.sh@720 -- $ xtrace_disable
00:44:20.594 07:54:54 -- common/autotest_common.sh@10 -- $ set +x
00:44:20.594 07:54:54 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]]
00:44:20.594 07:54:54 -- spdk/autopackage.sh@36 -- $ [[ -n v22.11.4 ]]
00:44:20.594 07:54:54 -- spdk/autopackage.sh@36 -- $ [[ -e /tmp/spdk-ld-path ]]
00:44:20.594 07:54:54 -- spdk/autopackage.sh@37 -- $ source /tmp/spdk-ld-path
00:44:20.594 07:54:54 -- tmp/spdk-ld-path@1 -- $ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:44:20.594 07:54:54 -- tmp/spdk-ld-path@1 -- $ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:44:20.594 07:54:54 -- tmp/spdk-ld-path@2 -- $ export PKG_CONFIG_PATH=
00:44:20.594 07:54:54 -- tmp/spdk-ld-path@2 -- $ PKG_CONFIG_PATH=
00:44:20.594 07:54:54 -- spdk/autopackage.sh@40 -- $ get_config_params
00:44:20.594 07:54:54 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g
00:44:20.594 07:54:54 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:44:20.594 07:54:54 -- common/autotest_common.sh@10 -- $ set +x
00:44:20.853 07:54:54 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:44:20.853 07:54:54 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --enable-lto --disable-unit-tests
00:44:20.853 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...
00:44:20.853 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib
00:44:20.853 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include
00:44:20.853 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:44:21.422 Using 'verbs' RDMA provider
00:44:37.243 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:44:49.453 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:44:49.453 Creating mk/config.mk...done.
00:44:49.453 Creating mk/cc.flags.mk...done.
00:44:49.453 Type 'make' to build.
00:44:49.453 07:55:23 -- spdk/autopackage.sh@43 -- $ make -j10
00:44:50.021 make[1]: Nothing to be done for 'all'.
00:44:50.280 CC lib/log/log.o
00:44:50.280 CC lib/log/log_flags.o
00:44:50.280 CC lib/log/log_deprecated.o
00:44:50.280 CC lib/ut_mock/mock.o
00:44:50.280 CC lib/ut/ut.o
00:44:50.540 LIB libspdk_ut_mock.a
00:44:50.540 LIB libspdk_ut.a
00:44:50.540 LIB libspdk_log.a
00:44:50.799 CC lib/util/base64.o
00:44:50.799 CC lib/util/cpuset.o
00:44:50.799 CC lib/util/bit_array.o
00:44:50.799 CC lib/util/crc16.o
00:44:50.799 CXX lib/trace_parser/trace.o
00:44:50.799 CC lib/dma/dma.o
00:44:50.799 CC lib/util/crc32.o
00:44:50.799 CC lib/util/crc32c.o
00:44:50.799 CC lib/ioat/ioat.o
00:44:51.058 CC lib/vfio_user/host/vfio_user_pci.o
00:44:51.058 CC lib/vfio_user/host/vfio_user.o
00:44:51.058 CC lib/util/crc32_ieee.o
00:44:51.058 CC lib/util/crc64.o
00:44:51.058 CC lib/util/dif.o
00:44:51.058 LIB libspdk_dma.a
00:44:51.058 CC lib/util/fd.o
00:44:51.058 CC lib/util/file.o
00:44:51.058 LIB libspdk_ioat.a
00:44:51.058 CC lib/util/hexlify.o
00:44:51.058 CC lib/util/iov.o
00:44:51.058 CC lib/util/math.o
00:44:51.058 CC lib/util/pipe.o
00:44:51.058 CC lib/util/strerror_tls.o
00:44:51.317 LIB libspdk_vfio_user.a
00:44:51.317 CC lib/util/string.o
00:44:51.317 CC lib/util/uuid.o
00:44:51.317 CC lib/util/fd_group.o
00:44:51.317 CC lib/util/xor.o
00:44:51.317 CC lib/util/zipf.o
00:44:51.317 LIB libspdk_util.a
00:44:51.575 LIB libspdk_trace_parser.a
00:44:51.575 CC lib/vmd/vmd.o
00:44:51.575 CC lib/vmd/led.o
00:44:51.575 CC lib/conf/conf.o
00:44:51.575 CC lib/rdma/rdma_verbs.o
00:44:51.575 CC lib/env_dpdk/pci.o
00:44:51.575 CC lib/env_dpdk/memory.o
00:44:51.575 CC lib/rdma/common.o
00:44:51.575 CC lib/json/json_parse.o
00:44:51.576 CC lib/env_dpdk/env.o
00:44:51.576 CC lib/idxd/idxd.o
00:44:51.576 CC lib/json/json_util.o
00:44:51.834 LIB libspdk_conf.a
00:44:51.834 CC lib/idxd/idxd_user.o
00:44:51.834 CC lib/env_dpdk/init.o
00:44:51.834 LIB libspdk_rdma.a
00:44:51.834 CC lib/env_dpdk/threads.o
00:44:51.834 CC lib/json/json_write.o
00:44:51.834 CC lib/env_dpdk/pci_ioat.o
00:44:51.834 CC lib/env_dpdk/pci_virtio.o
00:44:51.834 CC lib/env_dpdk/pci_vmd.o
00:44:51.834 LIB libspdk_vmd.a
00:44:51.834 CC lib/env_dpdk/pci_idxd.o
00:44:51.834 LIB libspdk_idxd.a
00:44:51.834 CC lib/env_dpdk/pci_event.o
00:44:51.834 CC lib/env_dpdk/sigbus_handler.o
00:44:51.834 CC lib/env_dpdk/pci_dpdk.o
00:44:51.834 CC lib/env_dpdk/pci_dpdk_2207.o
00:44:51.834 CC lib/env_dpdk/pci_dpdk_2211.o
00:44:51.834 LIB libspdk_json.a
00:44:52.093 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:44:52.093 CC lib/jsonrpc/jsonrpc_server.o
00:44:52.093 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:44:52.093 CC lib/jsonrpc/jsonrpc_client.o
00:44:52.350 LIB libspdk_jsonrpc.a
00:44:52.350 LIB libspdk_env_dpdk.a
00:44:52.609 CC lib/rpc/rpc.o
00:44:52.867 LIB libspdk_rpc.a
00:44:52.867 CC lib/trace/trace_flags.o
00:44:52.867 CC lib/keyring/keyring_rpc.o
00:44:52.867 CC lib/keyring/keyring.o
00:44:52.867 CC lib/trace/trace.o
00:44:52.867 CC lib/trace/trace_rpc.o
00:44:52.867 CC lib/notify/notify.o
00:44:52.867 CC lib/notify/notify_rpc.o
00:44:53.126 LIB libspdk_keyring.a
00:44:53.126 LIB libspdk_notify.a
00:44:53.126 LIB libspdk_trace.a
00:44:53.385 CC lib/thread/thread.o
00:44:53.385 CC lib/thread/iobuf.o
00:44:53.385 CC lib/sock/sock_rpc.o
00:44:53.385 CC lib/sock/sock.o
00:44:53.953 LIB libspdk_sock.a
00:44:54.212 CC lib/nvme/nvme_ctrlr_cmd.o
00:44:54.212 CC lib/nvme/nvme_ns_cmd.o
00:44:54.212 CC lib/nvme/nvme_ctrlr.o
00:44:54.212 CC lib/nvme/nvme_fabric.o
00:44:54.212 CC lib/nvme/nvme_ns.o
00:44:54.212 CC lib/nvme/nvme_qpair.o
00:44:54.212 CC lib/nvme/nvme_pcie_common.o
00:44:54.212 CC lib/nvme/nvme_pcie.o
00:44:54.212 CC lib/nvme/nvme.o
00:44:54.212 LIB libspdk_thread.a
00:44:54.212 CC lib/nvme/nvme_quirks.o
00:44:54.779 CC lib/nvme/nvme_transport.o
00:44:54.779 CC lib/nvme/nvme_discovery.o
00:44:54.779 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:44:54.779 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:44:54.779 CC lib/nvme/nvme_tcp.o
00:44:54.779 CC lib/nvme/nvme_opal.o
00:44:54.779 CC lib/nvme/nvme_io_msg.o
00:44:54.779 CC lib/nvme/nvme_poll_group.o
00:44:55.037 CC lib/accel/accel.o
00:44:55.037 CC lib/accel/accel_rpc.o
00:44:55.037 CC lib/accel/accel_sw.o
00:44:55.037 CC lib/nvme/nvme_zns.o
00:44:55.037 CC lib/nvme/nvme_stubs.o
00:44:55.037 CC lib/nvme/nvme_auth.o
00:44:55.037 CC lib/nvme/nvme_cuse.o
00:44:55.037 CC lib/nvme/nvme_rdma.o
00:44:55.295 CC lib/blob/blobstore.o
00:44:55.295 CC lib/init/json_config.o
00:44:55.295 CC lib/virtio/virtio.o
00:44:55.295 LIB libspdk_accel.a
00:44:55.295 CC lib/virtio/virtio_vhost_user.o
00:44:55.295 CC lib/virtio/virtio_vfio_user.o
00:44:55.295 CC lib/virtio/virtio_pci.o
00:44:55.295 CC lib/init/subsystem.o
00:44:55.554 CC lib/blob/request.o
00:44:55.554 CC lib/init/subsystem_rpc.o
00:44:55.554 CC lib/init/rpc.o
00:44:55.554 CC lib/blob/zeroes.o
00:44:55.554 CC lib/blob/blob_bs_dev.o
00:44:55.554 LIB libspdk_virtio.a
00:44:55.554 LIB libspdk_init.a
00:44:55.554 CC lib/bdev/bdev.o
00:44:55.554 CC lib/bdev/bdev_rpc.o
00:44:55.554 CC lib/bdev/bdev_zone.o
00:44:55.554 CC lib/bdev/part.o
00:44:55.554 CC lib/bdev/scsi_nvme.o
00:44:55.812 LIB libspdk_nvme.a
00:44:55.813 CC lib/event/log_rpc.o
00:44:55.813 CC lib/event/app.o
00:44:55.813 CC lib/event/app_rpc.o
00:44:55.813 CC lib/event/reactor.o
00:44:55.813 CC lib/event/scheduler_static.o
00:44:56.072 LIB libspdk_event.a
00:44:56.331 LIB libspdk_blob.a
00:44:56.589 LIB libspdk_bdev.a
00:44:56.589 CC lib/blobfs/blobfs.o
00:44:56.589 CC lib/blobfs/tree.o
00:44:56.589 CC lib/lvol/lvol.o
00:44:56.589 CC lib/scsi/dev.o
00:44:56.589 CC lib/scsi/lun.o
00:44:56.589 CC lib/scsi/scsi.o
00:44:56.589 CC lib/scsi/port.o
00:44:56.589 CC lib/nvmf/ctrlr.o
00:44:56.589 CC lib/nbd/nbd.o
00:44:56.589 CC lib/nvmf/ctrlr_discovery.o
00:44:56.863 CC lib/ftl/ftl_core.o
00:44:56.863 CC lib/scsi/scsi_bdev.o
00:44:56.863 CC lib/nbd/nbd_rpc.o
00:44:56.863 CC lib/scsi/scsi_pr.o
00:44:56.863 CC lib/scsi/scsi_rpc.o
00:44:56.863 CC lib/scsi/task.o
00:44:56.863 LIB libspdk_blobfs.a
00:44:56.863 CC lib/nvmf/ctrlr_bdev.o
00:44:56.863 CC lib/ftl/ftl_init.o
00:44:56.863 CC lib/ftl/ftl_layout.o
00:44:56.863 LIB libspdk_lvol.a
00:44:57.137 LIB libspdk_nbd.a
00:44:57.137 CC lib/ftl/ftl_debug.o
00:44:57.137 CC lib/ftl/ftl_io.o
00:44:57.137 CC lib/nvmf/subsystem.o
00:44:57.137 CC lib/nvmf/nvmf.o
00:44:57.137 CC lib/nvmf/nvmf_rpc.o
00:44:57.137 LIB libspdk_scsi.a
00:44:57.137 CC lib/nvmf/transport.o
00:44:57.137 CC lib/nvmf/tcp.o
00:44:57.137 CC lib/ftl/ftl_sb.o
00:44:57.137 CC lib/nvmf/stubs.o
00:44:57.137 CC lib/nvmf/mdns_server.o
00:44:57.395 CC lib/nvmf/rdma.o
00:44:57.395 CC lib/iscsi/conn.o
00:44:57.395 CC lib/nvmf/auth.o
00:44:57.395 CC lib/ftl/ftl_l2p.o
00:44:57.395 CC lib/iscsi/init_grp.o
00:44:57.395 CC lib/vhost/vhost.o
00:44:57.395 CC lib/vhost/vhost_rpc.o
00:44:57.395 CC lib/vhost/vhost_scsi.o
00:44:57.395 CC lib/vhost/vhost_blk.o
00:44:57.395 CC lib/ftl/ftl_l2p_flat.o
00:44:57.395 CC lib/iscsi/iscsi.o
00:44:57.653 CC lib/iscsi/md5.o
00:44:57.653 CC lib/iscsi/param.o
00:44:57.653 CC lib/ftl/ftl_nv_cache.o
00:44:57.653 CC lib/vhost/rte_vhost_user.o
00:44:57.653 CC lib/iscsi/portal_grp.o
00:44:57.912 CC lib/iscsi/tgt_node.o
00:44:57.912 CC lib/ftl/ftl_band.o
00:44:57.912 CC lib/ftl/ftl_band_ops.o
00:44:57.912 CC lib/ftl/ftl_writer.o
00:44:57.912 CC lib/iscsi/iscsi_subsystem.o
00:44:57.912 LIB libspdk_nvmf.a
00:44:57.912 CC lib/iscsi/iscsi_rpc.o
00:44:57.912 CC lib/iscsi/task.o
00:44:57.912 CC lib/ftl/ftl_rq.o
00:44:57.912 CC lib/ftl/ftl_reloc.o
00:44:57.912 CC lib/ftl/ftl_l2p_cache.o
00:44:57.912 CC lib/ftl/ftl_p2l.o
00:44:58.170 CC lib/ftl/mngt/ftl_mngt.o
00:44:58.170 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:44:58.170 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:44:58.170 CC lib/ftl/mngt/ftl_mngt_startup.o
00:44:58.170 CC lib/ftl/mngt/ftl_mngt_md.o
00:44:58.170 CC lib/ftl/mngt/ftl_mngt_misc.o
00:44:58.170 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:44:58.170 LIB libspdk_iscsi.a
00:44:58.171 LIB libspdk_vhost.a
00:44:58.171 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:44:58.171 CC lib/ftl/mngt/ftl_mngt_band.o
00:44:58.171 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:44:58.171 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:44:58.171 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:44:58.171 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:44:58.430 CC lib/ftl/utils/ftl_conf.o
00:44:58.430 CC lib/ftl/utils/ftl_md.o
00:44:58.430 CC lib/ftl/utils/ftl_mempool.o
00:44:58.430 CC lib/ftl/utils/ftl_bitmap.o
00:44:58.430 CC lib/ftl/utils/ftl_property.o
00:44:58.430 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:44:58.430 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:44:58.430 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:44:58.430 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:44:58.430 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:44:58.430 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:44:58.430 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:44:58.430 CC lib/ftl/upgrade/ftl_sb_v3.o
00:44:58.430 CC lib/ftl/upgrade/ftl_sb_v5.o
00:44:58.689 CC lib/ftl/nvc/ftl_nvc_dev.o
00:44:58.689 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:44:58.689 CC lib/ftl/base/ftl_base_dev.o
00:44:58.689 CC lib/ftl/base/ftl_base_bdev.o
00:44:58.689 LIB libspdk_ftl.a
00:44:59.258 CC module/env_dpdk/env_dpdk_rpc.o
00:44:59.258 CC module/accel/ioat/accel_ioat.o
00:44:59.258 CC module/accel/error/accel_error.o
00:44:59.258 CC module/scheduler/dynamic/scheduler_dynamic.o
00:44:59.258 CC module/keyring/file/keyring.o
00:44:59.258 CC module/blob/bdev/blob_bdev.o
00:44:59.258 CC module/accel/iaa/accel_iaa.o
00:44:59.258 CC module/accel/dsa/accel_dsa.o
00:44:59.258 CC module/sock/posix/posix.o
00:44:59.258 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:44:59.258 LIB libspdk_env_dpdk_rpc.a
00:44:59.258 CC module/accel/dsa/accel_dsa_rpc.o
00:44:59.258 CC module/accel/error/accel_error_rpc.o
00:44:59.258 LIB libspdk_scheduler_dynamic.a
00:44:59.258 CC module/accel/ioat/accel_ioat_rpc.o
00:44:59.258 CC module/keyring/file/keyring_rpc.o
00:44:59.258 CC module/accel/iaa/accel_iaa_rpc.o
00:44:59.258 LIB libspdk_scheduler_dpdk_governor.a
00:44:59.258 LIB libspdk_blob_bdev.a
00:44:59.258 LIB libspdk_accel_dsa.a
00:44:59.516 LIB libspdk_accel_error.a
00:44:59.516 LIB libspdk_accel_ioat.a
00:44:59.516 LIB libspdk_accel_iaa.a
00:44:59.516 LIB libspdk_keyring_file.a
00:44:59.516 CC module/scheduler/gscheduler/gscheduler.o
00:44:59.516 CC module/keyring/linux/keyring.o
00:44:59.516 CC module/bdev/delay/vbdev_delay.o
00:44:59.516 CC module/bdev/error/vbdev_error.o
00:44:59.516 LIB libspdk_scheduler_gscheduler.a
00:44:59.516 CC module/blobfs/bdev/blobfs_bdev.o
00:44:59.516 CC module/bdev/gpt/gpt.o
00:44:59.516 CC module/bdev/null/bdev_null.o
00:44:59.516 CC module/keyring/linux/keyring_rpc.o
00:44:59.516 LIB libspdk_sock_posix.a
00:44:59.516 CC module/bdev/lvol/vbdev_lvol.o
00:44:59.516 CC module/bdev/gpt/vbdev_gpt.o
00:44:59.516 CC module/bdev/malloc/bdev_malloc.o
00:44:59.516 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:44:59.773 LIB libspdk_keyring_linux.a
00:44:59.773 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:44:59.773 CC module/bdev/null/bdev_null_rpc.o
00:44:59.773 CC module/bdev/delay/vbdev_delay_rpc.o
00:44:59.773 CC module/bdev/error/vbdev_error_rpc.o
00:44:59.773 CC module/bdev/malloc/bdev_malloc_rpc.o
00:44:59.773 LIB libspdk_bdev_gpt.a
00:44:59.773 LIB libspdk_blobfs_bdev.a
00:44:59.773 LIB libspdk_bdev_lvol.a
00:44:59.773 LIB libspdk_bdev_null.a
00:44:59.773 LIB libspdk_bdev_delay.a
00:44:59.773 LIB libspdk_bdev_error.a
00:44:59.773 LIB libspdk_bdev_malloc.a
00:44:59.773 CC module/bdev/nvme/bdev_nvme.o
00:44:59.773 CC module/bdev/nvme/bdev_nvme_rpc.o
00:44:59.773 CC module/bdev/passthru/vbdev_passthru.o
00:45:00.031 CC module/bdev/raid/bdev_raid.o
00:45:00.031 CC module/bdev/zone_block/vbdev_zone_block.o
00:45:00.031 CC module/bdev/split/vbdev_split.o
00:45:00.031 CC module/bdev/aio/bdev_aio.o
00:45:00.031 CC module/bdev/virtio/bdev_virtio_scsi.o
00:45:00.031 CC module/bdev/ftl/bdev_ftl.o
00:45:00.031 CC module/bdev/iscsi/bdev_iscsi.o
00:45:00.031 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:45:00.031 CC module/bdev/split/vbdev_split_rpc.o
00:45:00.031 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:45:00.290 CC module/bdev/ftl/bdev_ftl_rpc.o
00:45:00.290 CC module/bdev/aio/bdev_aio_rpc.o
00:45:00.290 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:45:00.290 LIB libspdk_bdev_passthru.a
00:45:00.290 CC module/bdev/virtio/bdev_virtio_blk.o
00:45:00.290 CC module/bdev/nvme/nvme_rpc.o
00:45:00.290 LIB libspdk_bdev_split.a
00:45:00.290 LIB libspdk_bdev_zone_block.a
00:45:00.290 CC module/bdev/nvme/bdev_mdns_client.o
00:45:00.290 CC module/bdev/nvme/vbdev_opal.o
00:45:00.290 CC module/bdev/nvme/vbdev_opal_rpc.o
00:45:00.290 LIB libspdk_bdev_iscsi.a
00:45:00.290 CC module/bdev/raid/bdev_raid_rpc.o
00:45:00.290 LIB libspdk_bdev_aio.a
00:45:00.290 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:45:00.290 LIB libspdk_bdev_ftl.a
00:45:00.290 CC module/bdev/raid/bdev_raid_sb.o
00:45:00.290 CC module/bdev/raid/raid0.o
00:45:00.290 CC module/bdev/virtio/bdev_virtio_rpc.o
00:45:00.290 CC module/bdev/raid/raid1.o
00:45:00.290 CC module/bdev/raid/concat.o
00:45:00.548 CC module/bdev/raid/raid5f.o
00:45:00.548 LIB libspdk_bdev_virtio.a
00:45:00.548 LIB libspdk_bdev_nvme.a
00:45:00.548 LIB libspdk_bdev_raid.a
00:45:01.116 CC module/event/subsystems/vmd/vmd.o
00:45:01.116 CC module/event/subsystems/vmd/vmd_rpc.o
00:45:01.116 CC module/event/subsystems/iobuf/iobuf.o
00:45:01.116 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:45:01.116 CC module/event/subsystems/scheduler/scheduler.o
00:45:01.116 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:45:01.116 CC module/event/subsystems/keyring/keyring.o
00:45:01.116 CC module/event/subsystems/sock/sock.o
00:45:01.116 LIB libspdk_event_keyring.a
00:45:01.116 LIB libspdk_event_vmd.a
00:45:01.116 LIB libspdk_event_sock.a
00:45:01.116 LIB libspdk_event_scheduler.a
00:45:01.116 LIB libspdk_event_vhost_blk.a
00:45:01.116 LIB libspdk_event_iobuf.a
00:45:01.375 CC module/event/subsystems/accel/accel.o
00:45:01.634 LIB libspdk_event_accel.a
00:45:01.894 CC module/event/subsystems/bdev/bdev.o
00:45:02.154 LIB libspdk_event_bdev.a
00:45:02.415 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:45:02.415 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:45:02.415 CC module/event/subsystems/scsi/scsi.o
00:45:02.415 CC module/event/subsystems/nbd/nbd.o
00:45:02.674 LIB libspdk_event_nbd.a
00:45:02.674 LIB libspdk_event_scsi.a
00:45:02.674 LIB libspdk_event_nvmf.a
00:45:02.933 CC module/event/subsystems/iscsi/iscsi.o
00:45:02.933 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:45:02.933 LIB libspdk_event_iscsi.a
00:45:02.933 LIB libspdk_event_vhost_scsi.a
00:45:03.192 CXX app/trace/trace.o
00:45:03.452 CC examples/nvme/hello_world/hello_world.o
00:45:03.452 CC examples/ioat/perf/perf.o
00:45:03.452 CC examples/sock/hello_world/hello_sock.o
00:45:03.452 CC examples/accel/perf/accel_perf.o
00:45:03.452 CC test/accel/dif/dif.o
00:45:03.452 CC examples/vmd/lsvmd/lsvmd.o
00:45:03.452 CC examples/blob/hello_world/hello_blob.o
00:45:03.452 CC examples/nvmf/nvmf/nvmf.o
00:45:03.452 CC examples/bdev/hello_world/hello_bdev.o
00:45:03.711 LINK lsvmd
00:45:03.711 LINK ioat_perf
00:45:03.711 LINK hello_sock
00:45:03.711 LINK hello_world
00:45:03.711 LINK spdk_trace
00:45:03.711 LINK hello_blob
00:45:03.711 LINK hello_bdev
00:45:03.711 LINK accel_perf
00:45:03.711 LINK nvmf
00:45:03.711 LINK dif
00:45:11.824 CC examples/ioat/verify/verify.o
00:45:12.082 LINK verify
00:45:14.614 CC examples/vmd/led/led.o
00:45:14.614 CC app/trace_record/trace_record.o
00:45:15.181 LINK led
00:45:15.182 LINK spdk_trace_record
00:45:21.744 CC app/nvmf_tgt/nvmf_main.o
00:45:21.744 CC app/iscsi_tgt/iscsi_tgt.o
00:45:21.744 LINK nvmf_tgt
00:45:21.744 CC examples/nvme/reconnect/reconnect.o
00:45:22.002 LINK iscsi_tgt
00:45:23.377 LINK reconnect
00:45:23.945 CC examples/nvme/nvme_manage/nvme_manage.o
00:45:24.882 CC examples/bdev/bdevperf/bdevperf.o
00:45:25.450 LINK nvme_manage
00:45:26.828 LINK bdevperf
00:45:41.698 CC examples/blob/cli/blobcli.o
00:45:41.956 LINK blobcli
00:45:56.895 CC app/spdk_tgt/spdk_tgt.o
00:45:58.265 LINK spdk_tgt
00:46:00.165 CC examples/nvme/arbitration/arbitration.o
00:46:01.541 LINK arbitration
00:46:57.771 CC examples/nvme/hotplug/hotplug.o
00:46:57.771 LINK hotplug
00:46:57.771 CC examples/nvme/cmb_copy/cmb_copy.o
00:46:57.771 LINK cmb_copy
00:46:59.677 CC examples/nvme/abort/abort.o
00:47:01.055 CC test/app/bdev_svc/bdev_svc.o
00:47:01.315 LINK abort
00:47:02.252 LINK bdev_svc
00:47:28.804 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:47:28.804 LINK nvme_fuzz
00:47:29.738 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:47:33.926 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:47:34.184 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:47:34.442 LINK iscsi_fuzz
00:47:35.860 LINK vhost_fuzz
00:47:39.148 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:47:40.083 LINK pmr_persistence
00:47:48.206 CC examples/util/zipf/zipf.o
00:47:48.774 LINK zipf
00:47:50.681 CC examples/thread/thread/thread_ex.o
00:47:51.617 LINK thread
00:47:52.185 CC app/spdk_lspci/spdk_lspci.o
00:47:53.121 LINK spdk_lspci
00:47:53.691 CC app/spdk_nvme_perf/perf.o
00:47:55.070 CC app/spdk_nvme_identify/identify.o
00:47:55.639 LINK spdk_nvme_perf
00:47:57.019 CC examples/idxd/perf/perf.o
00:47:57.278 LINK spdk_nvme_identify
00:47:58.216 LINK idxd_perf
00:47:59.591 CC examples/interrupt_tgt/interrupt_tgt.o
00:48:00.158 LINK interrupt_tgt
00:48:01.094 CC test/app/histogram_perf/histogram_perf.o
00:48:02.031 LINK histogram_perf
00:48:05.322 CC app/spdk_nvme_discover/discovery_aer.o
00:48:06.258 LINK spdk_nvme_discover
00:48:12.828 CC app/spdk_top/spdk_top.o
00:48:14.777 LINK spdk_top
00:48:15.058 CC test/app/jsoncat/jsoncat.o
00:48:15.626 LINK jsoncat
00:48:18.162 CC app/vhost/vhost.o
00:48:19.538 LINK vhost
00:48:29.519 CC app/spdk_dd/spdk_dd.o
00:48:29.519 LINK spdk_dd
00:48:30.893 CC test/app/stub/stub.o
00:48:30.893 CC app/fio/nvme/fio_plugin.o
00:48:31.461 LINK stub
00:48:32.399 LINK spdk_nvme
00:48:34.305 CC app/fio/bdev/fio_plugin.o
00:48:35.683 CC test/bdev/bdevio/bdevio.o
00:48:36.252 LINK spdk_bdev
00:48:37.190 LINK bdevio
00:48:37.448 CC test/blobfs/mkfs/mkfs.o
00:48:38.381 LINK mkfs
00:48:48.354 TEST_HEADER include/spdk/config.h
00:48:48.354 CXX test/cpp_headers/accel.o
00:48:49.294 CXX test/cpp_headers/accel_module.o
00:48:50.231 CXX test/cpp_headers/assert.o
00:48:50.797 CXX test/cpp_headers/barrier.o
00:48:51.732 CXX test/cpp_headers/base64.o
00:48:52.666 CXX test/cpp_headers/bdev.o
00:48:54.043 CXX test/cpp_headers/bdev_module.o
00:48:54.981 CXX test/cpp_headers/bdev_zone.o
00:48:56.359 CXX test/cpp_headers/bit_array.o
00:48:56.942 CC test/dma/test_dma/test_dma.o
00:48:57.214 CXX test/cpp_headers/bit_pool.o
00:48:57.783 CXX test/cpp_headers/blob.o
00:48:58.722 LINK test_dma
00:48:58.722 CXX test/cpp_headers/blob_bdev.o
00:49:00.098 CXX test/cpp_headers/blobfs.o
00:49:01.035 CXX test/cpp_headers/blobfs_bdev.o
00:49:02.413 CXX test/cpp_headers/conf.o
00:49:02.981 CXX test/cpp_headers/config.o
00:49:03.239 CXX test/cpp_headers/cpuset.o
00:49:03.497 CXX test/cpp_headers/crc16.o
00:49:04.064 CXX test/cpp_headers/crc32.o
00:49:04.630 CXX test/cpp_headers/crc64.o
00:49:05.197 CXX test/cpp_headers/dif.o
00:49:05.764 CXX test/cpp_headers/dma.o
00:49:06.023 CXX test/cpp_headers/endian.o
00:49:06.589 CXX test/cpp_headers/env.o
00:49:06.848 CXX test/cpp_headers/env_dpdk.o
00:49:07.416 CXX test/cpp_headers/event.o
00:49:08.796 CXX test/cpp_headers/fd.o
00:49:09.364 CXX test/cpp_headers/fd_group.o
00:49:09.624 CC test/env/mem_callbacks/mem_callbacks.o
00:49:10.192 CXX test/cpp_headers/file.o
00:49:10.452 LINK mem_callbacks
00:49:11.020 CXX test/cpp_headers/ftl.o
00:49:11.279 CXX test/cpp_headers/gpt_spec.o
00:49:12.217 CXX test/cpp_headers/hexlify.o
00:49:12.476 CC test/env/vtophys/vtophys.o
00:49:13.043 LINK vtophys
00:49:13.043 CXX test/cpp_headers/histogram_data.o
00:49:13.979 CXX test/cpp_headers/idxd.o
00:49:15.359 CXX test/cpp_headers/idxd_spec.o
00:49:15.927 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:49:16.186 CXX test/cpp_headers/init.o
00:49:16.754 LINK env_dpdk_post_init
00:49:17.322 CXX test/cpp_headers/ioat.o
00:49:18.260 CXX test/cpp_headers/ioat_spec.o
00:49:19.640 CXX test/cpp_headers/iscsi_spec.o
00:49:20.208 CXX test/cpp_headers/json.o
00:49:21.144 CXX test/cpp_headers/jsonrpc.o
00:49:22.521 CXX test/cpp_headers/keyring.o
00:49:23.459 CXX test/cpp_headers/keyring_module.o
00:49:24.926 CXX test/cpp_headers/likely.o
00:49:25.862 CXX test/cpp_headers/log.o
00:49:27.237 CXX test/cpp_headers/lvol.o
00:49:27.583 CC test/env/memory/memory_ut.o
00:49:28.519 CXX test/cpp_headers/memory.o
00:49:29.453 CXX test/cpp_headers/mmio.o
00:49:30.387 CXX test/cpp_headers/nbd.o
00:49:30.644 LINK memory_ut
00:49:30.644 CXX test/cpp_headers/notify.o
00:49:32.018 CXX test/cpp_headers/nvme.o
00:49:33.395 CXX test/cpp_headers/nvme_intel.o
00:49:33.961 CC test/env/pci/pci_ut.o
00:49:33.961 CXX test/cpp_headers/nvme_ocssd.o
00:49:35.335 CXX test/cpp_headers/nvme_ocssd_spec.o
00:49:35.594 LINK pci_ut
00:49:36.538 CXX test/cpp_headers/nvme_spec.o
00:49:37.482 CXX test/cpp_headers/nvme_zns.o
00:49:38.858 CXX test/cpp_headers/nvmf.o
00:49:40.232 CXX test/cpp_headers/nvmf_cmd.o
00:49:41.604 CXX test/cpp_headers/nvmf_fc_spec.o
00:49:42.979 CC test/event/event_perf/event_perf.o
00:49:42.979 CXX test/cpp_headers/nvmf_spec.o
00:49:43.563 LINK event_perf
00:49:43.821 CXX test/cpp_headers/nvmf_transport.o
00:49:45.195 CXX test/cpp_headers/opal.o
00:49:45.762 CC test/event/reactor/reactor.o
00:49:46.331 CXX test/cpp_headers/opal_spec.o
00:49:46.898 LINK reactor
00:49:47.156 CXX test/cpp_headers/pci_ids.o
00:49:48.533 CXX test/cpp_headers/pipe.o
00:49:49.467 CC test/lvol/esnap/esnap.o
00:49:49.467 CXX test/cpp_headers/queue.o
00:49:49.725 CXX test/cpp_headers/reduce.o
00:49:51.100 CXX test/cpp_headers/rpc.o
00:49:52.474 CXX test/cpp_headers/scheduler.o
00:49:53.853 CXX test/cpp_headers/scsi.o
00:49:55.227 CXX test/cpp_headers/scsi_spec.o
00:49:56.602 CXX test/cpp_headers/sock.o
00:49:57.535 CXX test/cpp_headers/stdinc.o
00:49:58.912 CXX test/cpp_headers/string.o
00:49:59.849 CXX test/cpp_headers/thread.o
00:50:00.784 CXX test/cpp_headers/trace.o
00:50:01.718 CXX test/cpp_headers/trace_parser.o
00:50:01.975 CC test/event/reactor_perf/reactor_perf.o
00:50:02.907 LINK reactor_perf
00:50:02.907 CXX test/cpp_headers/tree.o
00:50:02.907 CXX test/cpp_headers/ublk.o
00:50:03.843 LINK esnap
00:50:03.843 CXX test/cpp_headers/util.o
00:50:04.779 CXX test/cpp_headers/uuid.o
00:50:05.716 CXX test/cpp_headers/version.o
00:50:05.716 CXX test/cpp_headers/vfio_user_pci.o
00:50:07.089 CXX test/cpp_headers/vfio_user_spec.o
00:50:07.656 CC test/event/app_repeat/app_repeat.o
00:50:07.915 CXX test/cpp_headers/vhost.o
00:50:08.173 LINK app_repeat
00:50:09.109 CXX test/cpp_headers/vmd.o
00:50:10.484 CXX test/cpp_headers/xor.o
00:50:11.423 CXX test/cpp_headers/zipf.o
00:50:13.952 CC test/nvme/aer/aer.o
00:50:15.329 LINK aer
00:50:17.860 CC test/nvme/reset/reset.o
00:50:18.430 LINK reset
00:50:18.998 CC test/event/scheduler/scheduler.o
00:50:19.932 LINK scheduler
00:50:21.311 CC test/nvme/sgl/sgl.o
00:50:21.964 LINK sgl
00:50:27.250 CC test/rpc_client/rpc_client_test.o
00:50:27.509 LINK rpc_client_test
00:50:27.767 CC test/thread/poller_perf/poller_perf.o
00:50:28.332 LINK poller_perf
00:50:29.269 CC test/thread/lock/spdk_lock.o
00:50:30.205 CC test/nvme/e2edp/nvme_dp.o
00:50:31.580 LINK nvme_dp
00:50:32.954 LINK spdk_lock
00:50:41.065 CC test/nvme/overhead/overhead.o
00:50:41.065 CC test/nvme/err_injection/err_injection.o
00:50:41.065 LINK overhead
00:50:41.323 LINK err_injection
00:50:43.224 CC test/nvme/startup/startup.o
00:50:43.224 CC test/nvme/reserve/reserve.o
00:50:43.482 LINK startup
00:50:43.482 CC test/nvme/simple_copy/simple_copy.o
00:50:43.741 LINK reserve
00:50:44.306 LINK simple_copy
00:50:45.692 CC test/nvme/connect_stress/connect_stress.o
00:50:45.954 CC test/nvme/boot_partition/boot_partition.o
00:50:46.519 LINK connect_stress
00:50:46.777 LINK boot_partition
00:50:49.308 CC test/nvme/compliance/nvme_compliance.o
00:50:50.683 LINK nvme_compliance
00:51:00.648 CC test/nvme/fused_ordering/fused_ordering.o
00:51:00.648 LINK fused_ordering
00:51:12.916 CC test/nvme/doorbell_aers/doorbell_aers.o
00:51:12.916 LINK doorbell_aers
00:51:13.482 CC test/nvme/fdp/fdp.o
00:51:14.416 LINK fdp
00:51:14.416 CC test/nvme/cuse/cuse.o
00:51:16.970 LINK cuse
00:51:29.201 08:02:02 -- spdk/autopackage.sh@44 -- $ make -j10 clean
make[1]: Nothing to be done for 'clean'.
00:51:35.770 08:02:08 -- spdk/autopackage.sh@46 -- $ timing_exit build_release
00:51:35.770 08:02:08 -- common/autotest_common.sh@726 -- $ xtrace_disable
00:51:35.770 08:02:08 -- common/autotest_common.sh@10 -- $ set +x
00:51:35.770 08:02:08 -- spdk/autopackage.sh@48 -- $ timing_finish
00:51:35.770 08:02:08 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:51:35.770 08:02:08 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:51:35.770 08:02:08 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:51:35.770 08:02:08 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:51:35.770 08:02:08 -- pm/common@29 -- $ signal_monitor_resources TERM
00:51:35.770 08:02:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:51:35.770 08:02:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:51:35.770 08:02:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:51:35.770 08:02:08 -- pm/common@44 -- $ pid=184124
00:51:35.770 08:02:08 -- pm/common@50 -- $ kill -TERM 184124
00:51:35.770 08:02:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:51:35.770 08:02:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:51:35.770 08:02:08 -- pm/common@44 -- $ pid=184126
00:51:35.770 08:02:08 -- pm/common@50 -- $ kill -TERM 184126
+ [[ -n 2839 ]]
+ sudo kill 2839
00:51:36.350 [Pipeline] }
00:51:36.370 [Pipeline] // timeout
00:51:36.376 [Pipeline] }
00:51:36.397 [Pipeline] // stage
00:51:36.405 [Pipeline] }
00:51:36.426 [Pipeline] // catchError
00:51:36.439 [Pipeline] stage
00:51:36.441 [Pipeline] { (Stop VM)
00:51:36.459 [Pipeline] sh
00:51:36.747 + vagrant halt
00:51:40.942 ==> default: Halting domain...
00:51:50.938 [Pipeline] sh
00:51:51.218 + vagrant destroy -f
00:51:53.747 ==> default: Removing domain...
00:51:54.324 [Pipeline] sh
00:51:54.606 + mv output /var/jenkins/workspace/ubuntu22-vg-autotest/output
00:51:54.615 [Pipeline] }
00:51:54.633 [Pipeline] // stage
00:51:54.638 [Pipeline] }
00:51:54.654 [Pipeline] // dir
00:51:54.661 [Pipeline] }
00:51:54.679 [Pipeline] // wrap
00:51:54.686 [Pipeline] }
00:51:54.702 [Pipeline] // catchError
00:51:54.712 [Pipeline] stage
00:51:54.714 [Pipeline] { (Epilogue)
00:51:54.729 [Pipeline] sh
00:51:55.014 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:52:13.117 [Pipeline] catchError
00:52:13.119 [Pipeline] {
00:52:13.137 [Pipeline] sh
00:52:13.420 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:52:13.680 Artifacts sizes are good
00:52:13.731 [Pipeline] }
00:52:13.753 [Pipeline] // catchError
00:52:13.764 [Pipeline] archiveArtifacts
00:52:13.772 Archiving artifacts
00:52:14.112 [Pipeline] cleanWs
00:52:14.129 [WS-CLEANUP] Deleting project workspace...
00:52:14.129 [WS-CLEANUP] Deferred wipeout is used...
00:52:14.155 [WS-CLEANUP] done
00:52:14.157 [Pipeline] }
00:52:14.175 [Pipeline] // stage
00:52:14.181 [Pipeline] }
00:52:14.199 [Pipeline] // node
00:52:14.205 [Pipeline] End of Pipeline
00:52:14.242 Finished: SUCCESS